The Democratization of Orbital Surveillance: Comprehensive Analysis of Commercial Security Sensors for Space Situational Awareness¶
1. Introduction: The Crisis of Orbital Congestion¶
The near-Earth space environment is currently undergoing the most radical transformation since the launch of Sputnik 1 in 1957. For the majority of the space age, the population of Resident Space Objects (RSOs) grew at a relatively linear rate, governed by the launch cadences of state-sponsored programs. However, the advent of "New Space"—characterized by reusable launch vehicles, miniaturized satellite buses (CubeSats), and the commercial imperative for global connectivity—has shifted this growth trajectory into an exponential curve. In 2010, the number of active satellites in orbit was fewer than 1,000. By late 2023, this figure had surpassed 9,000, driven largely by the deployment of mega-constellations such as SpaceX’s Starlink and OneWeb.1 Filings with the International Telecommunication Union (ITU) suggest a potential population of over 500,000 satellites in Low Earth Orbit (LEO) within the coming decades.1
This densification of the orbital environment precipitates a severe crisis in Space Situational Awareness (SSA). The fundamental requirement of SSA is to maintain "custody"—continuous, precise tracking—of all anthropogenic objects to predict and prevent collisions. The "Kessler Syndrome," a theoretical scenario where the density of objects in LEO is high enough that collisions between objects could cause a cascade in which each collision generates debris that increases the likelihood of further collisions, has transitioned from a theoretical abstraction to a pressing operational risk.2 Current Space Surveillance Networks (SSN), primarily maintained by the U.S. Space Force and its allies, track approximately 30,000 objects larger than 10 cm in diameter.3 However, the population of "lethal non-trackable" debris—objects between 1 cm and 10 cm—is estimated to be in the hundreds of thousands. These fragments possess sufficient kinetic energy to terminate a mission or shatter a satellite, yet they remain largely invisible to legacy tracking architectures.4
The existing surveillance infrastructure relies heavily on two primary modalities: ground-based radar and large-aperture optical telescopes. Phased-array radars, such as the U.S. Space Fence, are capable of tracking objects in LEO irrespective of lighting conditions or weather.5 However, radar systems suffer from the inverse-fourth-power law ($P_r \propto P_t / R^4$), meaning the power required to detect small objects at range scales aggressively, leading to immense capital and operational costs. Conversely, optical telescopes are more sensitive and cost-effective but have traditionally been restricted to tracking objects in Geosynchronous Earth Orbit (GEO) due to their narrow Fields of View (FOV) and the rapid angular motion of LEO targets.6
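The severity of the $R^4$ penalty is easy to quantify. As a minimal sketch (illustrative numbers only, not drawn from any specific radar system), the transmit power required to hold echo strength constant scales with the fourth power of range:

```python
def required_transmit_power(p_ref, r_ref, r_new):
    """Transmit power needed at range r_new to match the echo obtained
    with power p_ref at range r_ref, given P_r = const * P_t / R**4."""
    return p_ref * (r_new / r_ref) ** 4

# Relative power needed to keep detection performance as range grows:
print(required_transmit_power(1.0, 1000.0, 2000.0))  # -> 16.0 (range doubled)
print(required_transmit_power(1.0, 1000.0, 4000.0))  # -> 256.0 (range quadrupled)
```

Because detectable echo power also falls with the target's radar cross-section, pushing a radar's catalog floor from 10 cm objects toward 1 cm objects compounds this cost, which is the economic opening for cheap optical sensors.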
A solution to this surveillance gap has emerged from an unlikely sector: the terrestrial security and automotive industries. The relentless commercial demand for high-performance night-vision cameras has driven the development of Back-Side Illuminated (BSI) Complementary Metal-Oxide-Semiconductor (CMOS) sensors. These sensors, mass-produced for surveillance cameras and dashcams, exhibit read noise levels and quantum efficiencies that rival scientific-grade instrumentation at a fraction of the cost.7 This report provides an exhaustive technical analysis of the paradigm shift towards "Small Glass" SSA—the use of distributed networks of commercial off-the-shelf (COTS) security sensors to provide high-cadence, wide-field surveillance of the LEO environment.
2. Physics and Architecture of Modern Security Sensors¶
The viability of using $20 security sensors to track satellites moving at 7.8 km/s relies entirely on the underlying physics of modern semiconductor fabrication. The transition from Charge-Coupled Devices (CCDs) to CMOS technology, specifically the architecture known as "STARVIS," represents a quantum leap in low-light imaging capabilities.
2.1 The Transition from CCD to Scientific CMOS¶
Historically, CCDs were the gold standard for astrometry and scientific imaging. They offered high Quantum Efficiency (QE), high fill factors (the percentage of the pixel area sensitive to light), and global shutters. However, CCDs require high-voltage clock signals, consume significant power, and read out pixels serially, limiting their frame rates.7 In the context of LEO surveillance, where objects cross the sky in minutes, the slow readout of CCDs results in significant dead time, reducing the probability of detection.
CMOS technology initially lagged behind CCDs in sensitivity due to the active circuitry required within each pixel, which reduced the fill factor. However, CMOS sensors allow for parallel readout architectures, where each column of pixels has its own Analog-to-Digital Converter (ADC).8 This architecture enables extremely high frame rates and low power consumption—critical attributes for remote, autonomous sensor nodes. The breakthrough that allowed CMOS to surpass CCDs in sensitivity was the development of Back-Side Illumination (BSI).
2.1.1 Back-Side Illumination (BSI) Mechanics¶
In traditional Front-Side Illuminated (FSI) sensors, the metal wiring layer that carries signals from the pixel sits on top of the photosensitive silicon substrate. Incoming photons must pass through this metal mesh before reaching the photodiode. This structure reflects or absorbs a significant portion of incident light, particularly at oblique angles, limiting the QE to approximately 50-60%.8
BSI inverts this architecture. During fabrication, the silicon wafer is thinned to a few microns, and the sensor is mounted "upside down," with the wiring layer behind the photodiode. Light enters the silicon directly without obstruction. This innovation boosts the peak QE to over 80-95%, maximizing the conversion of photons to electrons.9 For SSA applications, where the target signal is a faint reflection of sunlight against the background of the night sky, every photon counts.
2.2 Sony STARVIS: The Engine of Low-Cost SSA¶
The primary enabler of amateur and low-cost professional SSA is Sony’s STARVIS technology. Marketed explicitly for security cameras requiring visibility in "starlight" (hence the name), STARVIS sensors utilize BSI CMOS architecture to achieve extreme sensitivity.10
2.2.1 Sensor Specifications and Performance¶
Several specific sensor models have been identified in the literature as optimal for SSA applications due to their balance of sensitivity, pixel size, and cost.
| Feature | Sony IMX291 | Sony IMX307 | Sony IMX462 | Sony IMX455 |
| --- | --- | --- | --- | --- |
| Market Segment | Security / Dashcam | Security (Cost-Optimized) | Security / Industrial | Scientific / Prosumer |
| Optical Format | 1/2.8" | 1/2.8" | 1/2.8" | 35mm (Full Frame) |
| Resolution | 2.1 MP (1936x1096) | 2.1 MP (1920x1080) | 2.1 MP (1936x1096) | 61 MP (9576x6388) |
| Pixel Size | 2.9 $\mu$m | 2.9 $\mu$m | 2.9 $\mu$m | 3.76 $\mu$m |
| Peak QE | ~80% (Visible) | ~80% (Visible) | >80% (High NIR) | >85% |
| Read Noise (HCG) | ~1.0 e- RMS | ~1.0 e- RMS | 0.5 - 1.0 e- RMS | 1.5 - 3.0 e- RMS |
| Shutter | Rolling | Rolling | Rolling | Rolling |
| Max Frame Rate | 120 fps | 60 fps | 120 fps | ~10 fps |

Table 1: Comparative technical specifications of relevant Sony sensors. Data synthesized from ref. 11.
- IMX291 / IMX307: These sensors are ubiquitous in the Global Meteor Network (GMN). They feature a 2.9 $\mu$m pixel pitch, which provides a good balance between resolution and light-gathering area. The IMX307 is a cost-optimized version of the IMX291, offering similar starlight sensitivity at a lower price point, facilitating mass deployment.15
- IMX462: This sensor represents a newer generation of STARVIS (often referred to as STARVIS 2). It is characterized by extended sensitivity in the Near-Infrared (NIR) spectrum (800-1000 nm). Satellites often have high albedo in the NIR range, and the atmosphere is more transparent at NIR wavelengths, reducing extinction and the effects of light pollution. The IMX462’s ability to see "through" some atmospheric haze makes it particularly valuable for urban SSA nodes.17
- IMX455: While significantly more expensive than the 1/2.8" sensors, the full-frame IMX455 offers a massive 61-megapixel resolution with read noise comparable to scientific cameras (~1.5 e-). It is used in higher-end wide-field surveys where maximizing étendue (the product of collecting area and field of view) is critical.13
2.3 Read Noise: The Critical Metric¶
In high-cadence imaging (25+ frames per second), the exposure time is limited to tens of milliseconds. In this regime, the background sky signal may be low, making "read noise" the dominant noise source limiting the Signal-to-Noise Ratio (SNR). Read noise is the electronic noise generated during the conversion of the charge in the pixel to a digital voltage value.
Scientific CMOS (sCMOS) cameras typically boast read noise values of 1.0 to 1.5 electrons (e-) RMS.18 Remarkably, the mass-produced IMX291 and IMX462 sensors achieve read noise levels of approximately 0.6 to 1.0 e- in High Conversion Gain (HCG) mode.12 This "sub-electron" or near-electron read noise floor allows the sensor to detect individual photons with high probability, a capability that was previously the exclusive domain of Electron Multiplying CCDs (EMCCDs) costing tens of thousands of dollars.20 This democratization of low-noise performance is the single most important factor enabling COTS-based SSA.
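The impact of the read-noise floor at video cadence can be seen with the standard noise model, in which signal shot noise, sky-background shot noise, and read noise add in quadrature. The electron counts below are illustrative assumptions for a faint target in a short exposure, not measured values:

```python
import math

def snr(signal_e, background_e, read_noise_e):
    """Per-exposure point-source SNR: signal over the quadrature sum of
    signal shot noise, sky background shot noise, and read noise."""
    return signal_e / math.sqrt(signal_e + background_e + read_noise_e ** 2)

# A faint satellite in a ~40 ms exposure: ~25 signal e-, ~4 sky e- (assumed).
print(round(snr(25, 4, 1.0), 2))  # near-electron read noise (STARVIS HCG mode)
print(round(snr(25, 4, 5.0), 2))  # an older ~5 e- sensor loses roughly 25% of SNR
```

In the sky-limited regime of long exposures, read noise is negligible; it is precisely the short-exposure, low-background regime of high-cadence LEO tracking where the near-electron floor of these COTS sensors pays off.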
2.4 The Rolling Shutter Challenge and Solution¶
A significant technical challenge with these sensors is their reliance on "rolling shutter" readout. Unlike a global shutter, which exposes all pixels simultaneously, a rolling shutter exposes the sensor row by row. For a fast-moving object like a LEO satellite, the object moves significantly during the time it takes to read out the sensor (typically 15-30ms). This results in geometric distortion: a satellite traveling parallel to the readout direction will appear elongated or compressed, while one traveling perpendicular will appear slanted (the "Jello effect").21
In precise astrometry, this distortion translates to position errors. However, research by the GMN and Western Meteor Physics Group has demonstrated that this can be calibrated out. By precisely modeling the time delay between rows (typically in the microsecond range), algorithms can correct the centroid position based on the row index. This software-based correction allows rolling shutter sensors to achieve astrometric precision comparable to global shutter sensors, provided the timing of the readout is stable and known.22
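The per-row timing correction described above can be sketched in a few lines, assuming a hypothetical inter-row delay; the real value must be measured for the specific sensor and readout mode:

```python
def corrected_timestamp(frame_start_s, row, row_delay_us):
    """Exposure start time of a given sensor row under rolling shutter:
    row 0 is read first; each later row starts row_delay_us later."""
    return frame_start_s + row * row_delay_us * 1e-6

# Hypothetical mode: 1080 rows read out in ~18.8 ms, i.e. ~17.4 us per row.
t0 = 100.0                                  # frame timestamp, seconds
t_mid = corrected_timestamp(t0, 540, 17.4)  # streak centroid lands on row 540
print(round(t_mid - t0, 6))                 # ~9.4 ms offset relative to row 0
```

For a LEO object moving at ~1 degree per second of apparent motion, a ~9 ms timing error corresponds to tens of arcseconds of along-track position error, which is why this correction is essential for astrometric use of rolling-shutter sensors.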
3. Distributed Network Architectures: The "Small Glass" Paradigm¶
The "Small Glass" paradigm suggests that a large number of small, inexpensive sensors can outperform a single large, expensive telescope in terms of coverage and revisit rate. This distributed approach provides resilience, scalability, and the ability to maintain continuous custody of objects.
3.1 The Global Meteor Network (GMN)¶
The Global Meteor Network serves as the foundational proof-of-concept for distributed COTS surveillance. Originally established to track meteoroids and determine their heliocentric orbits, the GMN has deployed over 1,300 cameras across 42 countries.22
3.1.1 Hardware Configuration¶
A standard GMN station is a study in frugality and efficiency. It typically consists of:
- Sensor: A Sony IMX291 or IMX307 IP camera module. These are often purchased as bare boards or simple "bullet" cameras for under $50. The infrared cut filter is usually removed to maximize sensitivity.17
- Optics: A fast, fixed-focal-length lens, typically 3.6mm to 8mm with an aperture of f/1.0 or f/1.2. This fast aperture is crucial for collecting sufficient photons in short exposure times.
- Processing: A Raspberry Pi (RPi) 4 single-board computer. The RPi handles video ingestion, compression, and initial event detection.2
- Timing: To achieve scientific utility, precise timing is required. GMN stations utilize Network Time Protocol (NTP) or attached GPS modules to timestamp frames with millisecond accuracy.23
3.1.2 Dual-Use Capability¶
While designed for meteors, the GMN hardware is inherently capable of detecting satellites. Meteors are transient, plasma-generating events, while satellites are persistent, solar-reflecting objects. The difference in detection logic lies primarily in the apparent angular velocity and track duration. By adjusting the detection thresholds in the "RMS" (Raspberry Pi Meteor Station) software, the network can be tuned to log satellite passes. Research has shown that GMN cameras routinely capture LEO satellites, and with proper calibration, these detections can be used for orbit determination.22
3.2 Project Luciole: Optimized LEO Surveillance¶
Project Luciole represents the evolution of the GMN architecture specifically for SSA. Recognizing that meteor cameras are not optimized for orbital tracking (e.g., they point at the zenith rather than creating a fence), Luciole adapts the hardware for the specific kinematics of LEO objects.
3.2.1 The "Fly's Eye" Configuration¶
To overcome the limited field of view of a single sensor, Luciole employs a "fly's eye" or compound eye configuration. A prototype system described in recent AMOS (Advanced Maui Optical and Space Surveillance Technologies) conference papers consists of 14 cameras mounted on a single rigid structure.23
- Wide-Field Layer: 8 cameras with 8mm lenses provide a broad situational awareness layer, covering approximately 100 square degrees each. This creates a "viewing bubble" that covers the entire sky above 30 degrees elevation.
- Narrow-Field Layer: 6 cameras with 25mm lenses act as a "fence." These cameras have a smaller FOV ($17^\circ \times 10^\circ$) but higher spatial resolution (plate scale ~0.5 arcmin/px). This layer provides higher astrometric precision for objects that pass through it.
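The plate scales for the two layers follow from the small-angle relation between pixel pitch and focal length. A quick sketch, assuming the nominal 2.9 µm STARVIS pixel:

```python
def plate_scale_arcsec(pixel_um, focal_mm):
    """Small-angle plate scale in arcsec/pixel: 206265 * pixel / focal."""
    return 206265.0 * (pixel_um * 1e-6) / (focal_mm * 1e-3)

# Nominal 2.9 um STARVIS pixels behind the two lens choices:
print(round(plate_scale_arcsec(2.9, 8.0), 1))   # -> 74.8 arcsec/px (wide layer)
print(round(plate_scale_arcsec(2.9, 25.0), 1))  # -> 23.9 arcsec/px (narrow layer)
```

The 25 mm value (~24 arcsec, i.e. roughly 0.4 arcmin/px) is the same order as the quoted narrow-field plate scale; exact figures depend on the actual sensor format and lens used in the fielded system.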
3.2.2 Performance Metrics¶
The performance of the Luciole system is disruptive.
- Sensitivity: The system has a limiting magnitude of approximately +10 to +11. This sensitivity is sufficient to detect debris objects roughly 30 cm in size at typical LEO altitudes (400-1000 km).23
- Cadence: Operating at 25 frames per second, the system generates high-temporal-resolution data. This allows for the extraction of detailed light curves, which can reveal the tumble rate and stability of a satellite—a metric often invisible to radar.
- Capacity: A single site can generate over 1,500 unique satellite detections per night, comprising nearly 1 million individual metric measurements.23
- Accuracy: Post-processing algorithms, including "track-and-stack" (discussed in Section 5), allow the system to achieve astrometric accuracy of roughly 5 arcseconds. While this is less precise than a large 1-meter telescope (which might achieve 0.5 arcseconds), the massive volume of data compensates for individual measurement noise, allowing for accurate orbital fitting.23
3.3 Stratospheric Deployment: RSONAR¶
The atmosphere is the primary enemy of optical astronomy. Clouds block observations, and turbulence ("seeing") blurs images, reducing astrometric precision. The Resident Space Object Near-space Astrometric Research (RSONAR) project bypasses these limitations by lifting COTS sensors into the stratosphere using high-altitude balloons.24
Using commercial cameras (such as PCO and IDS models) and FPGA-based processing boards (like the Xilinx PYNQ), RSONAR has demonstrated that standard industrial hardware can survive the near-space environment (low pressure, cosmic radiation, thermal cycling). From this vantage point, the sky background is darker, and atmospheric extinction is negligible, allowing small sensors to detect fainter objects than is possible from the ground. RSONAR payloads have successfully tracked over 500 RSOs during test flights, validating the concept of a sub-orbital SSA layer.2
4. Amateur and Citizen Science Contributions: "Station 3"¶
The democratization of SSA extends beyond university research projects to the amateur astronomy community. The concept of "Station 3"—historically referring to specific satellite tracking or laser ranging stations—has been adopted by amateur networks to describe advanced tracking setups.25
4.1 Satellite Laser Ranging (SLR) and Optical Tracking¶
While Satellite Laser Ranging (SLR) is typically the domain of government agencies due to the need for high-power lasers and precise timing, the "Station 3" legacy demonstrates that transportable and smaller-scale systems can contribute valuable data. Amateur observers, using high-end commercial mounts (e.g., from Astro-Physics or Software Bisque) and large-aperture telescopes (11-14 inch Schmidt-Cassegrains), form a distributed network capable of high-precision measurements.
4.2 Integration with Radio Networks¶
Groups like the SatNOGS community have pioneered the crowdsourcing of satellite radio telemetry. There is a growing convergence where optical data from amateur "Station 3" setups is fused with radio Doppler data to provide a comprehensive state vector for satellites. This is particularly valuable for identifying "zombie" satellites—objects that are tumbling or transmitting intermittently.26
5. Algorithmic Frameworks: Processing the Data Deluge¶
The shift to "Small Glass" creates a "Big Data" problem. A single GMN station recording at 25 fps creates terabytes of raw video data. Transmitting this to a central server is bandwidth-prohibitive. Therefore, efficient edge computing and advanced algorithms are required to extract tracklets (timestamped position vectors) locally.
5.1 Streak Detection Algorithms¶
In a fixed-camera video feed, stars appear to rotate slowly due to Earth's rotation, while satellites appear as rapidly moving streaks. Detecting these streaks against a cluttered background is the primary image processing challenge.
5.1.1 The Hough Transform¶
The classical method for line detection is the Hough Transform. This algorithm transforms points in the image space ($x, y$) into a parameter space ($\rho, \theta$), where $\rho$ is the distance from the origin and $\theta$ is the angle. Collinear points in the image space (a streak) create a peak in the parameter space.27
While effective for bright, distinct streaks, the Hough Transform is computationally intensive ($O(N^2)$ complexity) and struggles with faint streaks that are submerged in noise or broken by inter-pixel dead zones.
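A minimal, self-contained illustration of the Hough voting scheme (a didactic sketch, not any production pipeline's implementation): every bright pixel votes for all lines $x\cos\theta + y\sin\theta = \rho$ passing through it, and a genuine streak produces a sharp peak in the accumulator.

```python
import numpy as np

def hough_peak(binary_img, n_theta=180):
    """Minimal Hough transform: each bright pixel votes in (rho, theta)
    space; the strongest line is the accumulator's global maximum."""
    h, w = binary_img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(binary_img)
    for j, theta in enumerate(thetas):
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (rhos, j), 1)
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return r_idx - diag, thetas[t_idx]

# Synthetic streak: a horizontal line of bright pixels at row y = 20.
img = np.zeros((64, 64), dtype=np.uint8)
img[20, 10:50] = 1
rho, theta = hough_peak(img)
print(int(rho), round(float(np.degrees(theta))))  # -> 20 90
```

The nested loop over angles makes the quadratic cost mentioned above concrete: the work grows with (bright pixels) × (angle bins), which is why faint, noise-dominated frames quickly overwhelm a naive Hough approach.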
5.1.2 Matched Filtering and "Track and Stack"¶
To detect fainter objects (down to magnitude +11), researchers employ "matched filtering." This involves convolving the image with a set of synthetic streak templates of varying velocities and angles. When a template matches a real satellite streak, the signal integrates constructively, boosting the SNR.29
Project Luciole utilizes a variant called "track-and-stack." The algorithm hypothesizes a series of velocity vectors for potential objects. It then shifts consecutive image frames according to these vectors and stacks them. If the shift vector matches a real object's velocity, the object's photons pile up in the same pixels, while the background noise averages out. This technique effectively increases the exposure time without trailing the object, significantly improving sensitivity.23
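The shift-and-add idea can be sketched in a few lines (a toy model, not Luciole's actual code): frames are displaced along a hypothesized per-frame velocity and summed, so a matching object's flux accumulates in one pixel while the noise averages out.

```python
import numpy as np

def track_and_stack(frames, vx, vy):
    """Shift frame k back by k*(vx, vy) pixels and sum: photons from an
    object moving at exactly that velocity pile up in a single pixel."""
    stacked = np.zeros_like(frames[0], dtype=float)
    for k, frame in enumerate(frames):
        stacked += np.roll(frame, shift=(-k * vy, -k * vx), axis=(0, 1))
    return stacked

# Toy data: an object drifting 2 px/frame in x, buried in unit-level noise.
rng = np.random.default_rng(0)
frames = []
for k in range(8):
    f = rng.random((32, 32))   # background noise in [0, 1)
    f[10, 5 + 2 * k] += 1.0    # faint moving object
    frames.append(f)

stacked = track_and_stack(frames, vx=2, vy=0)
peak = np.unravel_index(np.argmax(stacked), stacked.shape)
print(int(peak[0]), int(peak[1]))  # -> 10 5: the velocity hypothesis matched
```

In a real pipeline this is repeated over a grid of velocity hypotheses, with the cost offset by the fact that LEO angular rates are bounded and predictable for a given pointing geometry.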
5.2 Astrometric Calibration and Plate Solving¶
Once a streak is detected, its pixel coordinates must be converted to celestial coordinates (Right Ascension and Declination). Security camera lenses, typically wide-angle (8mm) or fisheye, introduce massive radial distortions. A straight line in the sky appears curved on the sensor.
The processing pipeline uses "plate solving" techniques. The software identifies known stars in the image by comparing their relative positions to a star catalog (like Gaia or Tycho-2). It then generates a distortion map—a high-order polynomial function that maps $(x, y)$ pixels to sky coordinates.
Research by Vida et al. (GMN) has shown that with rigorous calibration, including corrections for atmospheric refraction and higher-order lens distortion, these wide-field systems can achieve residuals of 0.3 to 0.5 arcminutes (20-30 arcseconds) across the entire field. Narrow-field systems (25mm lenses) can achieve 4-5 arcseconds precision.22 This is critical: without this calibration, the data would be useless for orbit determination.
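The distortion-map step can be sketched as an ordinary least-squares fit of pixel-coordinate monomials to catalog positions. This is a simplified stand-in for the pipeline's actual astrometric model, demonstrated on synthetic data with a cubic radial distortion term:

```python
import numpy as np

def fit_plate_model(px, py, ra, dec, order=3):
    """Fit one polynomial per sky axis: design matrix of monomials
    px**i * py**j with i + j <= order, solved by least squares."""
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([px**i * py**j for i, j in terms])
    coef_ra = np.linalg.lstsq(A, ra, rcond=None)[0]
    coef_dec = np.linalg.lstsq(A, dec, rcond=None)[0]
    return terms, coef_ra, coef_dec

def apply_model(terms, coef, px, py):
    A = np.column_stack([px**i * py**j for i, j in terms])
    return A @ coef

# Synthetic "catalog": a linear plate solution plus mild cubic distortion.
rng = np.random.default_rng(1)
px, py = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
r2 = px**2 + py**2
ra_true = 10.0 + 0.5 * px + 0.02 * px * r2
dec_true = 40.0 + 0.5 * py + 0.02 * py * r2

terms, cra, cdec = fit_plate_model(px, py, ra_true, dec_true)
resid = apply_model(terms, cra, px, py) - ra_true
print(bool(np.abs(resid).max() < 1e-8))  # -> True: cubic terms absorb the distortion
```

Real wide-angle lenses need higher polynomial orders plus refraction terms, and star centroids carry measurement noise, which is why the quoted residuals are tens of arcseconds rather than machine precision.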
5.3 Deep Learning and Neural Networks¶
To reduce false positives (planes, birds, clouds), modern pipelines integrate Deep Learning models. Convolutional Neural Networks (CNNs), such as YOLO (You Only Look Once), are trained on datasets of annotated satellite streaks. These models can classify objects in real-time on the edge device (e.g., the Raspberry Pi or an NVIDIA Jetson Nano), filtering out non-orbital traffic before it reaches the astrometric solver.27
6. The Frontier: Event-Based Space Situational Awareness (EBSSA)¶
While frame-based CMOS sensors are the current standard, a disruptive technology known as "Event-Based" or "Neuromorphic" vision is emerging as a potential game-changer for SSA.6
6.1 Principles of Neuromorphic Vision¶
Standard cameras capture frames at fixed intervals (e.g., every 40ms for 25fps). This creates redundant data (the static background) and imposes a "temporal resolution" limit. Event-based cameras (such as the Prophesee or IniVation sensors) do not capture frames. Instead, each pixel operates independently and asynchronously: a pixel only sends data when it detects a change in log intensity (brightness).
The output is not a video, but a stream of "events" ($x, y, t, p$), where $t$ is the timestamp (microsecond resolution) and $p$ is the polarity of the change (brighter or darker).
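The per-pixel contrast rule can be mimicked with a toy frame-differencing sketch (real sensors implement this asynchronously in analog hardware; the names, threshold, and data shapes here are illustrative):

```python
import math
from dataclasses import dataclass

@dataclass
class Event:
    x: int
    y: int
    t: float  # timestamp, seconds (microsecond resolution on real sensors)
    p: int    # polarity: +1 brighter, -1 darker

def events_between_frames(prev, curr, t, threshold=0.3):
    """Toy event generator: emit an event wherever the log intensity
    changes by more than `threshold` between two intensity maps."""
    events = []
    for y, row in enumerate(curr):
        for x, v in enumerate(row):
            dlog = math.log(v + 1e-6) - math.log(prev[y][x] + 1e-6)
            if abs(dlog) > threshold:
                events.append(Event(x, y, t, 1 if dlog > 0 else -1))
    return events

# Static sky plus one moving satellite pixel: only the mover emits data.
prev = [[0.1] * 8 for _ in range(8)]
curr = [row[:] for row in prev]
curr[3][4] = 0.9  # satellite brightens pixel (x=4, y=3)
evts = events_between_frames(prev, curr, t=0.001)
print(len(evts), evts[0].x, evts[0].y, evts[0].p)  # -> 1 4 3 1
```

Note the asymmetry with frame-based video: the unchanged 63 pixels contribute nothing to the output stream, which is the source of the data-efficiency advantage discussed below.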
6.2 Advantages for SSA¶
- High Dynamic Range (HDR): Event sensors have dynamic ranges exceeding 120 dB, allowing them to see faint satellites alongside bright stars or the moon without blooming.6
- Temporal Resolution: Because pixels trigger instantly upon a change, the "frame rate" is effectively in the kilohertz or megahertz range. This eliminates motion blur for fast-moving LEO objects and allows for ultra-precise timing of tumbling rates.32
- Data Efficiency: Since the dark sky background does not change, the sensor generates zero data for the empty sky. It only reports the moving satellite. This inherent data compression reduces bandwidth/storage requirements by orders of magnitude compared to frame-based video.6
Research indicates that EBSSA sensors can track objects during the daytime and provide robust detection in conditions that would saturate conventional sensors. While currently more expensive and lower resolution than STARVIS sensors, they represent the future high-end tier of distributed SSA networks.32
7. Operational Implications and Future Outlook¶
The aggregation of data from thousands of "Small Glass" sensors creates a capability that is greater than the sum of its parts.
7.1 Resilience and Orbit Determination¶
A single radar site is a single point of failure. A network of 1,000 cameras is highly resilient. If 50% of the network is clouded out, the remaining 500 sensors can still maintain the catalog. Furthermore, the geometric diversity of a global network allows for simultaneous observation of an object from multiple angles. This "triangulation" capability allows for the rapid determination of an orbit from a single pass (Initial Orbit Determination), whereas a single site usually requires multiple passes to converge on a solution.33
7.2 The Challenge of Mega-Constellations¶
Ironically, the primary targets of these systems—mega-constellations like Starlink—are also their biggest noise source. With thousands of satellites in orbit, wide-field images are increasingly "polluted" by authorized satellite streaks. This complicates the detection of uncatalogued debris. Future algorithms must incorporate "predictive masking," using the known TLEs of Starlink satellites to mask out their streaks in software, allowing the system to hunt for the faint, uncatalogued debris hiding in the noise.1
7.3 Policy and STM¶
The availability of high-quality, independent tracking data from non-state actors (universities, NGOs, companies) changes the geopolitical landscape of space. It reduces reliance on the military-controlled US Space Surveillance Network (SSN) or Russian systems.3 This transparency is vital for Space Traffic Management (STM). If a commercial operator can independently verify a collision risk using open-source data from a network like Luciole, they can maneuver with greater confidence. This "democratization" of data is essential for the sustainable governance of the space commons.34
8. Conclusion¶
The adaptation of commercial security camera sensors for Space Situational Awareness represents a triumph of ingenuity and distributed scaling. By leveraging the physical sensitivity of Sony STARVIS sensors and the computational power of modern edge devices, projects like the Global Meteor Network and Project Luciole have demonstrated that high-cadence, wide-field surveillance of LEO is possible at a cost orders of magnitude lower than traditional aerospace solutions.
These systems do not replace high-power radars; they cannot track 2cm debris, nor can they see through clouds. However, they fill a critical gap: the ability to maintain persistent, high-update-rate custody of the growing population of active satellites and decimeter-class debris. They provide the "eyes" that allow for characterization (light curves) and maneuver detection that radars often miss. As event-based sensors mature and deep learning algorithms become more efficient, these distributed optical "fences" will become the ubiquitous CCTV network of Earth's orbit, ensuring that the orbital environment remains transparent and safe for future generations.
9. Appendix: Technical Specifications & Comparisons¶
| Feature | Sony IMX291 (STARVIS) | Sony IMX307 (STARVIS) | Sony IMX462 (STARVIS 2) | Sony IMX455 (Scientific) | sCMOS (Typical Scientific) |
| --- | --- | --- | --- | --- | --- |
| Optical Format | 1/2.8" | 1/2.8" | 1/2.8" | 35mm (Full Frame) | 1.2" - 2" |
| Resolution | 2.1 MP (1936x1096) | 2.1 MP (1920x1080) | 2.1 MP (1936x1096) | 61 MP (9576x6388) | 4-10 MP |
| Pixel Size | 2.9 $\mu$m | 2.9 $\mu$m | 2.9 $\mu$m | 3.76 $\mu$m | 6.5 - 11 $\mu$m |
| Peak QE | ~80% (Visible) | ~80% (Visible) | >80% (High NIR sensitivity) | >85% | 80-95% |
| Read Noise | ~1.0 e- (HCG mode) | ~1.0 e- (HCG mode) | 0.5 - 1.0 e- (HCG) | 1.5 - 3.0 e- | 1.0 - 1.5 e- |
| Shutter Type | Rolling | Rolling | Rolling | Rolling | Global / Rolling |
| Frame Rate | up to 120 fps | up to 60 fps | up to 120 fps | ~3-10 fps (full res) | 50-100 fps |
| Cost (Sensor) | <$20 | <$15 | <$25 | ~$3,000+ (Camera) | >$10,000 |
| SSA Use Case | Meteor/Sat Tracking (GMN) | Cost-effective GMN nodes | High NIR / Debris | Wide-field Surveys | High-precision Science |

Table 2: Comparative analysis of COTS security sensors vs. scientific sensors. Data derived from ref. 11.
Works cited¶
1. Centre for the Protection of the Dark and Quiet Sky from Satellite Constellation Interference institutional meet, accessed on December 12, 2025, https://astronomy2024.org/wp-content/uploads/2024/07/Abstracts-book_July30.pdf
2. Technology Demonstration of Space Situational Awareness (SSA ..., accessed on December 12, 2025, https://www.mdpi.com/2072-4292/16/5/749
3. Tracking and Cataloging Orbital Debris - National Academies of Sciences, Engineering, and Medicine, accessed on December 12, 2025, https://www.nationalacademies.org/read/4765/chapter/5
4. Passive optical detection of submillimeter and millimeter size space debris in low Earth orbit - ResearchGate, accessed on December 12, 2025, https://www.researchgate.net/publication/266148561_Passive_optical_detection_of_submillimeter_and_millimeter_size_space_debris_in_low_Earth_orbit
5. Detecting, Tracking and Imaging Space Debris, accessed on December 12, 2025, https://www.esa.int/esapub/bulletin/bullet109/chapter16_bul109.pdf
6. Analysis of detection limits in event-based cameras for space situational awareness, Vicente Westerhout Aliste, Pontificia Univers - AMOS Conference, accessed on December 12, 2025, https://amostech.com/TechnicalPapers/2023/Poster/Westerhout.pdf
7. CMOS Image Sensors in Surveillance System Applications - PMC - NIH, accessed on December 12, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC7827463/
8. Scientific CMOS (sCMOS) Cameras: The Basics | Teledyne Vision Solutions, accessed on December 12, 2025, https://www.teledynevisionsolutions.com/learn/learning-center/imaging-fundamentals/scientific-cmos-scmos-cameras-the-basics/
9. Sensitivity and Noise of CCD, EMCCD and sCMOS Sensors - Andor - Oxford Instruments, accessed on December 12, 2025, https://andor.oxinst.com/learning/view/article/sensitivity-and-noise-of-ccd-emccd-and-scmos-sensors
10. IROAD FX2 Pro: The Ideal Dashcam for Everyday Drivers, accessed on December 12, 2025, https://iroad.kr/iroad-fx2-pro-the-ideal-dashcam-for-everyday-drivers/
11. IR-CUT 2MPx IMX290 Ultra Low Light camera module for Raspberry Pi - ArduCam B0424, accessed on December 12, 2025, https://botland.store/raspberry-pi-cameras/23584-ir-cut-2mpx-imx290-ultra-low-light-camera-module-for-raspberry-pi-arducam-b0424.html
12. Svbony SV305M Pro Monochrome Planetary Camera USB3.0 - High Sensitivity Telescope Camera - Astronomy Store, accessed on December 12, 2025, https://astronomy.store/en/products/svbony-sv305m-pro-planetary-monochrome-camera-1
13. Comparing the IMX455 (Industry-Grade) and KAI-11002 35mm Format Monochrome Sensors - Blog - Baader Planetarium, accessed on December 12, 2025, https://www.baader-planetarium.com/en/blog/comparing-the-imx455-industry-grade-and-kai-11002-35mm-format-monochrome-sensors/
14. C9) QE Curves for CMOS Imagers - Scientific Imaging, Inc., accessed on December 12, 2025, https://scientificimaging.com/knowledge-base/qe-curves-for-cmos-imagers/
15. 2MP/5MP Outdoor Network Camera Smart Video Analytics - Unifore Security, accessed on December 12, 2025, https://www.unifore.net/item/2mp-5mp-outdoor-network-camera-smart-video-analytics.html
16. MStar H.265+ Smart HD Network Cameras MSC313E MSC316DM - Security Camera, accessed on December 12, 2025, https://www.unifore.net/product-highlights/mstar-h-265-smart-hd-network-cameras-msc313e-msc316dm.html
17. Raspberry-Pi - after hours coding - WordPress.com, accessed on December 12, 2025, https://afterhourscoding.wordpress.com/category/raspberry-pi/
18. High-precision photometry with a scientific CMOS camera: I lab testing of the Marana camera | RAS Techniques and Instruments | Oxford Academic, accessed on December 12, 2025, https://academic.oup.com/rasti/article/doi/10.1093/rasti/rzaf049/8297137
19. ZWO ASI290mm USB3.0 Monochrome CMOS Camera with AutoGuider Port, accessed on December 12, 2025, https://www.widescreen-centre.co.uk/zwo-asi290mm-usb30-monochrome-cmos-camera-with-autoguider-port.html
20. A Comparison of EMCCD vs sCMOS Cameras - Andor - Oxford Instruments, accessed on December 12, 2025, https://andor.oxinst.com/learning/view/article/comparing-scmos
21. Parametric modeling and experimental measurement of rolling shutter characteristics for optical camera communication using undersampled modulation, accessed on December 12, 2025, https://opg.optica.org/ao/upcoming_pdf.cfm?id=468009
22. Global Meteor Network – Methodology and first results | Monthly Notices of the Royal Astronomical Society | Oxford Academic, accessed on December 12, 2025, https://academic.oup.com/mnras/article/506/4/5046/6347233
23. Project Luciole: A Wide-Field, High-Cadence Uncued Optical System For Comprehensive Tracking Of - AMOS Conference, accessed on December 12, 2025, https://amostech.com/TechnicalPapers/2024/Poster/Vida.pdf
24. Stratospheric Night Sky Imaging Payload for Space Situational ... - NIH, accessed on December 12, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC10384062/
25. TLRS-3 RETURN TO OPERATIONS - International Laser Ranging Service, accessed on December 12, 2025, https://ilrs.gsfc.nasa.gov/lw15/docs/papers/TLRS-3%20Return%20to%20Operations.pdf
26. The W5RRR Shack - Johnson Space Center Amateur Radio Club, accessed on December 12, 2025, https://www.w5rrr.org/the-w5rrr-shack/
27. Detecting streaks in smart telescopes images with Deep Learning - arXiv, accessed on December 12, 2025, https://arxiv.org/html/2510.17540v1
28. Detecting straight lines · DSSG-2022 Satellite Streaks, accessed on December 12, 2025, https://uwescience.github.io/DSSG2022-Satellite-Streaks/detection/
29. Automatic reacquisition of satellite positions by detecting their expected streaks in astronomical images - ResearchGate, accessed on December 12, 2025, https://www.researchgate.net/publication/228961396_Automatic_reacquisition_of_satellite_positions_by_detecting_their_expected_streaks_in_astronomical_images
30. Resource-aware Detection of Satellites Streaks in Deep Sky Images Streams - ERCIM News, accessed on December 12, 2025, https://ercim-news.ercim.eu/en140/special/resource-aware-detection-of-satellites-streaks-in-deep-sky-images-streams
31. Simultaneous radar and video meteors—II: Photometry and ionisation - ResearchGate, accessed on December 12, 2025, https://www.researchgate.net/publication/256830341_Simultaneous_radar_and_video_meteors-II_Photometry_and_ionisation
32. Event-Based Object Detection and Tracking for Space Situational Awareness - IEEE Xplore, accessed on December 12, 2025, https://ieeexplore.ieee.org/iel7/7361/9263090/09142352.pdf
33. A Multi-station Meteor Monitoring (M3) System. II. system upgrade and a pathfinder network, accessed on December 12, 2025, https://arxiv.org/html/2410.08103v1
34. AMOS Conference 2024 PROGRAM, accessed on December 12, 2025, https://amostech.com/wp-content/uploads/2024/09/AMOS_2024_Program.pdf