Problems that we will face
People don't always have GoTo mounts, and they can't keep scopes out all night
Focusing specifically on exoplanet transits and variable star monitoring makes the project more feasible than high-resolution imaging, but it introduces a strict requirement for photometric precision.
While amateurs can detect "Hot Jupiters" (which cause a ~1% dip in brightness), a distributed array of small telescopes faces unique hurdles that can prevent scientifically valid results.
1. Scintillation and the Aperture Limit
The primary noise source for ground-based photometry of bright stars is atmospheric scintillation: the rapid flickering caused by high-altitude turbulence.
- Aperture Scaling: Scintillation noise decreases as aperture size increases ($\sigma \propto D^{-2/3}$). Amateur telescopes (typically 10–25 cm) suffer significantly more from this "noise floor" than professional 1-meter+ telescopes (Source: Osborn et al., "Scintillation-limited photometry with the 20-cm NGTS telescopes").
- The "Small Scope" Trap: In a distributed array, you are adding the noise of many small apertures. While "stacking" data from multiple sites can mathematically reduce random noise, it does not easily remove the correlated atmospheric noise shared by sites in the same geographic area.
2. Equipment Heterogeneity (The "Frankenstein" Problem)
In a professional array like TESS or NGTS, every camera and lens is identical. In an amateur distributed network, you face massive hardware variance.
- Spectral Response: Even if everyone uses "CMOS sensors," a Sony IMX455 from one manufacturer may have a slightly different Quantum Efficiency (QE) curve than one from another. This makes "ensemble photometry" (combining data from different sites) nearly impossible to normalize to the millimagnitude accuracy required for exoplanets.
- Filter Mismatch: "V-band" filters from different brands often have slightly different transmission windows. If one telescope captures more infrared light than another, their measurements of a red-dwarf transit will not align.
3. The Temporal Synchronization Nightmare
Exoplanet transits require precise timing of "ingress" and "egress" (the start and end of the dip).
- Clock Drift: In a distributed system, every node must be synchronized to a high-precision time standard (such as GPS-PPS). Standard PC clocks can drift by seconds per day, which smears the transit timing and makes the resulting "combined" light curve useless for calculating a planet's orbital semi-major axis (Source: AAVSO Exoplanet Observing Manual).
- Data Cadence: Variable stars require long-term monitoring, but transits require high-frequency sampling. Managing the time-series data for hundreds of telescopes, each with different start/stop times and frame rates, creates a massive "Big Data" cleaning problem before analysis can even begin.
4. Local Systematic Errors
Each amateur site has its own "local" problems that are hard to automate away:
- Differential Extinction: As a star moves across the sky, it passes through more atmosphere (airmass). If one observer has a tree or a distant streetlight in the field of view, it introduces "red noise" that mimics a transit dip.
- Cloud Contamination: Thin cirrus clouds are the "transit killer." They can cause a 1–2% drop in brightness that looks exactly like a planet. Without a professional-grade cloud sensor at every site, your array will produce thousands of false positives.
Comparison of Success Factors
| Feature | Amateur Distributed Array | Scientific Requirement (e.g., Kepler/TESS) |
|---|---|---|
| Precision | 5–10 mmag (millimagnitudes) | 0.01 – 1 mmag |
| Timing | NTP Sync (~50ms error) | Atomic/GPS Sync (<1ms error) |
| Sky Quality | Suburban (High Noise) | Space or High-Altitude (Minimal Atmospheric Noise) |
| Consistency | High Variance (Gear/Filters) | Identical Hardware/Calibration |
Despite these challenges, amateur arrays can work for verification of known planets if the pipeline is extremely robust.
Would you like to explore how to build a backend in Go that handles the "normalization" of these heterogeneous datasets, or should we look at the specific CMOS calibration (Darks/Flats/Bias) needed for high-precision photometry?
This is a fascinating concept that's actually been attempted in various forms. Let me walk through the technical and practical challenges:
Optical Coherence Problems
Baseline synchronization is the fundamental killer. Professional interferometric arrays (like the VLT or CHARA) require knowing the position of each telescope to nanometer precision and synchronizing light paths to fractions of a wavelength. Amateur setups can't achieve this, so you can't do true interferometry—you're limited to stacking/mosaicing, which doesn't give you the resolution boost of a synthetic aperture.
Atmospheric distortion varies independently at each site. Professional arrays use adaptive optics and are often co-located. Distributed amateur scopes see completely different atmospheric turbulence, making it nearly impossible to correlate wavefront data.
Timing and Calibration
Precise timestamping is hard. You need microsecond or better synchronization to correlate observations, but consumer hardware and operating systems introduce unpredictable latency. GPS-based solutions help but aren't sufficient for coherent combination.
Photometric calibration differs wildly between setups. Each amateur scope has different flat-field errors, vignetting, sensor response curves, filter bandpasses, and optical aberrations. Normalizing these for scientific combination is extremely difficult.
Hardware Heterogeneity
Wildly varying equipment means different pixel scales, focal lengths, sensor quantum efficiencies, read noise characteristics, and optical quality. Combining data from a 6-inch refractor and a 12-inch Newtonian introduces systematic errors that are hard to model.
CMOS sensor variations include fixed pattern noise, amp glow, and non-linear response at different gain settings. Consumer astro-cameras don't come with detailed characterization data.
Practical/Logistical Issues
Weather correlation—if you need simultaneous observations, you need clear skies everywhere at once, which drastically reduces usable time.
Coordination overhead for pointing, exposure timing, and data collection across hundreds of participants is substantial.
Data volume and quality control—identifying and rejecting bad data from thousands of contributors requires sophisticated automated vetting.
What Actually Works Instead
The concept does work for certain applications that don't require coherent combination:
- Exoplanet transit timing (amateur networks successfully contribute here)
- Asteroid occultation timing (IOTA network)
- Supernova/variable star monitoring
- Wide-field survey coverage (different scopes covering different sky patches)
These succeed because they use temporal or spatial distribution rather than trying to synthesize a single larger aperture.
Would you like me to dig deeper into any particular aspect—say, what minimal requirements would be needed to make some version of this work?
Understanding Baseline Synchronization
Let me break this down step by step.
What interferometry actually does
When light from a star arrives at Earth, it's traveling as a wave. If you have two telescopes separated by some distance (the "baseline"), the light wave hits them at very slightly different times because it travels slightly different path lengths to reach each one.
Professional interferometers exploit this. They combine the light from both telescopes and look at how the waves interact—where they reinforce each other (constructive interference) and where they cancel out (destructive interference). This interference pattern contains information that lets you resolve details as if you had a single telescope as wide as the separation between your two scopes.
Why nanometer precision matters
Visible light has wavelengths around 400-700 nanometers. To make interference work, you need to know the difference in light path length to a fraction of that wavelength—so tens of nanometers or better. If your measurement is off by even a few hundred nanometers, you can't tell whether a bright spot in your interference pattern is from the star's actual structure or just from your position error.
This means you need to know:
- Exactly where each telescope is in three-dimensional space (to nanometers)
- Exactly how the light path through each telescope's optics behaves
- The precise timing of when each photon arrived
Why amateurs can't do this
An amateur telescope on a backyard mount moves around. Thermal expansion, wind vibration, the mount tracking slightly imperfectly—all of these create position uncertainties measured in millimeters or centimeters, which is millions of times larger than the precision required. Professional interferometers use laser metrology systems constantly measuring mirror positions, active delay lines that adjust path lengths in real time, and foundations designed to minimize vibration.
What you're left with instead
Without coherent interference, you can only do "incoherent" combination—essentially stacking images or making mosaics. This improves signal-to-noise (fainter objects become visible) and can cover more sky area, but your resolution is still limited by each individual telescope's aperture. A hundred 8-inch telescopes stacked together see fainter than one 8-inch telescope, but they don't see sharper than one 8-inch telescope.
Where Distributed Amateur Networks Actually Succeed
These applications work because they exploit what distributed networks are genuinely good at: being in many places at once, watching lots of sky simultaneously, and providing continuous time coverage.
Exoplanet Transit Timing
Amateur networks have detected transit timing variations that revealed additional planets in systems: the gravitational tug of an unseen planet shifts when transits occur.
Meteor Networks
Multiple cameras watching the sky can triangulate meteor trajectories by seeing the same streak from different angles. This lets you:
- Calculate the incoming orbit (where did this rock come from?)
- Predict where fragments might have landed for meteorite recovery
- Build statistics on meteor shower activity
Camera position only needs to be known to meters, not nanometers, because you're tracking something kilometers away across angular degrees, not resolving milliarcsecond structure.
Gamma-Ray Burst and Transient Follow-up
When satellites detect a gamma-ray burst, alerts go out and observers worldwide try to catch the optical afterglow. A distributed network means someone is always in nighttime with clear skies. Speed matters more than precision—just detecting and localizing the afterglow helps professionals point larger instruments.