Gravitational Lens Time Delays¶
Tier 1 science — genuine load relief for professional telescopes. This is one of the clearest cases where a distributed amateur network does not supplement professional work — it replaces a specific, well-defined observing task that professional telescopes currently cannot afford to do at the required scale.
The Actual Science¶
When a massive foreground galaxy (or galaxy cluster) lies between Earth and a distant quasar, gravity bends the quasar light into multiple images. You see two, four, or sometimes more copies of the same quasar arranged around the lens. This is a gravitational lens.
The quasar is not static — it varies in brightness over days to months, driven by accretion disk fluctuations. Each image of the quasar travels a different path through curved spacetime to reach us. One path is longer. This means when the quasar flickers, that flicker arrives at image A before it arrives at image B. The time difference between those arrivals is the time delay. The time delay encodes the Hubble constant.
The Connection to H0¶
The time delay between images depends on:

1. The geometry of the lens — how the mass is distributed in the foreground galaxy
2. The distances involved — the angular diameter distances between observer, lens, and source
3. The Hubble constant — which sets the absolute scale of the universe
Given a good lens model (from HST imaging and stellar kinematics of the lens galaxy), measuring the time delay gives you a nearly direct measurement of H0, with a different systematic error budget than any other method. This is not a cross-check; it is an independent route.
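Concretely, the route runs through the standard time-delay cosmography relation (textbook form, not specific to any one analysis):

```latex
\Delta t_{AB} = \frac{D_{\Delta t}}{c}\,\Delta\phi_{AB},
\qquad
D_{\Delta t} \equiv (1+z_{\mathrm{l}})\,\frac{D_{\mathrm{l}}\,D_{\mathrm{s}}}{D_{\mathrm{ls}}}
\;\propto\; \frac{1}{H_0}
```

where $\Delta\phi_{AB}$ is the Fermat potential difference between images A and B (fixed by the lens model) and $D_{\mathrm{l}}$, $D_{\mathrm{s}}$, $D_{\mathrm{ls}}$ are the angular diameter distances to the lens, to the source, and between them. Measuring $\Delta t_{AB}$ with the lens model supplying $\Delta\phi_{AB}$ yields the time-delay distance $D_{\Delta t}$, and hence $H_0$.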
Why This Matters Right Now¶
There is a genuine crisis in cosmology. The Hubble constant measured from the early universe (Planck CMB: 67.4 ± 0.5 km/s/Mpc) [Source: Planck Collaboration (2020), A&A 641, A6] disagrees with measurements from the late universe (Type Ia supernova distance ladder: 73.0 ± 1.0 km/s/Mpc) [Source: Riess et al. (2022), ApJL 934, L7]. This is a ~5 sigma tension that has survived a decade of scrutiny. If it is real, it requires new physics beyond the standard cosmological model.
Time-delay cosmography sits between these two approaches. It is geometrically clean, physically well-understood, and systematically independent. More systems measured with longer baselines = tighter H0 constraints.
Reference: Wong et al. 2020 (H0LiCOW XIII, MNRAS 498, 1420) combined six lensed quasar systems to get H0 = 73.3 (+1.7/-1.8) km/s/Mpc, consistent with the late-universe tension. Birrer et al. 2020 (TDCOSMO IV, A&A 643, A165) introduced a hierarchical analysis of lens mass profiles and got H0 = 74.5 (+5.6/-6.1) km/s/Mpc. Both analyses need more systems and longer baselines.
What TDCOSMO / H0LiCOW Actually Do¶
H0LiCOW (H0 Lenses in COSMOGRAIL's Wellspring) was the original programme combining time-delay measurements from COSMOGRAIL with HST lens modelling and Keck/VLT stellar kinematics. Six systems, analysed by different teams, combined for the Wong et al. 2020 result.
TDCOSMO (Time-Delay Cosmography) is the current collaboration, merging H0LiCOW, COSMOGRAIL, and the SHARP programme. It explicitly identified the largest remaining systematic: the mass-sheet degeneracy in lens models. The Birrer et al. 2020 reanalysis found that if lens galaxies have slightly more complex mass profiles than assumed, H0 shifts down toward the Planck value. This can only be broken with more systems and better external constraints.
COSMOGRAIL (COSmological MOnitoring of GRAvItational Lenses) is the photometric monitoring programme that actually measures the time delays. It uses 1–2m class telescopes at multiple sites. The bottleneck is not image quality — it is continuous, years-long monitoring. COSMOGRAIL has been running since 2004 and has published delays for around 20 systems. There are more than 200 known quad-lens systems and thousands of doubles. The monitoring capacity does not exist to cover them all.
This is the gap.
Why a Single Professional Telescope Cannot Do This¶
The time delays for typical lensed quasars range from a few days to a few hundred days. Measuring them to 3% precision (needed for a 1.5% H0 constraint from a single system) requires:
- Monitoring the target every 1–3 days for at least one full season, ideally 3–5 years
- Photometric precision of 2–5 mmag per epoch
- A light curve long enough to sample multiple quasar variability events (not just one flicker)
A 2m telescope getting even generous allocations might have 30–60 nights per year on a given target. That is a sparse, aliased sampling. COSMOGRAIL's best light curves come from telescopes that can dedicate observations every clear night across multiple sites.
Even the COSMOGRAIL programme, using dedicated smaller telescopes (the 1.2m Euler telescope at La Silla, the 2.2m MPG telescope, the 1.5m ART), cannot scale to cover hundreds of systems at once. Professional time allocation committees will not give years of nightly time on a 2m scope to one monitoring programme for a single object.
Does this take real load off 2m scopes? Yes, directly. COSMOGRAIL already uses 1–2m class telescopes for this work. An amateur network providing the photometric monitoring baseline for even a subset of systems reduces the time demand on those facilities — or frees them to observe systems currently not being monitored at all. The professional time is then reserved for the parts amateurs cannot do: HST imaging of the lens for mass modelling, and VLT/Keck spectroscopy for lens galaxy kinematics.
What a Distributed Network Contributes That Is Unique¶
Longitudinal Coverage¶
Quasar variability events — the "flickers" that produce measurable delays — happen on timescales of days. If a lens is observable only 6 hours per night from any single site, you miss roughly 75% of the curve. A Europe + Americas + Asia/Pacific chain ensures that every night, somewhere on Earth, the target is above the horizon and being observed.
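As a toy illustration of the longitude-chain argument, the sketch below assumes each site can follow a target for about six hours around local transit; the window length and the example longitudes are assumptions, not real network sites:

```python
# Toy estimate of nightly coverage from a longitude chain (illustrative only).
# Assumption: each site follows the target for ~6 h centered on local transit.

def coverage_hours(longitudes_deg, window_h=6.0):
    """Union (in hours) of per-site observing windows over a 24 h grid."""
    step = 0.1  # hour resolution of the grid
    n = int(24 / step)
    covered = [False] * n
    for lon in longitudes_deg:
        center = (-lon / 15.0) % 24.0  # UT of local transit, offset by longitude
        for i in range(n):
            t = i * step
            d = min((t - center) % 24.0, (center - t) % 24.0)  # wrap-around distance
            if d <= window_h / 2:
                covered[i] = True
    return sum(covered) * step

single = coverage_hours([0.0])                # one European site
chain = coverage_hours([0.0, -70.0, 150.0])   # Europe + Chile + Australia
print(round(single, 1), round(chain, 1))
```

A single site covers about a quarter of each 24-hour day; the three-site chain covers most of it, which is the whole point of the longitude spread.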
Seasonal Gap Bridging¶
Single-site monitoring has seasonal gaps when the target is behind the sun. With a distributed network, gaps can be reduced (for many targets, different sites have different sun-proximity seasons depending on RA). Even partial coverage during difficult seasons helps anchor the light curve model.
Scale: Many Systems Simultaneously¶
The critical contribution is not monitoring one system better. It is monitoring 20–50 systems simultaneously, which no professional programme can do. Time-delay cosmography improves as 1/sqrt(N systems). [Source: Liao et al. (2019), ApJ 886, L23 — statistical constraints on H0 as a function of N lens systems] TDCOSMO's current 40+ system goal requires a step-change in monitoring capacity. A distributed amateur network is that step change.
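The 1/sqrt(N) scaling can be made concrete in a few lines; the 5% per-system figure is a representative assumption, not a TDCOSMO number:

```python
# Inverse-variance combination of N independent, equally precise lens systems.
# The 5% per-system fractional error is an illustrative assumption.

def combined_fractional_error(per_system_error, n_systems):
    """Combined error of n identical, independent measurements."""
    return per_system_error / n_systems ** 0.5

for n in (6, 20, 50):
    print(n, round(100 * combined_fractional_error(0.05, n), 2), "%")
```

Going from the current ~6 systems to 50 roughly triples the precision, which is why monitoring capacity, not analysis technique, is the binding constraint.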
[NOVEL] Framing a distributed amateur photometric network as a step-change in TDCOSMO's monitoring capacity — simultaneously covering 20–50 lensed quasar systems that no professional programme can afford to monitor at the required nightly cadence — is original to OpenAstro. No existing citizen-science project has been proposed in this role.
Redundancy Against Weather¶
The time delay measurement is sensitive to gaps — a missed flicker event early in a season can bias the delay estimate. Redundancy across multiple sites at the same longitude means cloud cover at one site does not create a gap in the light curve.
Hardware Requirements¶
| Parameter | Minimum | Good | Notes |
|---|---|---|---|
| Aperture | 25 cm | 40–50 cm | Lensed quasars are typically V = 15–18; most targets need 35cm+ |
| Photometric precision | 5 mmag | 2 mmag | Need to detect quasar variations of 0.05–0.2 mag |
| Filter | R-band or V-band | R preferred | Quasars are blue; R reduces AGN variability noise and atmospheric dispersion |
| Cadence | Every 3 days | Every 1–2 days | Nyquist for the shortest delays (~7 days); every day for best constraints |
| FOV | 5 arcmin | 10 arcmin | Need comparison stars in field; lens separation is 1–5 arcsec |
| Seeing | <3 arcsec | <1.5 arcsec | Must resolve the multiple images from each other; pixel scale 0.5–1 arcsec/pixel |
| Timing precision | NTP (seconds) | NTP | Unlike occultations, timing precision is not the bottleneck here |
The Seeing Problem¶
This is the critical constraint. Lensed quasar images are separated by 1–10 arcsec, with most doubles at 1–3 arcsec. You must resolve the images to measure each one's flux separately. This requires:

- Seeing better than ~2 arcsec
- A pixel scale of 0.5–1 arcsec/pixel
- PSF-fitting photometry (not aperture photometry) — the images blend into each other
From sites with typical amateur seeing of 2–4 arcsec, blended aperture photometry will not work. The workaround is either: (a) selecting the handful of wide-separation lenses (separation >3 arcsec, such as RXJ0911+0551 at 3.1 arcsec or HE0435-1223 at 2.4 arcsec), or (b) implementing deblending via PSF decomposition at the pipeline level.
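A minimal sketch of option (b): if the image positions are fixed from external astrometry (e.g. HST), per-epoch deblending reduces to a linear least-squares fit of unit-flux PSFs. The Gaussian PSF and all numbers here are synthetic stand-ins, not a production pipeline:

```python
# Minimal PSF-deblending sketch with fixed image positions: solve a linear
# system for the two fluxes. Synthetic Gaussian PSF, made-up fluxes and noise.
import numpy as np

def gaussian_psf(shape, x0, y0, fwhm):
    sigma = fwhm / 2.355
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()  # unit total flux

shape = (33, 33)
fwhm_pix = 4.0  # ~2" seeing at 0.5"/pixel
# Two images separated by 4.8 pixels (~2.4" at 0.5"/pixel)
p1 = gaussian_psf(shape, 14.0, 16.0, fwhm_pix)
p2 = gaussian_psf(shape, 18.8, 16.0, fwhm_pix)
true_fluxes = np.array([1000.0, 420.0])
rng = np.random.default_rng(1)
image = true_fluxes[0] * p1 + true_fluxes[1] * p2 + rng.normal(0, 0.05, shape)

# Linear fit: image ≈ f1*p1 + f2*p2, solved for (f1, f2)
A = np.column_stack([p1.ravel(), p2.ravel()])
fluxes, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
print(fluxes.round(1))
```

The recovered fluxes match the inputs to well under a percent even though the two PSFs overlap heavily — which is exactly why fixed-position PSF fitting works where aperture photometry fails.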
This is a hard technical requirement. Not every amateur site qualifies. But those at good sites with careful PSF fitting can contribute real data.
Target Systems¶
Well-monitored systems where additional baseline directly contributes to published analyses:
| System | Sep (arcsec) | V mag | Delay (days) | Status |
|---|---|---|---|---|
| HE 0435-1223 | 2.4 | 17.0 | ~14 | COSMOGRAIL measured; more baseline useful |
| RX J1131-1231 | 3.4 | 15.5 | ~91 | Longest delay in this list; best-monitored |
| WFI 2033-4723 | 2.5 | 16.5 | ~36 | COSMOGRAIL measured 2019 |
| PG 1115+080 | 2.3 | 15.8 | ~12 | Classic system; multiple measurements |
| UM 673 | 2.2 | 16.7 | ~90 | Fewer measurements; useful gap |
The CASTLES catalogue (Harvard-Smithsonian CfA) maintains the full list. The Gaia GraL catalogue has ~350 new candidates from Gaia DR2 that have never been monitored.
The Pipeline Challenge¶
Photometric monitoring is straightforward in principle. The analysis is not.
To extract a time delay from a multi-year light curve:

1. PSF-fit each epoch to get fluxes for each image separately
2. Correct for microlensing — slow brightness changes in one image caused by individual stars in the lens galaxy moving across the quasar's emission region. This is not the time delay; it is a systematic
3. Apply a time-series analysis: PyCS (Python Curve Shifting, developed by COSMOGRAIL) is the standard. It implements regression difference, free-knot splines, and dispersion methods
4. Estimate uncertainties via Monte Carlo resampling of the light curve
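The core idea behind the time-series step can be illustrated with a toy dispersion-minimisation estimator. This is a sketch, not PyCS: the synthetic light curve, noise level, and trial grid are all assumptions, and the real analysis handles microlensing and error modelling that this ignores:

```python
# Toy time-delay estimation: shift curve B over a grid of trial delays and
# minimise the summed squared mismatch with curve A. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 400.0, 2.0)  # ~one season at 2-day cadence

def intrinsic(time):
    # Smooth synthetic quasar variability (sum of slow sine terms)
    return (0.3 * np.sin(2 * np.pi * time / 120.0)
            + 0.15 * np.sin(2 * np.pi * time / 47.0 + 1.0))

true_delay = 14.0  # days; image B trails image A
mag_a = intrinsic(t) + rng.normal(0, 0.01, t.size)
mag_b = intrinsic(t - true_delay) + rng.normal(0, 0.01, t.size)

def dispersion(delay):
    """Mean squared A-B mismatch after shifting B forward by `delay` days."""
    shifted = np.interp(t + delay, t, mag_b)  # B evaluated `delay` days later
    mask = t <= t[-1] - delay                 # drop extrapolated points
    return np.mean((mag_a[mask] - shifted[mask]) ** 2)

trials = np.arange(0.0, 40.0, 0.5)
best = trials[np.argmin([dispersion(d) for d in trials])]
print(best)
```

The estimator lands on the injected delay because the intrinsic variability is shared between the two curves while the noise is not; the real difficulty, which PyCS addresses, is that microlensing adds a slow signal to one curve that is *not* shared.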
PyCS is public and documented. The OpenAstro pipeline would collect photometry from distributed sites, run PSF decomposition, assemble multi-site light curves with systematic offsets removed, and feed into PyCS. This is an achievable pipeline.
Reference: Tewes et al. 2013 (A&A 556, A22) — PyCS paper, describes the analysis method in detail. Millon et al. 2020 (A&A 640, A105) — COSMOGRAIL XVII, most recent methodology paper. [Source: PyCS3 code: Millon et al. (2020), JOSS]
Publication Pathway¶
This is the clearest route to co-authored publications in this vault.
- COSMOGRAIL collaboration model: COSMOGRAIL explicitly recruits additional monitoring sites. They have published light curves combining data from multiple small telescopes. A formal arrangement where OpenAstro sites submit data to COSMOGRAIL's reduction pipeline would produce co-authorships on COSMOGRAIL papers.
- Independent monitoring: Build baselines on systems currently not being monitored (from the Gaia GraL catalogue). Publish light curves independently. These are publishable even without a time delay measurement — light curves alone are useful for the community.
- Time delay publication: 3–5 year baseline on a system with good variability → time delay measurement → paper with OpenAstro contributors as co-authors alongside TDCOSMO members.
Target journal: Astronomy & Astrophysics (European Astronomical Society, where COSMOGRAIL publishes). Also Monthly Notices of the Royal Astronomical Society.
Difficulty Assessment¶
High difficulty, high reward.
The technical bar is higher than most amateur science:

- PSF decomposition at 2 arcsec separation is non-trivial
- Seeing requirements filter out a significant fraction of potential sites
- The photometric precision required (2–5 mmag) demands careful calibration and comparison star selection
- The time baseline required (years) demands sustained network operation
This is not a casual project. It requires:

- A subset of well-sited nodes with consistent sub-2 arcsec seeing
- A careful PSF-fitting pipeline (not just aperture photometry)
- A long-term commitment — results come after years, not months
- Integration with COSMOGRAIL or equivalent for analysis
The reward: direct, measurable contribution to the H0 tension problem — one of the most important open questions in cosmology. The data produced is irreplaceable and cannot be replicated by any single telescope.
Direct Load Relief on 2m Scopes¶
Yes, directly and substantially.
COSMOGRAIL uses the 1.2m Euler telescope at La Silla as its primary monitoring instrument. Every lensed quasar system that can be monitored by the OpenAstro network is one less system competing for Euler time — or one more system that can be added to the monitoring list without consuming additional Euler allocations.
The professional work that cannot be replaced: HST imaging of the lens for mass modelling, and 8–10m spectroscopy of lens galaxy kinematics (for breaking the mass-sheet degeneracy). These are the bottlenecks that actually need big telescopes. The photometric monitoring that constitutes the bulk of the observing programme? That can be transferred to a distributed network.
Connection to OpenAstro Pipeline¶
- Target ingestion: Pull current COSMOGRAIL target list + GAIA GraL candidates; flag by RA accessibility and estimated separation [Source: Ducourant et al. (2018), A&A 618, A56 — Gaia GraL lens catalogue]
- Site qualification: Filter network nodes by median seeing and pixel scale; only qualified nodes assigned to this programme
- Nightly scheduling: Assign to best-available site per longitude band; record which sites observed each night
- Data reduction: PSF decomposition via automated pipeline (based on COSMOGRAIL's MCS deconvolution or STARRED code) [Source: Cantale et al. (2016), A&A 589, A81 — COSMOGRAIL MCS deconvolution pipeline]
- Light curve assembly: Combine across sites with flux zero-point matching
- Analysis: Feed into PyCS time-delay extraction after sufficient baseline [Source: Tewes et al. (2013), A&A 556, A22 — PyCS]
- Reporting: Flag systems reaching publishable baseline; initiate collaboration with TDCOSMO
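The light-curve assembly step can be sketched as follows; the site names, zero-point offsets, and noise levels are invented for illustration:

```python
# Sketch of flux zero-point matching: estimate each site's magnitude offset
# against a reference site via interpolated comparison, then subtract it.
# Site names, offsets, and noise levels are made up for illustration.
import numpy as np

rng = np.random.default_rng(2)
ref_t = np.arange(0.0, 100.0, 1.0)
signal = 17.0 + 0.1 * np.sin(ref_t / 10.0)  # shared underlying light curve

curves = {
    "site_a": (ref_t, signal + rng.normal(0, 0.005, ref_t.size)),          # reference
    "site_b": (ref_t + 0.3, signal + 0.042 + rng.normal(0, 0.005, ref_t.size)),
    "site_c": (ref_t + 0.6, signal - 0.018 + rng.normal(0, 0.005, ref_t.size)),
}

ref_times, ref_mags = curves["site_a"]
merged = [(ref_times, ref_mags)]
offsets = {}
for name, (tt, mm) in curves.items():
    if name == "site_a":
        continue
    # Robust zero-point: median offset against the reference, interpolated to tt
    offsets[name] = np.median(mm - np.interp(tt, ref_times, ref_mags))
    merged.append((tt, mm - offsets[name]))
print({k: round(v, 3) for k, v in offsets.items()})
```

A median offset is deliberately crude but robust; a production pipeline would fit offsets jointly across all sites and epochs rather than pairwise against one reference.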
[NOVEL] The node-qualification filter — screening OpenAstro network members by median seeing and pixel scale before assignment to gravitational lens monitoring — and the per-longitude-band scheduling for longitudinal coverage are original operational designs not present in any existing amateur network or COSMOGRAIL protocol.
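A minimal sketch of the node-qualification filter and per-band assignment; the thresholds, longitude bands, and site records are illustrative assumptions, not real network data:

```python
# Sketch of node qualification (seeing + pixel scale) and per-longitude-band
# assignment. Thresholds and site records are illustrative assumptions.

SEEING_MAX_ARCSEC = 2.0   # consistent with the seeing requirement above
PIXEL_SCALE_MAX = 1.0     # arcsec/pixel

sites = [
    {"name": "node_1", "lon": 10.0,  "median_seeing": 1.6, "pixel_scale": 0.8},
    {"name": "node_2", "lon": -72.0, "median_seeing": 3.1, "pixel_scale": 0.6},
    {"name": "node_3", "lon": -68.0, "median_seeing": 1.4, "pixel_scale": 0.9},
    {"name": "node_4", "lon": 149.0, "median_seeing": 1.9, "pixel_scale": 1.2},
]

def qualified(site):
    return (site["median_seeing"] <= SEEING_MAX_ARCSEC
            and site["pixel_scale"] <= PIXEL_SCALE_MAX)

def band(lon, width=120):
    """Longitude band index: roughly 0 = Americas, 1 = Europe/Africa, 2 = Asia/Pacific."""
    return int((lon + 180) // width)

# Best qualified site per longitude band (lowest median seeing wins)
best = {}
for s in filter(qualified, sites):
    b = band(s["lon"])
    if b not in best or s["median_seeing"] < best[b]["median_seeing"]:
        best[b] = s
print({b: s["name"] for b, s in sorted(best.items())})
```

Note that two of the four example sites fail qualification (one on seeing, one on pixel scale), leaving a band with no assigned node — exactly the situation the scheduler must surface rather than silently paper over.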