Heterogeneous Image Co-addition: Mathematical Framework

Status: Specification — implementation reference for the stacking module
Relates to: [[SNR and Stacking Theory]], [[Flux Calibration Mathematics]], [[Heterogenous Stacking]]
Last derived: 2026-03-21


1. Problem Formalisation

1.1 Image Formation Model

Let the true sky brightness distribution be $S(\mathbf{x})$, where $\mathbf{x} = (x, y)$ denotes angular position on the sky in arcseconds relative to some reference point.

Telescope $k$ (with $k = 1, \ldots, K$) produces a discrete image $I_k[m, n]$ according to the following generative model:

$$I_k[m, n] = \left[ \left( S \ast P_k \right)(\mathbf{x}) \right]_{\mathbf{x} = \mathbf{x}_0^{(k)} + \delta_k (m, n)} \cdot g_k + n_k[m, n]$$

where:

  • $P_k(\mathbf{x})$ is the point spread function of telescope $k$, normalised so that $\int P_k(\mathbf{x}) \, d^2\mathbf{x} = 1$
  • $\ast$ denotes 2D spatial convolution
  • $\delta_k$ is the pixel scale of telescope $k$ in arcsec/pixel (the sampling interval)
  • $\mathbf{x}_0^{(k)}$ is the WCS-defined origin of telescope $k$'s pixel grid
  • $g_k$ is the gain-exposure product: $g_k = (\pi D_k^2 / 4) \cdot \eta_k \cdot t_k / G_k$, converting sky brightness to ADU, with aperture $D_k$, throughput $\eta_k$, exposure time $t_k$, and detector gain $G_k$ (e$^-$/ADU)
  • $n_k[m, n]$ is the noise realisation at pixel $(m, n)$

In compact notation, suppressing pixel indices:

$$I_k = g_k \cdot (S \ast P_k)\!\downarrow_{\delta_k} + n_k$$

where $\downarrow_{\delta_k}$ denotes sampling on the grid with spacing $\delta_k$.
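As a concrete illustration, this generative model can be simulated directly. The sketch below uses hypothetical values (a single 1000-count point source, Gaussian PSFs, read noise only; photon and scintillation noise are omitted, and the FFT convolution wraps at the edges):

```python
import numpy as np

rng = np.random.default_rng(42)

# "True sky" on a fine reference grid: a hypothetical single point source.
S = np.zeros((128, 128))
S[64, 64] = 1000.0

def gaussian_psf(fwhm_pix, size):
    """Unit-normalised Gaussian PSF, FWHM in pixels, centred at size//2."""
    sigma = fwhm_pix / (2 * np.sqrt(2 * np.log(2)))
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    p = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return p / p.sum()

def observe(S, fwhm_pix, delta_k, g_k, read_noise_adu):
    """One telescope's image: I_k = g_k * (S * P_k), sampled every
    delta_k reference pixels, plus Gaussian read noise."""
    P = gaussian_psf(fwhm_pix, S.shape[0])
    blurred = np.real(np.fft.ifft2(np.fft.fft2(S) *
                                   np.fft.fft2(np.fft.ifftshift(P))))
    sampled = blurred[::delta_k, ::delta_k]   # coarser native grid
    return g_k * sampled + rng.normal(0.0, read_noise_adu, sampled.shape)

I1 = observe(S, fwhm_pix=2.0, delta_k=1, g_k=1.5, read_noise_adu=3.0)
I2 = observe(S, fwhm_pix=4.0, delta_k=3, g_k=0.4, read_noise_adu=3.0)
print(I1.shape, I2.shape)  # (128, 128) (43, 43)
```

The two synthetic frames differ in PSF width, pixel scale, and gain-exposure product, exactly the heterogeneity the rest of this document addresses.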

1.2 Noise Model

The noise $n_k[m, n]$ is the sum of several components:

Photon (Poisson) noise from source + sky: $$\sigma_{\text{phot},k}^2[m, n] = G_k^{-1} \cdot I_k[m, n] + n_{\text{pix}} \cdot B_k \cdot t_k / G_k^2$$

where $B_k$ is the sky background rate in e$^-$ s$^{-1}$ pixel$^{-1}$ and $n_\text{pix}$ is the number of pixels in the measurement aperture.

Read noise (Gaussian): $$\sigma_{\text{read},k}^2 = R_{e,k}^2 / G_k^2$$

per pixel, where $R_{e,k}$ is the read noise in electrons (typically 2--10 e$^-$ for modern CMOS).

Scintillation noise (multiplicative, correlated across the frame): $$\sigma_{\text{scint},k}[m, n] = \epsilon_{\text{scint},k} \cdot I_k[m, n]$$

where $\epsilon_{\text{scint},k} = C_0 \cdot D_k^{-2/3}$ is the fractional scintillation index (see SNR and Stacking Theory, Section 2).

Flat-field residual noise (spatially correlated): $$\sigma_{\text{flat},k}[m, n] = f_k \cdot I_k[m, n]$$

where $f_k \sim 0.001$--$0.005$ is the fractional flat-fielding error (after correction). This introduces spatially correlated noise with a correlation scale set by the flat-field illumination pattern, typically $\sim$10--100 pixels.

Total per-pixel variance (assuming independence of noise sources): $$\text{Var}[n_k[m, n]] = \sigma_{\text{phot},k}^2 + \sigma_{\text{read},k}^2 + \sigma_{\text{scint},k}^2 + \sigma_{\text{flat},k}^2$$

For bright point sources (the primary OpenAstro science case), the dominant terms are photon noise and scintillation, as established in the SNR theory document.
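The per-pixel variance budget above can be evaluated directly. The sketch below takes the sky term per pixel ($n_\text{pix} = 1$) and uses illustrative numbers, not fleet measurements:

```python
def pixel_variance_adu2(I_adu, gain, sky_e_per_s, t_exp, read_noise_e,
                        eps_scint, flat_err):
    """Total per-pixel variance in ADU^2, assuming independent noise terms
    (Section 1.2). All numerical inputs are illustrative."""
    var_phot  = I_adu / gain + sky_e_per_s * t_exp / gain**2  # source + sky Poisson
    var_read  = (read_noise_e / gain) ** 2                    # Gaussian read noise
    var_scint = (eps_scint * I_adu) ** 2                      # multiplicative scintillation
    var_flat  = (flat_err * I_adu) ** 2                       # flat-field residual
    return var_phot + var_read + var_scint + var_flat

# Bright-pixel example: photon noise dominates, as stated in the text.
v = pixel_variance_adu2(I_adu=5000.0, gain=1.2, sky_e_per_s=2.0, t_exp=60.0,
                        read_noise_e=5.0, eps_scint=0.003, flat_err=0.002)
print(v)
```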

1.3 Definition of "Optimal" Co-addition

The co-addition problem has two distinct optimality criteria:

(a) Maximum SNR on a point source: The co-added image $C(\mathbf{x})$ should maximise the ratio of the peak response to a point source divided by the noise at that position. This is the matched-filter problem, and the optimal solution weights each image by its individual SNR contribution.

(b) Minimum effective PSF FWHM for extended source structure: The co-added image should preserve the highest angular resolution available in the input data. This is incompatible with (a) when images have different PSF widths -- a narrow-PSF image has lower SNR per pixel (for a given source brightness) than a wide-PSF image of the same source through a larger aperture.

These two goals are generally in tension. The framework below derives the optimal solution for each case and specifies when a hybrid approach (separate stacks per resolution class) is preferable.


2. PSF Homogenisation

2.1 The Standard Approach: Convolution to a Common PSF

Before co-addition, all images must share a common effective PSF $P_\text{target}(\mathbf{x})$. For image $k$ with PSF $P_k$, define the homogenisation kernel $K_k$ such that:

$$P_k \ast K_k = P_\text{target}$$

In Fourier space:

$$\tilde{K}_k(\mathbf{u}) = \frac{\tilde{P}_\text{target}(\mathbf{u})}{\tilde{P}_k(\mathbf{u})}$$

This division is ill-conditioned where $\tilde{P}_k(\mathbf{u}) \to 0$ (i.e., at spatial frequencies beyond the resolution limit of image $k$). In practice, the kernel is regularised:

$$\tilde{K}_k(\mathbf{u}) = \frac{\tilde{P}_\text{target}(\mathbf{u}) \cdot \tilde{P}_k^*(\mathbf{u})}{|\tilde{P}_k(\mathbf{u})|^2 + \alpha_k}$$

where $\alpha_k$ is a regularisation parameter (Wiener filter form) and $^*$ denotes complex conjugation. Setting $\alpha_k = 0$ recovers exact deconvolution; $\alpha_k > 0$ suppresses noise amplification at high spatial frequencies.
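The regularised kernel can be computed with a few FFTs. A minimal sketch, assuming both PSFs are sampled on the same grid and centred at the array centre (`wiener_kernel` and `gauss2d` are illustrative names):

```python
import numpy as np

def wiener_kernel(P_k, P_target, alpha=1e-3):
    """Homogenisation kernel K_k with K_k * P_k ~ P_target,
    Wiener-regularised as in Section 2.1."""
    Pk_f = np.fft.fft2(np.fft.ifftshift(P_k))
    Pt_f = np.fft.fft2(np.fft.ifftshift(P_target))
    K_f = Pt_f * np.conj(Pk_f) / (np.abs(Pk_f) ** 2 + alpha)
    return np.fft.fftshift(np.real(np.fft.ifft2(K_f)))

def gauss2d(sigma, size=64):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()

# Narrow Gaussian PSF -> broad target; verify P_k * K ~ P_target.
P_k, P_t = gauss2d(1.5), gauss2d(2.5)
K = wiener_kernel(P_k, P_t, alpha=1e-6)
recon = np.fft.fftshift(np.real(np.fft.ifft2(
    np.fft.fft2(np.fft.ifftshift(P_k)) * np.fft.fft2(np.fft.ifftshift(K)))))
print(np.abs(recon - P_t).max())  # small residual from regularisation
```

For Gaussian-to-Gaussian homogenisation the kernel is itself nearly Gaussian; the regularisation only matters near frequencies where $\tilde{P}_k$ is small.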

The standard choice for $P_\text{target}$ is:

$$\text{FWHM}(P_\text{target}) = \max_k \text{FWHM}(P_k) + \Delta$$

where $\Delta$ is a margin (typically 10--20% of the maximum FWHM) ensuring the homogenisation kernel $K_k$ is well-behaved for all $k$. The cost is resolution loss: all images are degraded to the worst seeing.

2.2 Optimal Target PSF via Fisher Information

The Fisher information about the position of a point source in an image with PSF $P$ and per-pixel noise variance $\sigma^2$ is:

$$\mathcal{I}_\text{pos}(P) = \frac{F^2}{\sigma^2} \int |\nabla P(\mathbf{x})|^2 \, d^2\mathbf{x}$$

where $F$ is the total source flux. For a Gaussian PSF with width parameter $\sigma_P$ (FWHM $= 2\sqrt{2 \ln 2} \, \sigma_P$):

$$\int |\nabla P|^2 \, d^2\mathbf{x} = \frac{1}{4\pi \sigma_P^4}$$

so $\mathcal{I}_\text{pos} \propto 1/\sigma_P^4$ (narrower PSF = much more positional information).

After homogenisation to $P_\text{target}$, image $k$ contributes Fisher information:

$$\mathcal{I}_k = \frac{F_k^2}{\sigma_k^2} \int |\nabla P_\text{target}(\mathbf{x})|^2 \, d^2\mathbf{x}$$

The total Fisher information in the co-add is:

$$\mathcal{I}_\text{total} = \left(\sum_k \frac{F_k^2}{\sigma_k^2}\right) \int |\nabla P_\text{target}|^2 \, d^2\mathbf{x}$$

To maximise $\mathcal{I}_\text{total}$ over the choice of $P_\text{target}$, we want $P_\text{target}$ as narrow as possible. But $P_\text{target}$ must satisfy the realisability constraint: $\tilde{P}_\text{target}(\mathbf{u}) = 0$ wherever every $\tilde{P}_k(\mathbf{u}) = 0$, i.e., $P_\text{target}$ cannot contain spatial frequency information that no input image possesses.

In practice, define the maximum common bandwidth:

$$u_\text{max} = \min_k u_{\text{cutoff},k}$$

where $u_{\text{cutoff},k}$ is the effective spatial frequency cutoff of image $k$ (set by PSF, pixel scale, and noise). The Fisher-optimal $P_\text{target}$ is then the narrowest PSF whose Fourier transform is supported only on $|\mathbf{u}| \leq u_\text{max}$.

For Gaussian PSFs, $u_{\text{cutoff},k} \approx 1 / (\pi \sigma_{P_k})$ at the $1/e$ level, giving:

$$\sigma_{P_\text{target}}^\text{(Fisher)} = \max_k \sigma_{P_k}$$

This recovers the standard prescription: convolve to the broadest PSF. However, if the weight $F_k^2 / \sigma_k^2$ of the worst-seeing image is small compared to the total, excluding that image and adopting a narrower $P_\text{target}$ increases total Fisher information. This motivates the multi-stack approach below.

2.3 Multi-Stack Approach: Resolution Classes

Partition the $K$ images into $L$ resolution classes $\mathcal{R}_1, \ldots, \mathcal{R}_L$ with class boundaries at FWHM thresholds $\theta_1 < \theta_2 < \cdots < \theta_L$. Each class $\ell$ is homogenised to its own target PSF $P_{\text{target},\ell}$ with FWHM $= \theta_\ell + \Delta$.

The multi-stack approach is preferable to universal homogenisation when the information loss from degrading the best images outweighs the SNR gain from including the worst images. Quantitatively, define the SNR-weighted resolution loss:

$$\mathcal{L}_\text{universal} = \frac{\sum_k w_k \cdot \theta_\text{worst}^2}{\sum_k w_k} \quad \text{vs.} \quad \mathcal{L}_\text{multi} = \frac{\sum_\ell \left(\sum_{k \in \mathcal{R}_\ell} w_k\right) \theta_\ell^2}{\sum_k w_k}$$

where $w_k = F_k^2 / \sigma_k^2$ is the SNR weight. Multi-stacking is better when $\mathcal{L}_\text{multi} < \mathcal{L}_\text{universal}$.

Decision criterion: Split into two stacks if the SNR contribution of images with FWHM $> \theta_\text{split}$ is less than half the total. Formally, split at $\theta_\text{split}$ if:

$$\frac{\sum_{k : \text{FWHM}_k > \theta_\text{split}} w_k}{\sum_k w_k} < 0.5$$
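The criterion is a one-liner in code. A sketch with hypothetical weights (here the sharp images carry twice the SNR weight of the soft ones):

```python
import numpy as np

def should_split(fwhm_arcsec, weights, theta_split):
    """Section 2.3 criterion: split the stack at theta_split if the images
    with FWHM > theta_split carry less than half the total SNR weight."""
    fwhm = np.asarray(fwhm_arcsec, float)
    w = np.asarray(weights, float)
    return bool(w[fwhm > theta_split].sum() / w.sum() < 0.5)

# Hypothetical night: five sharp images (weight 2) and five soft ones
# (weight 1); weights stand in for w_k = F_k^2 / sigma_k^2.
fwhm = [1.8] * 5 + [3.5] * 5
w = [2.0] * 5 + [1.0] * 5
print(should_split(fwhm, w, theta_split=2.5))  # True: soft images carry 1/3
```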

2.4 OpenAstro Numerical Example

For the OpenAstro fleet (PSF range 1.5"--4", pixel scale 0.5"--3"/px):

Universal homogenisation to 4": All images convolved to FWHM = 4.4" (including 10% margin). A source at 1.5" seeing is degraded by a factor $(4.4/1.5)^2 = 8.6 \times$ in positional Fisher information.

Two-stack approach:

  • Stack A: images with FWHM $\leq$ 2.5", homogenised to 2.75". Typical membership: 8"/10"-class telescopes with good autoguiding at dark sites.
  • Stack B: all images, homogenised to 4.4". Includes the portable 80mm scopes with 3--4" PSFs.

Resolution loss comparison for a typical night with 5 images at 1.8" and 5 images at 3.5", all with equal SNR weights:

  • Universal: effective resolution = 4.4"
  • Two-stack: Stack A = 2.75" (5 images), Stack B = 4.4" (10 images)

Stack A preserves $\sim 2.6\times$ more positional information per image than the universal stack. For science requiring both detection depth (use Stack B) and morphological analysis (use Stack A), the two-stack approach is strictly superior.


3. Resampling to a Common Grid

3.1 The Resampling Problem

After PSF homogenisation, each image $k$ lives on its native pixel grid with spacing $\delta_k$. Before co-addition, all images must be resampled onto a common output grid with pixel scale $\delta_\text{out}$.

The choice of $\delta_\text{out}$ is governed by the Nyquist criterion: to avoid aliasing of the homogenised PSF $P_\text{target}$:

$$\delta_\text{out} \leq \frac{\text{FWHM}(P_\text{target})}{2}$$

For OpenAstro with $P_\text{target}$ FWHM = 2.75": $\delta_\text{out} \leq 1.375$"/px. A practical choice is $\delta_\text{out} = 1.0$"/px (comfortable Nyquist margin).

3.2 Resampling Kernel Derivation

The resampling operation maps the native-grid image $I_k[\mathbf{m}]$ to the output-grid image $I_k^\text{res}[\mathbf{p}]$:

$$I_k^\text{res}[\mathbf{p}] = \sum_{\mathbf{m}} I_k[\mathbf{m}] \cdot R\!\left(\frac{\mathbf{x}(\mathbf{p}) - \mathbf{x}_k(\mathbf{m})}{\delta_k}\right)$$

where $R(\mathbf{u})$ is the resampling (interpolation) kernel and $\mathbf{x}(\cdot)$ maps pixel indices to sky coordinates via the WCS.

The ideal resampling kernel is the sinc function (perfect band-limited interpolation):

$$R_\text{ideal}(u) = \text{sinc}(u) = \frac{\sin(\pi u)}{\pi u}$$

In practice, this is approximated by:

Lanczos-$a$ kernel: $$R_\text{Lanczos}(u) = \begin{cases} \text{sinc}(u) \cdot \text{sinc}(u/a) & |u| < a \\ 0 & \text{otherwise} \end{cases}$$

with $a = 3$ as the standard choice (Lanczos-3).
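The kernel and its partition-of-unity behaviour (Section 3.4) are easy to check numerically. A sketch, noting that the raw Lanczos-3 weight sum deviates slightly from 1, which is why implementations commonly renormalise the weights at each output position:

```python
import numpy as np

def lanczos(u, a=3):
    """Lanczos-a kernel: sinc(u) * sinc(u/a) on |u| < a, zero outside.
    np.sinc already includes the pi factors: sinc(x) = sin(pi x)/(pi x)."""
    u = np.asarray(u, float)
    return np.where(np.abs(u) < a, np.sinc(u) * np.sinc(u / a), 0.0)

# Partition-of-unity check at an arbitrary sub-pixel phase u = 0.37.
m = np.arange(-10, 11)
raw = lanczos(0.37 - m)
print(raw.sum())            # close to, but not exactly, 1
norm = raw / raw.sum()      # renormalised weights: exact flux conservation
```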

3.3 Aliasing from Under-Sampled Images

An image is under-sampled (aliased) when $\delta_k > \text{FWHM}(P_k) / 2$. For OpenAstro, a telescope with pixel scale $\delta_k = 3$"/px and native PSF FWHM = 2" is severely under-sampled ($\delta_k / \text{FWHM} = 1.5$, where Nyquist requires $\leq 0.5$).

Under-sampled images contain aliased power: high-frequency sky structure is folded into low spatial frequencies. No interpolation kernel can undo this -- the information is irreversibly lost at the sampling stage.

However, after PSF homogenisation to $P_\text{target}$ with FWHM $\gg 2\delta_k$, the situation is different. The homogenisation convolution suppresses the high-frequency content that would alias, making the post-homogenisation image band-limited within the Nyquist frequency of the native grid.

Requirement: PSF homogenisation must be performed on the native pixel grid before resampling to the output grid. The order of operations is critical:

  1. Homogenise: $I_k^\text{hom} = I_k \ast K_k$ (on native grid)
  2. Verify: $\text{FWHM}(P_\text{target}) > 2\delta_k$ (post-homogenisation Nyquist satisfied)
  3. Resample: $I_k^\text{res} = \text{resample}(I_k^\text{hom}, \delta_k \to \delta_\text{out})$

If step 2 is not satisfied (the target PSF is narrower than $2\delta_k$), that image must either be excluded from the stack or explicitly anti-alias filtered before resampling:

$$I_k^\text{AA} = I_k \ast A_k, \quad \tilde{A}_k(\mathbf{u}) = \begin{cases} 1 & |\mathbf{u}| < 1/(2\delta_k) \\ 0 & \text{otherwise} \end{cases}$$

This is a low-pass filter at the Nyquist frequency of the native grid, equivalent to forcing the image to be band-limited before resampling.

Is Lanczos interpolation sufficient? For post-homogenisation images satisfying $\text{FWHM}(P_\text{target}) > 2\delta_k$: yes, Lanczos-3 introduces $< 0.1\%$ flux error and negligible ringing. For images that violate Nyquist: no, explicit anti-aliasing is required first.
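The order-of-operations requirement can be enforced with a small guard function. A sketch; `prepare_for_coadd` and its caller-supplied `resample` argument are hypothetical names, and the homogenisation uses FFT convolution on the native grid:

```python
import numpy as np

def prepare_for_coadd(I_k, K_k, fwhm_target, delta_k, delta_out, resample):
    """Steps 1-3 of Section 3.3: homogenise on the native grid, verify the
    post-homogenisation Nyquist condition, then resample."""
    # 1. Homogenise: I_hom = I_k * K_k (K_k centred at the array centre).
    I_hom = np.real(np.fft.ifft2(np.fft.fft2(I_k) *
                                 np.fft.fft2(np.fft.ifftshift(K_k))))
    # 2. Verify: FWHM(P_target) > 2 * delta_k on the native grid.
    if not fwhm_target > 2 * delta_k:
        raise ValueError("target PSF under-sampled on native grid: "
                         "anti-alias filter or exclude this image")
    # 3. Resample to the output grid.
    return resample(I_hom, delta_k, delta_out)
```

The guard makes the failure mode explicit: an image that violates step 2 never reaches the resampler silently.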

3.4 Photometric Fidelity

The resampling kernel must conserve total flux. For kernel $R(u)$:

$$\sum_{m} R(u - m) = 1 \quad \forall u$$

(the partition-of-unity property). Lanczos kernels satisfy this only approximately (deviations up to a few $\times 10^{-3}$ for Lanczos-3 at generic sub-pixel phases), so implementations typically renormalise the weights by their sum at each output position. Bilinear interpolation satisfies it exactly. Nearest-neighbour satisfies it exactly but introduces large resampling noise.

Flux conservation test: For each resampled image, compare the total flux in a 5-FWHM-radius aperture around isolated bright stars before and after resampling. Require agreement to $< 0.1\%$ (1 mmag).

3.5 Noise Properties After Resampling

Resampling with kernel $R$ transforms independent per-pixel noise into correlated noise. If the input noise is white with variance $\sigma_k^2$ per pixel, the output noise has:

Variance per output pixel: $$\text{Var}[I_k^\text{res}[\mathbf{p}]] = \sigma_k^2 \cdot \left(\frac{\delta_k}{\delta_\text{out}}\right)^2 \cdot \sum_{\mathbf{m}} R^2\!\left(\frac{\mathbf{x}(\mathbf{p}) - \mathbf{x}_k(\mathbf{m})}{\delta_k}\right)$$

For Lanczos-3 with $\delta_k / \delta_\text{out} = 1$: $\sum R^2 \approx 1.02$ (near unity). For $\delta_k / \delta_\text{out} = 3$ (upsampling a coarse image to fine grid): $\sum R^2 \approx 0.11$, and the factor $(\delta_k / \delta_\text{out})^2 = 9$ gives $\text{Var} \approx 0.99 \sigma_k^2$ -- variance per output pixel is approximately conserved, but the noise is now correlated.

Noise correlation function: $$\rho_k(\Delta\mathbf{p}) = \frac{\sum_\mathbf{m} R(\mathbf{u}_0 - \mathbf{m}) \cdot R(\mathbf{u}_0 + \Delta\mathbf{p} \cdot \delta_\text{out}/\delta_k - \mathbf{m})}{\sum_\mathbf{m} R^2(\mathbf{u}_0 - \mathbf{m})}$$

where $\Delta\mathbf{p}$ is the pixel lag in the output grid. For an upsampled image ($\delta_k > \delta_\text{out}$), the correlation extends over $\sim \delta_k / \delta_\text{out}$ output pixels.
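The qualitative behaviour (upsampling correlates the noise) can be demonstrated in one dimension by resampling white noise and measuring the lag-1 autocorrelation. A Monte-Carlo sketch; the specific numbers depend on sub-pixel phase and kernel details:

```python
import numpy as np

def lanczos(u, a=3):
    u = np.asarray(u, float)
    return np.where(np.abs(u) < a, np.sinc(u) * np.sinc(u / a), 0.0)

def resampled_noise(r, n_native=4096, a=3, seed=1):
    """1-D white noise on a native grid, Lanczos-a resampled onto an
    output grid r = delta_k / delta_out times finer."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=n_native)
    n_out = int((n_native - 2 * a - 2) * r)
    x = (np.arange(n_out) + 0.5) / r + a   # output positions, native pixels
    m0 = np.floor(x).astype(int)
    out = np.zeros(n_out)
    for j in range(-a, a + 1):             # sum over contributing native pixels
        out += noise[m0 + j] * lanczos(x - (m0 + j), a)
    return out

def lag1_corr(sig):
    s = sig - sig.mean()
    return float(np.dot(s[:-1], s[1:]) / np.dot(s, s))

rho_same = lag1_corr(resampled_noise(r=1))  # native scale = output scale
rho_up3  = lag1_corr(resampled_noise(r=3))  # coarse image upsampled 3x
print(rho_same, rho_up3)                    # upsampling correlates strongly
```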


4. Flux Calibration with Heterogeneous Reference Stars

4.1 The Zero-Point Model

Each calibrated image $k$ has an instrumental zero-point $Z_k$ such that for any source $j$ with standard magnitude $m_j$:

$$m_j = Z_k - 2.5 \log_{10} F_{k,j} + \epsilon_k \cdot c_j + k_{\text{ext},k} \cdot X_{k,j}$$

where $F_{k,j}$ is the instrumental flux (ADU/s) of source $j$ in image $k$, $\epsilon_k$ is the color term, $c_j = (G_\text{BP} - G_\text{RP})_j$ is the Gaia color, $k_{\text{ext},k}$ is the atmospheric extinction coefficient, and $X_{k,j}$ is the airmass of the observation.

4.2 The Heterogeneous Calibration Star Problem

The set of usable calibration stars $\mathcal{S}_k$ differs between images because:

  1. Saturation: Stars with $G < G_{\text{sat},k}$ are saturated in image $k$. The saturation limit depends on aperture and exposure time: $G_{\text{sat},k} \approx -2.5 \log_{10}(\text{FWC}_k / (g_k \cdot t_k))$ where FWC is the full-well capacity. For a 300mm f/4 with 60s exposure and gain 1: $G_\text{sat} \approx 11$. For an 80mm f/5 with 30s: $G_\text{sat} \approx 9$.

  2. Detection limit: Stars with $G > G_{\text{lim},k}$ are undetected in image $k$. The $5\sigma$ detection limit scales as $G_{\text{lim},k} \approx Z_k - 2.5 \log_{10}(5\sigma_k)$. Typical: $G_\text{lim} \approx 16$--18 for 80mm, 18--20 for 300mm scopes.

  3. Overlap set: The calibration stars usable for image $k$ are $\mathcal{S}_k = \{ j : G_{\text{sat},k} < G_j < G_{\text{lim},k} \}$. Two images $k$ and $k'$ may have $\mathcal{S}_k \cap \mathcal{S}_{k'} = \emptyset$ in the worst case (no common calibration stars).

Does this introduce systematic flux offsets? Yes, potentially. If image $k$ is calibrated against stars with $14 < G < 18$ and image $k'$ against stars with $10 < G < 14$, the two zero-points are anchored to different stellar populations. Systematic offsets arise from:

  • Color distribution differences: the faint calibration set is redder on average (more M-dwarfs)
  • Polynomial color correction residuals: the Gaia-to-standard polynomial has $\sim$10--20 mmag residuals that are color-dependent
  • Crowding/blending: faint stars near the detection limit are more susceptible to contamination from unresolved neighbours

Expected systematic offset: 5--20 mmag if zero-points are fit independently. This is unacceptable for co-addition targeting $< 5$ mmag photometric accuracy.

4.3 Ensemble Calibration: Simultaneous Zero-Point Solution

The solution is to fit all zero-points simultaneously using all available calibration stars across all images, enforcing that the same star has the same standard magnitude regardless of which image it appears in.

Linear model. Let $m_{\text{inst},k,j}$ be the instrumental magnitude of star $j$ in image $k$, measured only when $j \in \mathcal{S}_k$. The model is:

$$m_{\text{inst},k,j} = M_j - Z_k - \epsilon_k c_j - k_{\text{ext},k} X_{k,j} + \nu_{k,j}$$

where $M_j$ is the true standard magnitude of star $j$, and $\nu_{k,j}$ is the measurement noise with variance $\sigma_{k,j}^2$.

Stack the observations into a vector $\mathbf{d}$ of length $N_\text{obs} = \sum_k |\mathcal{S}_k|$, and the unknowns into a vector $\boldsymbol{\theta} = (M_1, \ldots, M_J, Z_1, \ldots, Z_K, \epsilon_1, \ldots, \epsilon_K, k_{\text{ext},1}, \ldots, k_{\text{ext},K})^T$.

The design matrix $\mathbf{A}$ has one row per observation $(k, j)$:

$$A_{(k,j), M_j} = +1, \quad A_{(k,j), Z_k} = -1, \quad A_{(k,j), \epsilon_k} = -c_j, \quad A_{(k,j), k_{\text{ext},k}} = -X_{k,j}$$

All other entries are zero.

Prior on $M_j$ from Gaia: For stars with Gaia photometry, impose a Gaussian prior $M_j \sim \mathcal{N}(\hat{M}_j^\text{Gaia}, \sigma_{\text{Gaia},j}^2)$ where $\hat{M}_j^\text{Gaia}$ is the Gaia-derived standard magnitude and $\sigma_{\text{Gaia},j}$ is its uncertainty (typically 1--10 mmag for $G < 16$).

Weighted least-squares solution:

$$\hat{\boldsymbol{\theta}} = \left(\mathbf{A}^T \mathbf{W} \mathbf{A} + \mathbf{\Lambda}\right)^{-1} \left(\mathbf{A}^T \mathbf{W} \mathbf{d} + \mathbf{\Lambda} \boldsymbol{\mu}\right)$$

where $\mathbf{W} = \text{diag}(1/\sigma_{k,j}^2)$ is the observation weight matrix, $\mathbf{\Lambda}$ is the prior precision matrix (diagonal, with entries $1/\sigma_{\text{Gaia},j}^2$ for the $M_j$ block and zero for the instrument parameters), and $\boldsymbol{\mu}$ is the prior mean vector ($\hat{M}_j^\text{Gaia}$ for the $M_j$ block, zero elsewhere).

Degeneracy: Without the Gaia prior, there is a global degeneracy: shifting all $M_j$ by $\Delta$ and all $Z_k$ by $\Delta$ leaves the residuals unchanged. The Gaia prior breaks this degeneracy. In the limit of very precise Gaia data ($\sigma_\text{Gaia} \to 0$), the $M_j$ are fixed and the problem reduces to fitting $Z_k$, $\epsilon_k$, $k_{\text{ext},k}$ independently per image (the standard approach). The ensemble method is most valuable when Gaia uncertainties are comparable to instrumental uncertainties (faint stars, sparse fields).

Computational cost: The system has $J + 3K$ unknowns. For 50 calibration stars and 20 images: 110 unknowns, $\sim$500 observations. Solvable by direct Cholesky factorisation in $< 1$ ms.
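The design-matrix construction and prior-regularised solve above fit in a few lines. A sketch of the linear model with the color and extinction terms dropped for brevity (unknowns reduce to $M_j$ and $Z_k$; all numbers in the demo are hypothetical):

```python
import numpy as np

def ensemble_zeropoints(obs, n_stars, n_imgs, gaia_mean, gaia_sigma):
    """Simultaneous WLS fit of (M_j, Z_k) with Gaussian Gaia priors on M_j.
    obs: iterable of (k, j, m_inst, sigma) tuples; model m_inst = M_j - Z_k."""
    n = n_stars + n_imgs
    obs = list(obs)
    A = np.zeros((len(obs), n))
    d = np.zeros(len(obs))
    w = np.zeros(len(obs))
    for i, (k, j, m, s) in enumerate(obs):
        A[i, j] = 1.0              # +M_j
        A[i, n_stars + k] = -1.0   # -Z_k
        d[i] = m
        w[i] = 1.0 / s**2
    AtWA = A.T @ (A * w[:, None])
    AtWd = A.T @ (w * d)
    Lam = np.zeros(n)
    Lam[:n_stars] = 1.0 / gaia_sigma**2    # prior precision on M_j block only
    mu = np.concatenate([np.asarray(gaia_mean, float), np.zeros(n_imgs)])
    theta = np.linalg.solve(AtWA + np.diag(Lam), AtWd + Lam * mu)
    return theta[:n_stars], theta[n_stars:]

# Tiny demo: 3 stars, 2 images, noiseless data, tight Gaia priors.
M_true = np.array([12.0, 14.0, 16.0])
Z_true = np.array([20.0, 22.0])
obs = [(k, j, M_true[j] - Z_true[k], 0.01)
       for k in range(2) for j in range(3)]
M_fit, Z_fit = ensemble_zeropoints(obs, 3, 2, M_true, 0.001)
print(np.round(Z_fit, 3))  # → [20. 22.]
```

Without the `Lam` prior term the normal matrix would be singular (the global magnitude/zero-point degeneracy noted above).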

4.4 Bridging the Calibration Gap

When $\mathcal{S}_k \cap \mathcal{S}_{k'} = \emptyset$ (no common calibration stars between two images), the ensemble solution still works as long as the graph of shared stars is connected. Define the calibration graph: nodes are images, edges connect images sharing $\geq 1$ calibration star. If this graph is connected, all zero-points are determined.

If the graph is disconnected (e.g., a very small 80mm scope shares no stars with a very large 300mm scope), the Gaia priors bridge the gap: both images are independently anchored to the Gaia system. The residual systematic is then bounded by $\sqrt{\sigma_{\text{Gaia},j}^2 + \sigma_{\text{poly}}^2}$ where $\sigma_\text{poly} \sim 10$--20 mmag is the Gaia-to-standard polynomial uncertainty.
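The connectivity check is a straightforward graph traversal. A sketch; `calibration_connected` is an illustrative name, and each entry of `star_sets` is the set $\mathcal{S}_k$ of star IDs usable in image $k$:

```python
def calibration_connected(star_sets):
    """True if the calibration graph (images as nodes, edges between images
    sharing at least one calibration star) is connected."""
    if not star_sets:
        return True
    n = len(star_sets)
    seen = {0}
    frontier = [0]
    while frontier:                      # breadth-first flood fill from image 0
        i = frontier.pop()
        for j in range(n):
            if j not in seen and star_sets[i] & star_sets[j]:
                seen.add(j)
                frontier.append(j)
    return len(seen) == n

# A chain of pairwise overlaps is enough: no single star need appear everywhere.
print(calibration_connected([{1, 2}, {2, 3}, {3, 4}]))  # True
print(calibration_connected([{1}, {2}]))                # False
```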


5. Optimal Weighting for Co-addition

5.1 General Co-addition Formula

After PSF homogenisation, resampling to the common grid (pixel scale $\delta_\text{out}$), and flux calibration (all images on a common zero-point), the co-added image at output pixel $\mathbf{p}$ is:

$$C[\mathbf{p}] = \frac{\sum_{k=1}^{K} w_k[\mathbf{p}] \cdot I_k^\text{cal}[\mathbf{p}]}{\sum_{k=1}^{K} w_k[\mathbf{p}]}$$

where $I_k^\text{cal}[\mathbf{p}]$ is the calibrated, homogenised, resampled image and $w_k[\mathbf{p}]$ is the weight assigned to image $k$ at output pixel $\mathbf{p}$. The weight is zero if pixel $\mathbf{p}$ falls outside the footprint of image $k$.

5.2 Case (a): Maximum SNR on a Point Source (Matched Filter)

For a point source with flux $F_k$ (in the calibrated system) in image $k$, the SNR contribution from image $k$ is:

$$\text{SNR}_k = \frac{F_k \cdot \sum_\mathbf{p} P_\text{target}[\mathbf{p}]^2 / \sigma_k^2[\mathbf{p}]}{\sqrt{\sum_\mathbf{p} P_\text{target}[\mathbf{p}]^2 / \sigma_k^2[\mathbf{p}]}}$$

For uniform noise per image ($\sigma_k[\mathbf{p}] = \sigma_k$ for all $\mathbf{p}$), the matched-filter optimal weight is:

$$w_k^\text{(MF)} \propto \frac{F_k^2}{\sigma_k^2} \cdot P_\text{target}[\mathbf{p}]$$

Since $P_\text{target}$ is the same for all images (after homogenisation), the PSF factor cancels in the ratio of weights:

$$\boxed{w_k^\text{(MF)} = \frac{F_k^2}{\sigma_k^2}}$$

where $F_k$ is the point-source flux in image $k$ (proportional to $g_k$) and $\sigma_k^2$ is the per-pixel noise variance.

For photon-noise dominated images: $\sigma_k^2 \propto F_k$, so $w_k^\text{(MF)} \propto F_k \propto D_k^2 t_k \eta_k$.

For scintillation-dominated images: $\sigma_k^2 = \epsilon_{\text{scint},k}^2 F_k^2 \propto D_k^{-4/3} F_k^2$, so $w_k^\text{(MF)} \propto D_k^{4/3}$, which is exactly the $D^{4/3}$ weighting derived in the SNR theory document.

5.3 Case (b): Minimum Noise on Extended Structure (Inverse-Variance Weighting)

For extended emission filling many pixels, the optimal co-addition weight minimises the per-pixel noise variance in the output:

$$\text{Var}[C[\mathbf{p}]] = \frac{\sum_k w_k^2 \sigma_k^2}{(\sum_k w_k)^2}$$

Minimising this with respect to $w_k$ gives:

$$\boxed{w_k^\text{(IVW)} = \frac{1}{\sigma_k^2}}$$

This is the standard inverse-variance weighting. It differs from the matched-filter weight by a factor of $F_k^2$: matched-filter weighting up-weights brighter images (larger apertures), while inverse-variance weighting treats all images equally per unit noise.

5.4 Case (c): Minimum PSF in the Co-add

It is tempting to seek weights that produce a narrower effective PSF in the co-add. However, after PSF homogenisation, all images share the same $P_\text{target}$. The effective PSF of the co-add is:

$$P_\text{eff}[\mathbf{x}] = \frac{\sum_k w_k P_\text{target}[\mathbf{x}]}{\sum_k w_k} = P_\text{target}[\mathbf{x}]$$

regardless of the weights. Therefore, no linear weighting scheme can produce a co-add PSF narrower than $P_\text{target}$.

To achieve a narrower effective PSF in the co-add, one must perform deconvolution (a nonlinear operation) on the co-added image, or use the multi-stack approach (Section 2.3) where the high-resolution stack has a narrower $P_\text{target}$.

5.5 Reconciling Scintillation and Photon Weights

The matched-filter weight in the general case (both photon and scintillation noise) is:

$$w_k = \frac{F_k^2}{\sigma_{\text{phot},k}^2 + \sigma_{\text{scint},k}^2} = \frac{F_k^2}{F_k + \epsilon_{\text{scint},k}^2 F_k^2}$$

Dividing numerator and denominator by $F_k^2$:

$$w_k = \frac{1}{F_k^{-1} + \epsilon_{\text{scint},k}^2} = \frac{1}{F_k^{-1} + C_0^2 D_k^{-4/3}}$$

In the photon-dominated limit ($F_k^{-1} \gg C_0^2 D_k^{-4/3}$, i.e., faint sources):

$$w_k \to F_k \propto D_k^2 t_k \eta_k$$

In the scintillation-dominated limit ($C_0^2 D_k^{-4/3} \gg F_k^{-1}$, i.e., bright sources):

$$w_k \to \frac{D_k^{4/3}}{C_0^2}$$

The crossover occurs at source flux $F_k^\ast = D_k^{4/3} / C_0^2$. For a 200mm telescope at a typical site ($C_0 \approx 3 \times 10^{-3}$): $F_k^\ast \approx 200^{4/3} / (9 \times 10^{-6}) \approx 1.3 \times 10^8$ ADU/s, corresponding to $V \approx 10$--11 (consistent with the SNR theory estimate of the scintillation saturation regime).

The two weighting schemes agree for all sources when a single noise model is used. The $D^{4/3}$ rule is a special case of matched-filter weighting in the scintillation limit. For a heterogeneous network observing targets of varying brightness, use the full expression $w_k = F_k^2 / \sigma_k^2$ with $\sigma_k^2$ computed from the measured per-image noise (which automatically captures the dominant noise regime).
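The full weight expression and its two limits can be checked numerically, using the document's $C_0 \approx 3 \times 10^{-3}$ (a sketch; units follow Section 5.5's convention, with $F_k$ in ADU/s and $D_k$ in mm):

```python
def matched_filter_weight(F_k, D_k, C0=3e-3):
    """w_k = 1 / (1/F_k + C0^2 * D_k^(-4/3)); photon + scintillation
    noise only, as in Section 5.5."""
    return 1.0 / (1.0 / F_k + C0**2 * D_k ** (-4.0 / 3.0))

# Faint source on a 200mm scope: photon-dominated, so w_k ~ F_k.
w_faint = matched_filter_weight(F_k=1e3, D_k=200.0)
# Very bright source: scintillation-dominated, so w_k ~ D^(4/3)/C0^2 ~ 1.3e8.
w_bright = matched_filter_weight(F_k=1e12, D_k=200.0)
print(w_faint, w_bright)
```

The crossover flux $F_k^\ast = D_k^{4/3} / C_0^2$ is simply where the two denominator terms are equal.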


6. Noise Propagation in the Co-add

6.1 Variance of the Co-added Pixel

From the co-addition formula in Section 5.1, the variance at output pixel $\mathbf{p}$ is:

$$\text{Var}[C[\mathbf{p}]] = \frac{\sum_{k} w_k^2 \cdot \text{Var}[I_k^\text{cal}[\mathbf{p}]]}{(\sum_k w_k)^2}$$

This assumes the noise between different images is uncorrelated (valid for geographically distributed telescopes; see SNR theory, Section 4.3 on scintillation independence).

6.2 Noise Correlation After Resampling

After resampling image $k$ from native pixel scale $\delta_k$ to output scale $\delta_\text{out}$ using kernel $R$, the noise covariance between output pixels $\mathbf{p}$ and $\mathbf{p}'$ is:

$$\text{Cov}[I_k^\text{res}[\mathbf{p}], I_k^\text{res}[\mathbf{p}']] = \sigma_k^2 \left(\frac{\delta_k}{\delta_\text{out}}\right)^2 \sum_{\mathbf{m}} R\!\left(\frac{\mathbf{x}(\mathbf{p}) - \mathbf{x}_k(\mathbf{m})}{\delta_k}\right) R\!\left(\frac{\mathbf{x}(\mathbf{p}') - \mathbf{x}_k(\mathbf{m})}{\delta_k}\right)$$

Define the noise correlation kernel of image $k$ in the output grid as:

$$\Phi_k[\Delta\mathbf{p}] = \frac{\text{Cov}[I_k^\text{res}[\mathbf{p}], I_k^\text{res}[\mathbf{p} + \Delta\mathbf{p}]]}{\text{Var}[I_k^\text{res}[\mathbf{p}]]}$$

For Lanczos-3 resampling with ratio $r = \delta_k / \delta_\text{out}$:

  • $r = 1$ (same scale): $\Phi_k$ is nearly a delta function (correlation length $< 1$ output pixel)
  • $r = 2$: $\Phi_k$ has correlation length $\sim 2$ output pixels, with $\Phi_k[\pm 1] \approx 0.3$
  • $r = 3$: $\Phi_k$ has correlation length $\sim 3$ output pixels, with $\Phi_k[\pm 1] \approx 0.5$, $\Phi_k[\pm 2] \approx 0.15$

The PSF homogenisation step introduces additional correlation: convolution with kernel $K_k$ correlates the noise over a scale $\sim \text{FWHM}(K_k) / \delta_k$ native pixels, or equivalently $\sim \text{FWHM}(K_k) / \delta_\text{out}$ output pixels.

6.3 The Effective Correlation Kernel

The combined correlation from homogenisation (kernel $K_k$) and resampling (kernel $R$) is the autocorrelation of the composite kernel $K_k \ast R$ evaluated on the output grid:

$$\Phi_k^\text{total}[\Delta\mathbf{p}] = \frac{\left[(K_k \star K_k) \ast (R \star R)\right]\big|_{\Delta\mathbf{p} \cdot \delta_\text{out}}}{\left[(K_k \star K_k) \ast (R \star R)\right]\big|_{\mathbf{0}}}$$

where $\star$ denotes autocorrelation: $(f \star f)(\mathbf{x}) = \int f(\mathbf{y}) f(\mathbf{y} + \mathbf{x}) \, d^2\mathbf{y}$.

6.4 Impact on Aperture Photometry

When performing aperture photometry on the co-add, the flux is summed over $N_\text{ap}$ output pixels within a circular aperture:

$$F_\text{ap} = \sum_{\mathbf{p} \in \text{aperture}} C[\mathbf{p}]$$

Naive variance estimate (ignoring correlations): $$\text{Var}_\text{naive}[F_\text{ap}] = \sum_{\mathbf{p} \in \text{aperture}} \text{Var}[C[\mathbf{p}]] = N_\text{ap} \cdot \bar{\sigma}_C^2$$

True variance (including correlations): $$\text{Var}_\text{true}[F_\text{ap}] = \sum_{\mathbf{p}, \mathbf{p}' \in \text{aperture}} \text{Cov}[C[\mathbf{p}], C[\mathbf{p}']]$$

Define the noise correlation factor $\beta$ as:

$$\beta = \frac{\text{Var}_\text{true}[F_\text{ap}]}{\text{Var}_\text{naive}[F_\text{ap}]} = 1 + \frac{1}{N_\text{ap}} \sum_{\mathbf{p} \neq \mathbf{p}'} \Phi_C[\mathbf{p} - \mathbf{p}']$$

where $\Phi_C$ is the noise correlation function of the co-add:

$$\Phi_C[\Delta\mathbf{p}] = \frac{\sum_k w_k^2 \text{Var}_k \cdot \Phi_k^\text{total}[\Delta\mathbf{p}]}{\sum_k w_k^2 \text{Var}_k}$$

Typical values of $\beta$:

  • Co-add of well-sampled images ($\delta_k \leq \delta_\text{out}$, minimal homogenisation): $\beta \approx 1.0$--1.1
  • Co-add dominated by upsampled coarse images ($\delta_k = 3\delta_\text{out}$): $\beta \approx 1.5$--2.5
  • Co-add with aggressive PSF homogenisation (kernel FWHM $\sim 5$ output pixels): $\beta \approx 2.0$--4.0

Naively summing pixel variances underestimates the true photometric noise by a factor of $\sqrt{\beta}$, which can be a factor of $\sim$1.5--2 for typical OpenAstro co-adds. This must be corrected in all downstream photometric uncertainty estimates.

6.5 Practical Measurement of $\beta$

Rather than computing $\beta$ analytically, measure it empirically:

  1. Place apertures on empty-sky regions of the co-add (no sources)
  2. Compute the variance of the aperture sums across many positions
  3. Compare to the sum of the per-pixel variance map values within the same apertures
  4. The ratio is $\beta$

This captures all sources of correlation (resampling, homogenisation, flat-field residuals) without requiring analytic modeling of each.
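The four-step recipe above translates directly into code. A sketch, assuming `coadd` is source-free (or has sources masked to NaN) and `var_map` is the per-pixel variance map before any $\beta$ correction:

```python
import numpy as np

def measure_beta(coadd, var_map, radius=5, n_apertures=500, seed=7):
    """Empirical noise correlation factor (Section 6.5): variance of
    empty-sky aperture sums divided by the summed per-pixel variances."""
    rng = np.random.default_rng(seed)
    ny, nx = coadd.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (yy**2 + xx**2) <= radius**2       # circular aperture mask
    sums, naive = [], []
    for _ in range(n_apertures):
        cy = rng.integers(radius, ny - radius)
        cx = rng.integers(radius, nx - radius)
        patch = coadd[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
        vpatch = var_map[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
        if np.isnan(patch[disk]).any():       # skip masked (source) apertures
            continue
        sums.append(patch[disk].sum())
        naive.append(vpatch[disk].sum())
    return float(np.var(sums) / np.mean(naive))

# White noise: beta ~ 1. Correlated (3x3 box-smoothed) noise: beta >> 1.
rng = np.random.default_rng(3)
white = rng.normal(size=(256, 256))
print(measure_beta(white, np.ones((256, 256))))
```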

6.6 FITS Header Requirements for the Co-add

The following keywords must be written to the co-add FITS header for downstream pipelines to compute correct photometric uncertainties:

| Keyword | Type | Description |
| --- | --- | --- |
| NCOADD | int | Number of input images in the co-add |
| EFFTIME | float | Effective exposure time at field centre (seconds) |
| PSFHFWHM | float | FWHM of the homogenised PSF $P_\text{target}$ (arcsec) |
| PIXSCALE | float | Output pixel scale $\delta_\text{out}$ (arcsec/pixel) |
| NCORFAC | float | Noise correlation factor $\beta$ (dimensionless) |
| NCORRAD | float | Noise correlation length (output pixels) |
| BUNIT | str | Physical units of pixel values (e.g., 'ADU/s') |
| PHOTZP | float | Photometric zero-point of the co-add (mag) |
| PHOTZPE | float | Uncertainty in PHOTZP (mag) |
| SATURATE | float | Saturation level in the co-add (same units as BUNIT) |
| GAIN_EFF | float | Effective gain for Poisson noise computation (e$^-$/ADU) |
| WHTTYPE | str | Weighting scheme used ('MATCHED_FILTER', 'INV_VARIANCE', 'SCINTILLATION') |
| COVMAP | str | Filename of the coverage/weight map HDU extension |

Additionally, store the per-pixel variance map and the coverage map as separate HDU extensions in the same FITS file. The variance map should include the correlation correction (multiply by $\beta$) or store $\beta$ separately and let downstream tools apply it.
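A minimal representation of the required cards, with a completeness check (all values below are illustrative placeholders, not pipeline outputs). With astropy, each triple maps onto a header card via `header[key] = (value, comment)`:

```python
REQUIRED_COADD_KEYS = {
    "NCOADD", "EFFTIME", "PSFHFWHM", "PIXSCALE", "NCORFAC", "NCORRAD",
    "BUNIT", "PHOTZP", "PHOTZPE", "SATURATE", "GAIN_EFF", "WHTTYPE", "COVMAP",
}

# (keyword, value, comment) triples; values here are illustrative only.
coadd_header = [
    ("NCOADD",   10,               "Number of input images in the co-add"),
    ("EFFTIME",  600.0,            "Effective exposure time at field centre (s)"),
    ("PSFHFWHM", 2.75,             "FWHM of homogenised PSF (arcsec)"),
    ("PIXSCALE", 1.0,              "Output pixel scale (arcsec/pixel)"),
    ("NCORFAC",  1.8,              "Noise correlation factor beta"),
    ("NCORRAD",  2.5,              "Noise correlation length (output pixels)"),
    ("BUNIT",    "ADU/s",          "Physical units of pixel values"),
    ("PHOTZP",   25.12,            "Photometric zero-point (mag)"),
    ("PHOTZPE",  0.004,            "Uncertainty in PHOTZP (mag)"),
    ("SATURATE", 58000.0,          "Saturation level (BUNIT units)"),
    ("GAIN_EFF", 1.4,              "Effective gain (e-/ADU)"),
    ("WHTTYPE",  "MATCHED_FILTER", "Weighting scheme"),
    ("COVMAP",   "coadd_cov.fits", "Coverage/weight map extension"),
]

def validate_coadd_header(cards):
    """Return the set of required keywords missing from a header card list."""
    return REQUIRED_COADD_KEYS - {key for key, _, _ in cards}
```

All keyword names fit the FITS 8-character limit, so no HIERARCH convention is needed.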


7. Coverage Map and Effective Exposure Time

7.1 Definitions

The coverage map (unnormalised weight sum) at output pixel $\mathbf{p}$ is:

$$\text{COV}[\mathbf{p}] = \sum_{k=1}^{K} w_k[\mathbf{p}]$$

The normalised weight map (used for the weighted mean) is:

$$W_\text{norm}[\mathbf{p}] = \frac{\text{COV}[\mathbf{p}]}{\text{median}(\text{COV})}$$

The effective exposure time maps the coverage to an equivalent single-telescope exposure:

$$t_\text{eff}[\mathbf{p}] = \frac{\text{COV}[\mathbf{p}]}{\text{median}(\text{COV})} \cdot t_\text{eff,centre}$$

where $t_\text{eff,centre}$ is the effective exposure time at the field centre, defined by the condition that the co-add noise at the field centre equals the noise of a single image with exposure $t_\text{eff,centre}$.
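
The mapping from coverage to effective exposure time is a one-liner; the sketch below assumes, as in the normalisation above, that the field-centre coverage equals the median coverage:

```python
import numpy as np

def effective_exposure_map(cov, t_eff_centre):
    """Per-pixel effective exposure time from the coverage map.

    cov: 2D coverage map (sum of weights), t_eff_centre: seconds.
    Assumes median(COV) corresponds to the field centre.
    """
    cov = np.asarray(cov, dtype=float)
    return cov / np.median(cov) * t_eff_centre
```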

7.2 Noise Scaling with Coverage

In the photon-noise limit:

The per-pixel noise in the co-add scales as:

$$\sigma_C[\mathbf{p}] \propto \frac{1}{\sqrt{\text{COV}[\mathbf{p}]}} \propto \frac{1}{\sqrt{t_\text{eff}[\mathbf{p}]}}$$

This is the familiar $1/\sqrt{t}$ scaling. Doubling the coverage halves the noise variance.

In the scintillation limit:

Each image contributes a scintillation noise $\sigma_{\text{scint},k} = \epsilon_{\text{scint},k} \cdot F_k$ that does not decrease with exposure time (for $t > t_\text{sat}$). With $K_\text{eff}[\mathbf{p}]$ images covering pixel $\mathbf{p}$, the co-add scintillation noise is:

$$\sigma_{C,\text{scint}}[\mathbf{p}] = \frac{\sqrt{\sum_k w_k^2 \sigma_{\text{scint},k}^2}}{\sum_k w_k}$$

For equal weights and identical telescopes: $\sigma_{C,\text{scint}} \propto 1/\sqrt{K_\text{eff}}$, i.e., the noise scales as $1/\sqrt{N}$ where $N$ is the number of independent images, not $1/\sqrt{t}$.

The crucial distinction: In the photon limit, a single long exposure of $t = N \cdot t_1$ achieves the same noise as $N$ short exposures of $t_1$. In the scintillation limit, a single exposure of $t = N \cdot t_1$ has noise $\propto (Nt_1)^{-1/2}$ (from the $t^{-1/2}$ in the Dravins formula), while $N$ independent exposures of $t_1$ from different telescopes have noise $\propto t_1^{-1/2} / \sqrt{N}$. These are the same scaling -- but only if the scintillation in the long exposure is uncorrelated over the integration time. For a single telescope, temporal correlations in scintillation (coherence time $\sim 10$--100 ms) mean the effective number of independent realisations in a $t$-second exposure is $\sim t / \tau_\text{scint}$, giving the $t^{-1/2}$ scaling.

For the distributed network, the $N$ images represent $N$ truly independent atmospheric realisations (different sites), so the $1/\sqrt{N}$ scaling is exact. Therefore:

$$\text{Noise in co-add} \propto \begin{cases} 1/\sqrt{t_\text{eff}} & \text{photon-noise limit} \\ 1/\sqrt{K_\text{eff}} & \text{scintillation limit (same exposure time per image)} \end{cases}$$

The effective exposure time $t_\text{eff}$ does not capture the scintillation benefit of having more independent telescopes. For scintillation-limited science, report $K_\text{eff}[\mathbf{p}]$ (the effective number of independent images) as the primary depth indicator, not $t_\text{eff}$.
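
A sketch of computing $K_\text{eff}[\mathbf{p}]$ from the per-image weight maps. The text does not give an explicit formula, so this assumes the standard effective-sample-size definition $(\sum_k w_k)^2 / \sum_k w_k^2$, which equals the image count for equal weights and downweights unevenly weighted contributions:

```python
import numpy as np

def k_eff(weight_maps):
    """Effective number of independent images per output pixel.

    weight_maps: list of 2D weight arrays w_k[p] on the output grid.
    Assumed definition: (sum w)^2 / sum(w^2), per pixel.
    """
    W = np.stack(weight_maps)
    return W.sum(axis=0) ** 2 / (W**2).sum(axis=0)
```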


8. Effective PSF of the Co-add

8.1 Spatially Varying Effective PSF

After PSF homogenisation, the effective PSF of the co-add at position $\mathbf{p}$ is:

$$P_\text{eff}[\mathbf{x}; \mathbf{p}] = \frac{\sum_k w_k[\mathbf{p}] \cdot P_k^\text{hom}[\mathbf{x}]}{\sum_k w_k[\mathbf{p}]}$$

If all images were perfectly homogenised to $P_\text{target}$, then $P_k^\text{hom} = P_\text{target}$ for all $k$ and $P_\text{eff} = P_\text{target}$ everywhere.

In practice, $P_\text{eff}$ varies across the field because:

  1. Incomplete coverage: Some images do not cover the field edges, so $w_k[\mathbf{p}] = 0$ for those images at those positions. If the excluded images have systematically different PSFs (e.g., the narrow-field, high-resolution images cover only the centre), $P_\text{eff}$ will be narrower at the centre and broader at the edges.

  2. Imperfect homogenisation: PSF homogenisation kernels are computed from isolated stars in each image, which may not perfectly model the PSF variation across the field (due to optical aberrations, tracking errors, etc.). Residual PSF differences at the $\sim$5--10% level are typical.

  3. Weight variation: If the weight map has spatial structure (e.g., from vignetting, chip gaps, or masked cosmic rays), $P_\text{eff}$ inherits that structure.

8.2 Empirical Measurement of $P_\text{eff}$

Measure $P_\text{eff}$ directly from stars in the co-add:

  1. Select isolated, unsaturated stars across the co-add field (typically 20--100 stars, spanning the full field)
  2. Extract a postage stamp around each star (size $\geq 5 \times$ FWHM)
  3. Fit a 2D model (Gaussian, Moffat, or empirical) to each star's profile
  4. Map the FWHM, ellipticity, and position angle as functions of field position
  5. Interpolate to produce $P_\text{eff}[\mathbf{x}; \mathbf{p}]$ at any field position

For Moffat profiles: $P(r) = P_0 (1 + (r/\alpha)^2)^{-\beta}$, where FWHM $= 2\alpha\sqrt{2^{1/\beta} - 1}$. The Moffat model captures the extended wings better than a Gaussian.

8.3 Impact of PSF Variation on Aperture Photometry

If aperture photometry uses a fixed aperture radius $r_\text{ap}$ across the field, but $P_\text{eff}$ varies, the fraction of flux captured within the aperture varies:

$$f(\mathbf{p}) = \int_0^{r_\text{ap}} P_\text{eff}(r; \mathbf{p}) \cdot 2\pi r \, dr$$

The photometric bias from ignoring PSF variation is:

$$\Delta m(\mathbf{p}) = -2.5 \log_{10}\!\left(\frac{f(\mathbf{p})}{f(\mathbf{p}_\text{ref})}\right)$$

where $\mathbf{p}_\text{ref}$ is a reference position (e.g., field centre) where the aperture correction was calibrated.

Correction formula: For each source at position $\mathbf{p}$ in the co-add:

$$m_\text{corrected}(\mathbf{p}) = m_\text{measured}(\mathbf{p}) + 2.5 \log_{10}\!\left(\frac{f(\mathbf{p})}{f(\mathbf{p}_\text{ref})}\right)$$

For a Gaussian PSF, the flux fraction within radius $r_\text{ap}$ is $f = 1 - \exp(-4\ln 2 \cdot u^2)$ with $u = r_\text{ap}/\text{FWHM}$. Differentiating with respect to FWHM gives, to first order:

$$\frac{\Delta f}{f} \approx 8\ln 2 \cdot u^2 \, e^{-4\ln 2 \cdot u^2} \cdot \frac{\Delta\text{FWHM}}{\text{FWHM}}$$

For $r_\text{ap} = 1.5 \times \text{FWHM}_\text{ref}$ ($u = 1.5$) and a $\pm 10\%$ FWHM variation:

$$\Delta f / f \approx 12.5 \times 0.0020 \times 0.1 \approx 2.4 \times 10^{-3}$$

i.e. $\Delta m \approx 2.6$ mmag. For a tighter aperture of $r_\text{ap} = \text{FWHM}_\text{ref}$ ($u = 1$), the bias grows to $\approx 38$ mmag. Either level is significant for precision photometry; the correction is essential.
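
The bias can also be computed exactly (without linearising in $\Delta\text{FWHM}$) from the Gaussian aperture fraction. The helper below is illustrative; for a 10% FWHM change the exact bias is somewhat larger than a first-order estimate:

```python
import math

def aperture_fraction(u):
    """Fraction of a Gaussian PSF's total flux within radius u * FWHM."""
    return 1.0 - math.exp(-4.0 * math.log(2.0) * u * u)

def bias_mmag(u, dfwhm_frac):
    """Photometric bias (mmag) at fixed aperture when FWHM grows by dfwhm_frac.

    u = r_ap / FWHM_ref. A broader PSF shrinks the effective u, so less
    flux falls inside the aperture and the source measures too faint.
    """
    f_ref = aperture_fraction(u)
    f_new = aperture_fraction(u / (1.0 + dfwhm_frac))
    return -2.5 * math.log10(f_new / f_ref) * 1000.0
```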

Best practice: Use PSF-fitting photometry (DAOPHOT/photutils PSF photometry) on the co-add, with a spatially varying PSF model. This automatically accounts for PSF variation and eliminates the aperture correction entirely.


9. Implementation Path

9.1 Software Stack

| Component | Tool | Role |
|------------|------|------|
| FITS I/O | astropy.io.fits | Read/write FITS files and headers |
| WCS handling | astropy.wcs | Parse and transform World Coordinate System headers |
| Reprojection | reproject (Robitaille et al. 2020) | WCS-aware resampling to common grid; supports Lanczos, bilinear, and flux-conserving methods |
| PSF measurement | photutils.psf + astropy.modeling | Fit 2D PSF models (Gaussian, Moffat) to stars in each image |
| PSF homogenisation | Custom (scipy.signal.fftconvolve) | Compute and apply homogenisation kernels $K_k$ via FFT convolution |
| Source extraction | sep (Barbary 2016) or photutils.detection | Detect sources for calibration star matching and quality control |
| Gaia cross-match | astroquery.gaia + local cache | Query Gaia DR3 for calibration stars; cache results per field |
| Ensemble calibration | numpy.linalg.lstsq or scipy.sparse.linalg | Solve the linear system in Section 4.3 |
| Co-addition | Custom weighted mean | Apply weights, compute coverage map, variance map, and co-add |
| Aperture photometry | photutils.aperture or sep | Measure fluxes on the co-add with noise correlation correction |
| Alternative full pipeline | SWarp (Bertin et al. 2002) | Handles reprojection, background subtraction, and co-addition in a single tool; less flexible for custom weighting but much faster for large mosaics |

9.2 Pipeline Execution Order

For each science target field:
  1. Ingest FITS files, validate WCS headers (astrometry.net if missing)
  2. For each image k:
     a. Measure PSF: fit Moffat profile to 10+ isolated stars -> FWHM_k, beta_k
     b. Measure per-pixel noise: sigma_k from background RMS in source-free regions
     c. Compute weight: w_k = F_ref^2 / sigma_k^2 (or D_k^{4/3} as initial estimate)
  3. Choose P_target: FWHM_target = max(FWHM_k) * 1.1
     - If multi-stack: partition into resolution classes, set per-class P_target
  4. For each image k:
     a. Compute homogenisation kernel K_k (Wiener deconvolution in Fourier space)
     b. Apply K_k via FFT convolution (on native grid)
     c. Reproject to common WCS grid using reproject (Lanczos-3)
     d. Flux calibrate: solve ensemble zero-points (Section 4.3)
     e. Scale to common zero-point: I_k_cal = I_k_res * 10^(0.4 * (ZP_common - ZP_k))
  5. Co-add: C = sum(w_k * I_k_cal) / sum(w_k)
  6. Compute auxiliary products:
     a. Coverage map: COV = sum(w_k)
     b. Variance map: VAR = sum(w_k^2 * Var_k) / sum(w_k)^2
     c. Noise correlation factor beta (empirical, Section 6.5)
     d. Effective PSF map (from stars in co-add, Section 8.2)
  7. Write co-add FITS with all header keywords (Section 6.6)
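
Steps 5 and 6a/b reduce to a few array operations. A minimal sketch, assuming all inputs are already homogenised, reprojected, and calibrated onto the common grid, and that every pixel has non-zero total weight:

```python
import numpy as np

def coadd(images, variances, weights):
    """Weighted-mean co-add with coverage and variance maps.

    images, variances, weights: lists of 2D arrays on the common
    output grid (weight maps are zero where an image has no coverage;
    pixels with zero total weight are not handled in this sketch).
    """
    I = np.stack(images)
    V = np.stack(variances)
    W = np.stack(weights)
    cov = W.sum(axis=0)                       # coverage map: COV = sum(w_k)
    C = (W * I).sum(axis=0) / cov             # weighted mean
    var = (W**2 * V).sum(axis=0) / cov**2     # propagated per-pixel variance
    return C, cov, var
```

The empirical $\beta$ correction (step 6c) is applied downstream, either by scaling `var` or by recording $\beta$ in the header.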

9.3 Compute Cost Estimate

For OpenAstro's expected volume (10--50 images per co-add, image size $\sim 4000 \times 4000$ pixels, up to 10 fields per night):

| Step | Per-image cost | Total (20 images) |
|------|----------------|-------------------|
| PSF measurement (Moffat fit to 20 stars) | ~0.5 s | 10 s |
| PSF homogenisation (FFT convolve 4k$\times$4k) | ~0.3 s | 6 s |
| Reprojection (Lanczos-3 resample) | ~1.0 s | 20 s |
| Ensemble calibration (110-variable linear solve) | — | <0.01 s |
| Flux scaling | ~0.1 s | 2 s |
| Co-addition (weighted mean) | ~0.2 s | 4 s (single pass) |
| Variance map computation | ~0.2 s | 4 s |
| PSF map (fit 50 stars in co-add) | — | ~5 s |
| Total per field | | ~50 s |
| Total per night (10 fields) | | ~8 min |

These estimates assume a single-core Python implementation on a modern CPU (e.g., AMD Ryzen 7, 4 GHz). With NumPy/SciPy FFT and reproject's compiled backends, these are conservative. Parallelisation across images (trivially parallel for steps 2--4) reduces wall time to $\sim$15 s per field on a 4-core machine.

Memory: Each 4k$\times$4k float32 image is 64 MB. Holding 20 images in memory requires 1.3 GB -- feasible on any modern machine. For 50-image co-adds, either process in batches (running sum) or use $\sim$3.2 GB.

SWarp alternative: SWarp processes $\sim$10$\times$ faster than a pure Python pipeline for the reprojection and co-addition steps (compiled C, optimised memory access). A 20-image co-add completes in $\sim$5 s. However, SWarp does not natively support PSF homogenisation or the ensemble calibration described here -- those steps must be performed as pre-processing.

9.4 Validation Tests

Before deploying the pipeline on science data, validate against known inputs:

  1. Inject-and-recover: Insert synthetic point sources of known flux and position into raw images, run the full pipeline, measure recovered flux and position. Require $< 1\%$ flux error and $< 0.1$ pixel positional error.
  2. Noise whiteness: Compute the 2D power spectrum of the co-add variance map in source-free regions. Verify it matches the predicted correlation function $\Phi_C$ (Section 6).
  3. PSF consistency: Compare the measured co-add PSF (from stars) with the expected $P_\text{target}$. Require FWHM agreement to $< 5\%$.
  4. Photometric scatter: For a field with many Gaia stars, compare co-add photometry to Gaia. The scatter should match the predicted $\sigma_C$ (from the variance map, with $\beta$ correction).
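
Test 1 can be exercised without the full pipeline using a stand-alone inject-and-recover pair. The Gaussian source model, aperture-sum recovery, and the 1% threshold below are illustrative simplifications of what the real test would run end-to-end:

```python
import numpy as np

def inject_star(img, x0, y0, flux, fwhm):
    """Add a Gaussian point source of known total flux to an image."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    yy, xx = np.indices(img.shape)
    img = img + flux / (2.0 * np.pi * sigma**2) * np.exp(
        -((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma**2))
    return img

def recover_flux(img, x0, y0, r_ap):
    """Aperture-sum flux recovery (background assumed already subtracted)."""
    yy, xx = np.indices(img.shape)
    mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= r_ap**2
    return float(img[mask].sum())
```

In the real validation, the injection happens in the raw images before homogenisation and co-addition, so the recovered flux also checks the calibration and weighting chain.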

References

  • Barbary, K. (2016), J. Open Source Softw., 1, 58 -- SEP: Source Extractor core algorithms as a Python library
  • Bertin, E. et al. (2002), ASP Conf. Ser., 281, 228 -- SWarp: resampling and co-adding FITS images
  • Dravins, D. et al. (1998), PASP, 110, 610 -- scintillation theory and aperture dependence
  • Evans, D.W. et al. (2018), A&A, 616, A4 -- Gaia DR2 photometric calibration
  • Fruchter, A.S. & Hook, R.N. (2002), PASP, 114, 144 -- Drizzle algorithm for linear image reconstruction
  • Henden, A.A. & Kaitchuck, R.H. (1990), Astronomical Photometry -- standard photometric methods
  • Kjeldsen, H. & Frandsen, S. (1992), PASP, 104, 413 -- SNR formulation for stellar photometry
  • Pont, F. et al. (2006), MNRAS, 373, 231 -- correlated noise in transit photometry
  • Riello, M. et al. (2021), A&A, 649, A3 -- Gaia EDR3/DR3 photometric system
  • Robitaille, T. et al. (2020), reproject v0.7, Astropy-affiliated package for image reprojection
  • Stetson, P.B. (1987), PASP, 99, 191 -- DAOPHOT: PSF photometry in crowded fields
  • Tamuz, O., Mazeh, T. & Zucker, S. (2005), MNRAS, 356, 1466 -- SysRem algorithm for systematic corrections
  • Young, A.T. (1967), AJ, 72, 747 -- original scintillation formula