
Alert Prioritisation as Multi-Objective Optimisation

A rigorous formalisation of the OpenAstro scheduling problem, from mathematical definition through implementable greedy algorithm to worked numerical example. This document supersedes ad-hoc priority scoring described elsewhere and provides the theoretical foundation for the production scheduler.


1. Problem Formalisation

1.1 Inputs

Targets. A set of active targets T = {t_1, ..., t_n}, where each target t_i is described by the tuple:

t_i = (
    ra_i,           # Right ascension (degrees, ICRS J2000)
    dec_i,          # Declination (degrees)
    mag_i,          # Current apparent magnitude (or None if unknown)
    sc_i,           # Science case ∈ {grb, occultation, ttv, microlensing, flare, neo, supernova, routine}
    pc_i,           # Priority class ∈ {critical, high, medium, low}
    [ts_i, te_i],   # Observation window: start and end (UTC)
    exp_i,          # Required total exposure time (seconds)
    F_i,            # Required filter set ⊆ {B, V, R, I, g, r, i, z, clear}
    k_min_i,        # Minimum number of nodes required for science validity
    k_sat_i         # Saturation count: additional nodes beyond this add negligible value
)

Nodes. A set of active nodes N = {n_1, ..., n_m}, where each node n_j is described by:

n_j = (
    lat_j, lon_j,      # Geographic coordinates (degrees)
    ap_j,               # Aperture (mm)
    cam_j,              # Camera descriptor (pixel scale, FoV, filter wheel contents)
    sky_j,              # Sky conditions score ∈ [0, 1] (1 = photometric, 0 = overcast)
    rel_j,              # Reliability score ∈ [0, 1] (rolling success rate)
    a_j                 # Current assignment: target index or null
)
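The two tuples above map naturally onto dataclasses. The following is an illustrative sketch (field and class names are assumptions, not the production schema):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Target:
    """One active target t_i (fields follow Section 1.1)."""
    ra: float                  # right ascension (degrees, ICRS J2000)
    dec: float                 # declination (degrees)
    mag: Optional[float]       # apparent magnitude, or None if unknown
    sc: str                    # science case, e.g. "grb", "occultation"
    pc: str                    # priority class: "critical" | "high" | "medium" | "low"
    ts: datetime               # observation window start (UTC)
    te: datetime               # observation window end (UTC)
    exp: float                 # required total exposure (seconds)
    filters: frozenset         # required filter set, e.g. frozenset({"V", "R"})
    k_min: int                 # minimum nodes for science validity
    k_sat: int                 # saturation node count

@dataclass
class Node:
    """One active node n_j (Section 1.1)."""
    lat: float                 # geographic latitude (degrees)
    lon: float                 # geographic longitude (degrees)
    aperture_mm: float
    filters: frozenset         # filter wheel contents
    sky: float                 # sky conditions score in [0, 1]
    rel: float                 # reliability score in [0, 1]
    assigned_to: Optional[int] = None   # current target index, or None
```

Constraint (C4) below then reads as a set intersection: `target.filters & node.filters`.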

1.2 Decision Variables

The assignment matrix A ∈ {0, 1}^{n × m}, where:

A[i, j] = 1   iff node n_j is assigned to target t_i
A[i, j] = 0   otherwise

For each target t_i, define the assigned node set:

N_i = { n_j ∈ N : A[i, j] = 1 }

1.3 Constraints

(C1) Single assignment. Each node is assigned to at most one target per scheduling cycle:

∀ j ∈ {1, ..., m}:  Σ_i A[i, j] ≤ 1

(C2) Visibility. Node n_j can only observe target t_i if the target is above the horizon and below maximum airmass at node j's location during the observation window:

A[i, j] = 1  ⟹  airmass(ra_i, dec_i, lat_j, lon_j, t) < X_max  for some t ∈ [ts_i, te_i]

where X_max = 2.5 (corresponding to altitude ~23.6 degrees above the horizon). In practice, prefer X_max = 2.0 (altitude ~30 degrees).

Visibility is computed via Astropy's AltAz transform for the midpoint of the observation window; a target is considered observable if its altitude exceeds 30 degrees at any point during [ts_i, te_i].

(C3) Minimum coverage. Target t_i requires at least k_min_i observers for the observation to have science validity:

|N_i| ≥ k_min_i   or   |N_i| = 0   (don't assign fewer than the minimum; it wastes resources)

This is an all-or-nothing constraint: either a target gets enough nodes or it gets none. In practice, this is relaxed for non-critical targets where partial data still has value (e.g., supernova monitoring). The hard version applies to occultations and microlensing parallax.

(C4) Filter compatibility. Node n_j must have at least one of the required filters for target t_i:

A[i, j] = 1  ⟹  cam_j.filters ∩ F_i ≠ ∅

(C5) Aperture adequacy. The target's magnitude must be detectable by the node's aperture with sufficient SNR within the required exposure:

A[i, j] = 1  ⟹  SNR(mag_i, ap_j, exp_i, sky_j) ≥ SNR_min(sc_i)

where SNR_min varies by science case (e.g., 20 for photometry, 5 for detection-only GRB follow-up).

1.4 Objective

Maximise the total science value across all targets:

maximise  Σ_{i=1}^{n}  V(t_i, N_i)

subject to constraints (C1)–(C5)

where V(t_i, N_i) is the science value function defined in Section 2.


2. Science Value Function

2.1 Functional Form

The science value of assigning a set of nodes N_i to target t_i is:

V(t_i, N_i) = B(sc_i, pc_i, mag_i) × U(sc_i, Δt_i) × D(sc_i, N_i) × S(|N_i|, k_min_i, k_sat_i) × R(N_i)

where:

  • B = base priority (intrinsic science importance)
  • U = urgency factor (time decay since alert)
  • D = geographic diversity factor (spatial spread of assigned nodes)
  • S = saturation factor (diminishing returns with node count)
  • R = reliability factor (aggregate probability of success)

Each factor is defined below.

2.2 Base Priority B(sc, pc, mag)

B(sc, pc, mag) = B_science(sc) × B_class(pc) × B_brightness(mag)

Science case base values B_science:

| Science case (sc) | B_science | Rationale |
|-------------------|-----------|-----------|
| grb | 10.0 | Highest urgency, rarest events, strongest publication potential |
| occultation | 9.0 | Distribution is the unique advantage; precise timing critical |
| ttv | 7.0 | Long-term campaign value; each transit adds to a growing dataset |
| microlensing | 8.0 | Time-critical, parallax requires geographic spread |
| flare (M-dwarf) | 6.0 | High-cadence photometry; useful but less rare |
| neo | 5.0 | Astrometric follow-up; useful for community but not unique to us |
| supernova | 4.0 | Light curve monitoring; many groups do this |
| routine | 1.0 | Gap-filler observations |

Priority class multiplier B_class:

| Priority class (pc) | B_class |
|---------------------|---------|
| critical | 2.0 |
| high | 1.5 |
| medium | 1.0 |
| low | 0.5 |

Brightness accessibility B_brightness:

B_brightness(mag) =
    1.2    if mag < 12   (easy for all apertures)
    1.0    if 12 ≤ mag < 16
    0.8    if 16 ≤ mag < 18
    0.5    if 18 ≤ mag < 20
    0.2    if mag ≥ 20 or mag is None
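The three factors above can be collected into a single lookup. This is a direct transcription of the tables (function and constant names are illustrative):

```python
# Base values transcribed from the Section 2.2 tables.
B_SCIENCE = {"grb": 10.0, "occultation": 9.0, "ttv": 7.0, "microlensing": 8.0,
             "flare": 6.0, "neo": 5.0, "supernova": 4.0, "routine": 1.0}
B_CLASS = {"critical": 2.0, "high": 1.5, "medium": 1.0, "low": 0.5}

def b_brightness(mag):
    """Brightness accessibility tier; unknown magnitude is treated as faint."""
    if mag is None or mag >= 20:
        return 0.2
    if mag < 12:
        return 1.2
    if mag < 16:
        return 1.0
    if mag < 18:
        return 0.8
    return 0.5   # 18 <= mag < 20

def base_priority(sc, pc, mag):
    """B(sc, pc, mag) = B_science x B_class x B_brightness (Section 2.2)."""
    return B_SCIENCE[sc] * B_CLASS[pc] * b_brightness(mag)
```

For example, a critical GRB at mag 11 scores 10.0 × 2.0 × 1.2 = 24.0.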

2.3 Urgency Factor U(sc, delta_t)

The urgency factor captures how the science value of an observation decays with time since alert. Let delta_t = t_now - t_alert (in minutes).

GRB afterglow:

GRB optical afterglows decay as a power law, typically F ~ t^(-1.2). Observing at 10 minutes captures ~16x more flux than at 100 minutes. We model the urgency as:

U_grb(Δt) = max(0.1, (10 / max(Δt, 1))^0.8 )

This gives:

  • Δt = 1 min: U = 6.31 (via the max(Δt, 1) cap)
  • Δt = 10 min: U = 1.0 (normalisation point)
  • Δt = 30 min: U = 0.42
  • Δt = 60 min: U = 0.24
  • Δt = 120 min: U = 0.14
  • Δt ≳ 178 min: U = 0.1 (floor)

Occultation:

Occultations have a hard observation window. Urgency is binary:

U_occ(Δt) =
    1.0    if t_now ∈ [ts_i - 30 min, te_i]   (within setup + event window)
    0.0    otherwise                             (event missed or not yet relevant)

TTV transit:

Transits also have a hard window, but with a ramp-up for pre-transit baseline:

U_ttv(Δt_start) =
    0.8    if t_now ∈ [ts_i - 60 min, ts_i]       (pre-transit baseline)
    1.0    if t_now ∈ [ts_i, te_i]                  (during transit)
    0.0    otherwise

where ts_i is predicted ingress, te_i is predicted egress.

Microlensing:

Microlensing events evolve over days to weeks. Urgency is modelled as a gentle decay:

U_micro(Δt) = max(0.3, 1.0 - (Δt_days / 30))

where Δt_days = (t_now - t_alert) in days.

M-dwarf flare:

Flares decay on timescales of minutes to hours:

U_flare(Δt) = max(0.1, exp(-Δt / 30))

where Δt is in minutes. Half-life approximately 21 minutes.

NEO / supernova / routine:

U_neo(Δt) = max(0.5, 1.0 - Δt_days / 3)       (NEOs: 3-day window)
U_sn(Δt) = max(0.3, 1.0 - Δt_days / 60)       (supernovae: weeks-long)
U_routine(Δt) = 1.0                              (no time pressure)
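The per-case urgency curves above transcribe directly. In this sketch the windowed cases (occultation, TTV) take plain epoch-second timestamps rather than Δt; all names are illustrative:

```python
import math

def u_grb(dt_min):
    """Power-law decay, normalised to 1.0 at 10 minutes, floored at 0.1."""
    return max(0.1, (10.0 / max(dt_min, 1.0)) ** 0.8)

def u_occ(now, ts, te, setup_min=30):
    """Binary window: setup lead-in plus the event itself (epoch seconds)."""
    return 1.0 if ts - setup_min * 60 <= now <= te else 0.0

def u_ttv(now, ts, te, baseline_min=60):
    """Ramp for pre-transit baseline, full value during transit."""
    if ts - baseline_min * 60 <= now < ts:
        return 0.8
    return 1.0 if ts <= now <= te else 0.0

def u_micro(dt_days):
    return max(0.3, 1.0 - dt_days / 30.0)

def u_flare(dt_min):
    return max(0.1, math.exp(-dt_min / 30.0))

def u_neo(dt_days):
    return max(0.5, 1.0 - dt_days / 3.0)

def u_sn(dt_days):
    return max(0.3, 1.0 - dt_days / 60.0)

def u_routine(_dt):
    return 1.0
```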

2.4 Saturation Factor S(k, k_min, k_sat)

This factor captures the super-additive-then-saturating property: the science value of adding more nodes increases steeply at first, then plateaus.

S(k, k_min, k_sat) =
    0                                        if k < k_min (insufficient coverage)
    ((k - k_min + 1) / (k_sat - k_min + 1))^0.5   if k_min ≤ k ≤ k_sat
    1.0                                      if k > k_sat

The exponent 0.5 (square root) ensures sub-linear growth: doubling the node count less than doubles the value. This is the standard concave-increasing model for diminishing returns.

Super-additivity clarification: The function is super-additive up to k_sat in the following sense: V with k nodes > V with (k-1) nodes by a diminishing but positive marginal amount. Beyond k_sat, the marginal value is zero. This captures the reality that 10 nodes on a TTV transit provide better temporal sampling and redundancy than 2, but 100 nodes add negligible value over 10.

Science-case parameterisation:

| Science case | k_min | k_sat | Rationale |
|--------------|-------|-------|-----------|
| grb | 1 | 5 | Any detection has value; 5 nodes give light curve + colour |
| occultation | 3 | 15 | Chord reconstruction needs ≥3 chords; 15 fully defines the shape |
| ttv | 1 | 8 | One node suffices; more improve timing precision via averaging |
| microlensing | 2 | 6 | Parallax needs ≥2 separated nodes; 6 gives excellent baseline coverage |
| flare | 1 | 4 | One detection is useful; 4 gives multi-band simultaneous coverage |
| neo | 1 | 3 | Astrometry from 1 site suffices; 3 gives parallax |
| supernova | 1 | 5 | One light curve is enough; 5 gives multi-band coverage |
| routine | 1 | 2 | Gap-filler; minimal resources |
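The hard saturation factor is a direct transcription of the piecewise definition above:

```python
def saturation(k, k_min, k_sat):
    """S(k, k_min, k_sat) from Section 2.4 (hard k_min version)."""
    if k < k_min:
        return 0.0            # insufficient coverage: no science value
    if k > k_sat:
        return 1.0            # beyond saturation: no further gain
    return ((k - k_min + 1) / (k_sat - k_min + 1)) ** 0.5
```

For an occultation (k_min = 3, k_sat = 15), the third node jumps S from 0 to (1/13)^0.5 ≈ 0.28, illustrating the threshold discontinuity discussed in Section 4.3.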

2.5 Geographic Diversity Factor D(sc, N_assigned)

Geographic diversity matters differently for each science case. We define two diversity metrics and combine them per science case.

Longitude spread L(N_assigned):

L(N_assigned) = (lon_max - lon_min) / 360

where lon_max and lon_min are the maximum and minimum longitudes of nodes in N_assigned, adjusted for the wrap-around at 180/-180 degrees by choosing the representation that minimises the range (i.e., compute the angular span of the smallest arc containing all node longitudes).

For a single node: L = 0.
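The wrap-around-safe span can be computed by sorting the longitudes and subtracting the largest gap between consecutive values from 360. A sketch (function name illustrative):

```python
def lon_spread(lons):
    """L(N): angular span of the smallest arc containing all longitudes,
    normalised by 360, handling the +/-180 degree wrap-around."""
    if len(lons) < 2:
        return 0.0
    xs = sorted(l % 360.0 for l in lons)
    # Largest gap between consecutive longitudes, including the wrap gap;
    # the minimal covering arc is 360 minus that gap.
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    gaps.append(360.0 - xs[-1] + xs[0])
    return (360.0 - max(gaps)) / 360.0
```

Nodes at +170 and -170 degrees correctly yield a 20-degree span (L ≈ 0.056), not 340 degrees.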

Latitude spread W(N_assigned):

W(N_assigned) = (lat_max - lat_min) / 180

Mean pairwise separation P(N_assigned):

For each pair of nodes (n_a, n_b), compute the great-circle distance d_ab in km. Define:

P(N_assigned) = mean over all pairs { d_ab } / 10000   (normalised by ~quarter Earth circumference)

Capped at 1.0.
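A sketch of P using the haversine formula (the helper names and EARTH_RADIUS_KM constant are assumptions):

```python
import math
from itertools import combinations

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def mean_pairwise(coords):
    """P(N): mean pairwise separation, normalised by 10,000 km, capped at 1.

    coords is a list of (lat, lon) tuples; a single node yields 0."""
    pairs = list(combinations(coords, 2))
    if not pairs:
        return 0.0
    mean_km = sum(great_circle_km(*a, *b) for a, b in pairs) / len(pairs)
    return min(1.0, mean_km / 10000.0)
```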

Science-case diversity factors:

GRB afterglow: Diversity is beneficial but not essential (any detection has value). Light curves from multiple longitudes extend temporal coverage.

D_grb(N) = 0.6 + 0.4 × L(N)

This ranges from 0.6 (all nodes co-located) to 1.0 (full longitude spread).

Occultation: Diversity is critical. Chords at different latitudes relative to the shadow path map the asteroid's cross-section. Longitude is less important (the event is instantaneous).

D_occ(N) = 0.3 + 0.7 × W(N)

Heavily penalises nodes clustered at the same latitude: three telescopes at the same latitude produce nearly the same chord.

TTV transit: Longitude diversity extends out-of-transit baseline coverage. Latitude is irrelevant.

D_ttv(N) = 0.5 + 0.5 × L(N)

Microlensing parallax: Requires nodes separated along the Earth-lens baseline. The relevant metric is pairwise separation in the projected direction. As a practical proxy, use mean pairwise distance:

D_micro(N) = 0.2 + 0.8 × min(1.0, P(N) / 0.5)

(Reaches 1.0 when mean pairwise separation exceeds 5000 km.)

M-dwarf flare / NEO / supernova / routine:

D_other(N) = 0.8 + 0.2 × L(N)

Slight preference for diversity but not a strong driver.

2.6 Reliability Factor R(N_assigned)

The aggregate probability that at least k_min nodes succeed. Each node has independent success probability rel_j.

For simplicity (and because rel_j ≈ 0.7-0.9 for most active nodes), we use the expected success count:

E_success = Σ_{j ∈ N_i} rel_j
R(N_i) = min(1.0, E_success / k_min_i)

This penalises assignments where the expected number of successful observations barely meets the minimum.
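As a transcription of the two lines above (function name illustrative):

```python
def reliability(rels, k_min):
    """R(N_i) = min(1, E[successes] / k_min) from Section 2.6.

    rels is the list of per-node reliability scores rel_j."""
    return min(1.0, sum(rels) / k_min)
```

Two nodes at rel = 0.8 and 0.9 against k_min = 2 give R = 0.85, a visible penalty for a marginal assignment; a third 0.9 node pushes R to the cap of 1.0.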

2.7 Complete Parameterisation Per Science Case

GRB afterglow (sc = grb):

V_grb = 10.0 × B_class × B_brightness × max(0.1, (10/max(Δt,1))^0.8)
        × (0.6 + 0.4 × L(N)) × S(|N|, 1, 5) × R(N)

Stellar occultation (sc = occultation):

V_occ = 9.0 × B_class × B_brightness × U_occ(Δt)
        × (0.3 + 0.7 × W(N)) × S(|N|, 3, 15) × R(N)

TTV transit (sc = ttv):

V_ttv = 7.0 × B_class × B_brightness × U_ttv(Δt_start)
        × (0.5 + 0.5 × L(N)) × S(|N|, 1, 8) × R(N)

Microlensing (sc = microlensing):

V_micro = 8.0 × B_class × B_brightness × max(0.3, 1.0 - Δt_days/30)
          × (0.2 + 0.8 × min(1.0, P(N)/0.5)) × S(|N|, 2, 6) × R(N)

M-dwarf flare (sc = flare):

V_flare = 6.0 × B_class × B_brightness × max(0.1, exp(-Δt/30))
          × (0.8 + 0.2 × L(N)) × S(|N|, 1, 4) × R(N)
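As an illustration, the GRB row above assembles as follows. This sketch takes the already-computed sub-factors as arguments; the signature is illustrative, not the production API:

```python
def v_grb(pc_mult, b_bright, dt_min, lon_span, k, rel_sum):
    """V_grb per Section 2.7: B x U x D x S x R with k_min = 1, k_sat = 5.

    pc_mult  = B_class multiplier, b_bright = B_brightness tier,
    dt_min   = minutes since alert, lon_span = L(N) in [0, 1],
    k        = |N|, rel_sum = sum of node reliabilities."""
    u = max(0.1, (10.0 / max(dt_min, 1.0)) ** 0.8)
    d = 0.6 + 0.4 * lon_span
    s = 0.0 if k < 1 else min(1.0, (k / 5.0) ** 0.5)   # S(k, 1, 5)
    r = min(1.0, rel_sum / 1.0)                        # k_min = 1
    return 10.0 * pc_mult * b_bright * u * d * s * r
```

A critical, mag-14, freshly-alerted GRB (Δt = 10 min) with 5 co-located reliable nodes scores 10.0 × 2.0 × 1.0 × 1.0 × 0.6 × 1.0 × 1.0 = 12.0.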

3. Complexity Analysis and Algorithm Selection

3.1 NP-Hardness

Claim: The full optimisation problem (maximise Σ_i V(t_i, N_i) subject to constraints C1–C5) is NP-hard.

Proof sketch by reduction from Generalised Assignment Problem (GAP):

The Generalised Assignment Problem is: given n jobs and m machines, where assigning job i to machine j yields profit p_ij and consumes capacity c_ij of machine j (which has capacity C_j), find an assignment maximising total profit subject to capacity constraints. GAP is NP-hard.

Reduce GAP to our problem as follows:

  • Each GAP job becomes a target t_i with k_min_i = 1, k_sat_i = 1 (so only one node matters, and S(1, 1, 1) = 1).
  • Each GAP machine becomes a node n_j with capacity C_j = 1 (constraint C1 gives each node capacity 1).
  • Set all diversity factors D = 1, all urgency factors U = 1, all reliability factors R = 1.
  • Set the base priority B(t_i) such that the value of assigning exactly one node to t_i equals the GAP profit p_ij when the assigned node is n_j (this is achievable by encoding p_ij into a node-dependent brightness or filter compatibility factor).
  • The visibility constraint (C2) encodes which machine-job pairs are feasible (just as GAP has feasibility constraints).

Under this reduction, the OpenAstro objective Σ_i V(t_i, N_i) reduces to the GAP objective. Any solver for the OpenAstro problem would solve GAP. Since GAP is NP-hard, so is the OpenAstro problem.

The interaction terms (diversity D depends on the full set N_i, not just pairwise) make our problem strictly harder than standard assignment problems, since the objective is no longer decomposable into independent (target, node) contributions.

3.2 Problem Size Analysis

MVP scale (Phase 1):

  • Active targets: n ≈ 3–5 (1-2 transient alerts + 1-2 scheduled transits + 1 routine)
  • Active nodes: m ≈ 5–10 (early volunteer network)
  • Assignment possibilities per scheduling cycle: each of m nodes can be assigned to one of n targets or left idle. Naively: (n+1)^m possible assignments.
  • n = 5, m = 10: 6^10 = 60,466,176 (about 60 million)
  • With constraint pruning (visibility eliminates ~50-70% of pairs), the effective search space is roughly 3^10 ≈ 59,000. This is easily enumerable in under 1 second.

At scale (Phase 3):

  • Active targets: n ≈ 20–50
  • Active nodes: m ≈ 50–200
  • Naive search space: 51^200, astronomically large.
  • Even with pruning, the search space is far beyond exhaustive enumeration.

3.3 MVP: Exhaustive Search Feasibility

At MVP scale, exhaustive search IS feasible after constraint pruning.

For each node n_j, compute the set of feasible targets F_j = { t_i : C2, C4, C5 are satisfied }. Typical |F_j| ≈ 2-3 at any moment (most targets are below the horizon or too faint). Including the "idle" option, each node has |F_j| + 1 choices.

Effective search space = Π_j (|F_j| + 1). With |F_j| ≈ 3 and m = 10: 4^10 = 1,048,576. Evaluating V for each candidate assignment takes O(n) (compute diversity and saturation for each target). Total: ~5 million operations, which is a few seconds in pure Python and well under a second with NumPy vectorisation.
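The pruned enumeration can be sketched with itertools.product. Here `feasible[j]` is the target list F_j for node j, and `value` is whatever function scores a complete per-node choice tuple (names illustrative):

```python
from itertools import product

def exhaustive_best(feasible, value):
    """Enumerate every assignment node -> (target index or None).

    Only viable at MVP scale, where prod(|F_j| + 1) stays in the
    low millions; used as an offline check on the greedy scheduler."""
    best, best_v = None, float("-inf")
    for choice in product(*[fj + [None] for fj in feasible]):
        v = value(choice)
        if v > best_v:
            best, best_v = choice, v
    return best, best_v
```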

Recommendation for MVP: Implement the greedy scheduler (Section 4) as the primary algorithm. Optionally, run exhaustive search as a validation check in development, confirming the greedy solution is close to optimal on real problem instances.

3.4 At Scale: Algorithm Selection

For 50 targets and 200 nodes, we need an efficient approximation. Options considered:

| Algorithm | Pros | Cons | Verdict |
|-----------|------|------|---------|
| Greedy (Section 4) | O(n × m^2) pair evaluations, simple, fast | No optimality guarantee for non-submodular objectives | Use for real-time |
| ILP (PuLP/scipy) | Optimal for linearisable objectives | Diversity factor D is nonlinear; linearisation is lossy; solver time unpredictable | Use as offline benchmark |
| Simulated annealing | Handles nonlinear objectives well | Tuning parameters; nondeterministic; harder to reason about | Unnecessary at our scale |
| Problem-specific heuristic | Can exploit structure | One-off; harder to maintain | Not yet warranted |

Decision: Use the greedy scheduler (Section 4) for production at all scales. It runs at most m greedy iterations, each scanning up to n × m candidate pairs, i.e. ~200 × 50 × 200 = 2,000,000 pair evaluations at full scale, trivially fast with vectorisation. Use ILP (via PuLP with CBC solver) as an offline benchmark to measure the greedy approximation gap on historical data.


4. The Greedy Scheduler

4.1 Marginal Value Function

The key to the greedy approach is computing the marginal value of assigning node n_j to target t_i, given the current partial assignment.

Let N_i^{cur} be the set of nodes currently assigned to target t_i. The marginal value of adding n_j to t_i is:

ΔV(t_i, n_j) = V(t_i, N_i^{cur} ∪ {n_j}) - V(t_i, N_i^{cur})

This captures the incremental science gain from adding this specific node to this specific target, accounting for:

  • How the saturation factor changes (adding the 2nd node to a TTV target adds more than adding the 9th)
  • How the diversity factor changes (adding a node in South Africa to a set already containing European nodes adds more diversity than adding another European node)
  • How the reliability factor changes

4.2 Algorithm Pseudocode

function GREEDY_SCHEDULE(T, N):
    # Precompute feasibility
    for each (t_i, n_j):
        feasible[i][j] = check_visibility(t_i, n_j) AND check_filter(t_i, n_j)
                          AND check_aperture(t_i, n_j)

    # Initialise
    assignment = {}           # node_j -> target_i or None
    assigned_nodes = {}       # target_i -> set of assigned nodes
    for each t_i: assigned_nodes[t_i] = {}

    unassigned = set(N)       # all nodes initially unassigned

    # Greedy loop
    while unassigned is not empty:
        best_score = 0
        best_pair = None

        for each n_j in unassigned:
            for each t_i in T:
                if not feasible[i][j]:
                    continue

                # Compute marginal value
                N_cur = assigned_nodes[t_i]
                N_new = N_cur ∪ {n_j}
                marginal = V(t_i, N_new) - V(t_i, N_cur)

                if marginal > best_score:
                    best_score = marginal
                    best_pair = (t_i, n_j)

        if best_pair is None or best_score ≤ 0:
            break  # No beneficial assignment remains

        # Assign the best pair
        (t_i*, n_j*) = best_pair
        assignment[n_j*] = t_i*
        assigned_nodes[t_i*].add(n_j*)
        unassigned.remove(n_j*)

    # Post-processing: enforce minimum coverage (C3)
    for each t_i in T:
        if 0 < |assigned_nodes[t_i]| < k_min_i:
            if t_i.sc in {occultation, microlensing}:  # hard constraint
                # Unassign all nodes from this target; return them to pool
                for n_j in assigned_nodes[t_i]:
                    assignment[n_j] = None
                    unassigned.add(n_j)
                assigned_nodes[t_i] = {}

    # Second pass: reassign freed nodes
    # (Run the inner greedy loop again with updated assigned_nodes)
    # ... (same logic as above, limited to unassigned nodes)

    return assignment
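A compact runnable transcription of the main loop, omitting the C3 post-pass and second pass. The feasibility matrix and value function are injected; names are illustrative:

```python
def greedy_schedule(n_targets, nodes, feasible, value):
    """Greedy loop of Section 4.2 (without the C3 post-processing).

    feasible[i][j] -> bool for target i, node j;
    value(i, node_set) -> V(t_i, node_set); nodes is a list of node ids."""
    assigned = {i: frozenset() for i in range(n_targets)}
    unassigned = set(nodes)
    assignment = {}                      # node id -> target index
    while unassigned:
        best, best_gain = None, 0.0
        for j in unassigned:
            for i in range(n_targets):
                if not feasible[i][j]:
                    continue
                # Marginal value of adding node j to target i
                gain = value(i, assigned[i] | {j}) - value(i, assigned[i])
                if gain > best_gain:
                    best_gain, best = gain, (i, j)
        if best is None:
            break                        # no beneficial assignment remains
        i, j = best
        assignment[j] = i
        assigned[i] = assigned[i] | {j}
        unassigned.remove(j)
    return assignment
```

With a concave toy value function the diminishing-returns behaviour is visible: a higher-value target collects extra nodes only while its marginal gain beats starting coverage of the next target.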

4.3 Greedy Optimality Analysis

Submodularity check. For maximising a monotone submodular function subject to a matroid constraint, the greedy algorithm achieves a 1/2 approximation ratio; the stronger (1 - 1/e) ≈ 0.632 ratio holds for simple cardinality constraints (Nemhauser, Wolsey, Fisher 1978). Our objective is the sum Σ_i V(t_i, N_i), and constraint C1 defines a partition matroid (each node assigned to at most one target).

For each individual target, V(t_i, N_i) is a function of the set N_i. Is it submodular in N_i? The saturation factor S is concave in |N_i|, which is submodular. The diversity factor D is also submodular: adding a node to a less diverse set increases diversity more than adding it to an already diverse set (diminishing marginal diversity).

However, the reliability factor R is a capped sum of the node reliabilities (piecewise linear), and the product V = B × U × D × S × R is therefore a product of submodular factors. A product of submodular functions is NOT in general submodular.

Counterexample. Consider a target with k_min = 2. With 0 or 1 nodes, V = 0 (saturation factor is 0 when k < k_min). Adding the 2nd node causes V to jump from 0 to a positive value. This means the marginal value of the 2nd node is much higher than the 1st node, which violates submodularity (marginal values should decrease, not increase). The all-or-nothing k_min threshold introduces a step discontinuity that breaks submodularity.

Implication: The greedy algorithm does NOT inherit the provable submodular approximation guarantee for our specific V function, due to the k_min threshold.

Practical mitigation. Two approaches:

  1. Relax k_min to a soft penalty in the greedy scoring. Instead of V = 0 when |N| < k_min, use a very small value (e.g., V = 0.01 × V_full), so the greedy algorithm is willing to start assigning nodes to targets that need multiple observers, trusting that subsequent iterations will assign more.
  2. Two-phase greedy. Phase 1: for each target requiring k_min > 1, tentatively "reserve" k_min feasible nodes with the highest reliability. Phase 2: run the standard greedy over remaining nodes and reserved-but-adjustable nodes. This handles the threshold effect.

Empirical approximation ratio. On synthetic problem instances at MVP scale (5 targets, 10 nodes), exhaustive search can verify the greedy solution. In simulations (described in internal testing notes), the greedy solution is within 90-95% of optimal for typical OpenAstro problem structures. The gap is largest when multiple targets compete for the same nodes and one target has a high k_min threshold.

Worst-case approximation ratio. Without submodularity, the theoretical worst case is unbounded (greedy can be arbitrarily bad for general set functions). However, for our specific V function, the worst case is bounded by the k_min threshold effect. With the soft-penalty relaxation (approach 1), the effective approximation ratio is empirically > 0.85 for OpenAstro-scale problems.

4.4 Implementation Notes

The soft k_min relaxation. Replace the hard S factor with:

S_soft(k, k_min, k_sat) =
    epsilon × k / k_min                              if k < k_min  (epsilon = 0.05)
    ((k - k_min + 1) / (k_sat - k_min + 1))^0.5     if k_min ≤ k ≤ k_sat
    1.0                                               if k > k_sat

This ensures the greedy algorithm sees a small but nonzero marginal value for starting to assign nodes to high-k_min targets, preventing starvation.
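Transcribed directly:

```python
def saturation_soft(k, k_min, k_sat, epsilon=0.05):
    """S_soft from Section 4.4: small but nonzero value below k_min,
    so greedy can start building coverage for high-k_min targets."""
    if k < k_min:
        return epsilon * k / k_min
    if k > k_sat:
        return 1.0
    return ((k - k_min + 1) / (k_sat - k_min + 1)) ** 0.5
```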


5. ToO (Target of Opportunity) Interrupts

5.1 The Interrupt Problem

A GRB alert arrives at time t_ToO. Nodes n_j are currently executing observations of routine targets. Should we interrupt?

5.2 Interrupt Decision Rule

Interrupt node n_j (currently observing target t_cur) for a new ToO target t_ToO if:

V_marginal(t_ToO, n_j) > V_remaining(t_cur, n_j) + C_switch

where:

  • V_marginal(t_ToO, n_j) = marginal value of adding n_j to the ToO target
  • V_remaining(t_cur, n_j) = expected science value of completing the current observation
  • C_switch = switching cost, expressed in the same science-value units: the value lost to slew + re-acquisition (typically 60-120 seconds of lost observing time)

5.3 V_remaining for Partial Observations

The remaining value of a partially complete observation depends on what fraction of the scientifically critical data has been captured.

TTV transit at fraction f complete (f ∈ [0, 1]):

A transit light curve has five scientifically distinct phases:

  1. Pre-ingress baseline (first ~15% of window): needed for normalisation
  2. Ingress (15-25% of window): timing of ingress = half the science value of the transit
  3. Flat bottom (25-75%): constrains depth, but redundant if already captured
  4. Egress (75-85%): timing of egress = the other half of the science value
  5. Post-egress baseline (85-100%): needed for normalisation

V_remaining_ttv(f) =
    V_total × 1.0                  if f < 0.15   (nothing captured yet; full value remains)
    V_total × 0.85                 if 0.15 ≤ f < 0.25  (have baseline, not ingress)
    V_total × 0.45                 if 0.25 ≤ f < 0.60  (have ingress; half the timing science is done)
    V_total × 0.40                 if 0.60 ≤ f < 0.75  (have ingress + bottom; egress is the remaining prize)
    V_total × 0.15                 if 0.75 ≤ f < 0.90  (have ingress + egress; just need post-egress baseline)
    V_total × 0.05                 if f ≥ 0.90   (nearly complete; very little value remains)

At f = 0.60 (60% complete): V_remaining = 0.40 × V_total. The ingress has been captured (critical first timing point), the flat bottom provides depth. Egress has not been captured yet — it is the most valuable remaining phase. Interrupting here sacrifices the egress timing, which is worth approximately 40% of the total transit value.

At f = 0.90 (90% complete): V_remaining = 0.05 × V_total. Both ingress and egress have been captured. Only post-egress baseline remains, which is worth very little. Almost any ToO should interrupt at this point, since the remaining value is negligible.

Decision for the 60% case: A GRB with V_marginal = 10.0 and a TTV with V_total = 7.0 gives V_remaining = 0.40 × 7.0 = 2.8. Since 10.0 > 2.8 + C_switch, interrupt.

Decision for the 90% case: V_remaining = 0.05 × 7.0 = 0.35. Even a low-priority ToO should trigger an interrupt.

GRB afterglow partially observed:

V_remaining_grb(f, Δt) = V_total × (1 - f) × U_grb(Δt + exp_remaining / 60)

The remaining value accounts for both the unfinished fraction and the continuing decay of the afterglow.

Routine / supernova monitoring:

V_remaining_routine(f) = V_total × (1 - f)

Linear: no critical phases.
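The TTV remaining-value schedule and the Section 5.2 rule can be sketched together. Here `c_switch` is assumed already converted into science-value units (a tuning assumption), and the function names are illustrative:

```python
def v_remaining_ttv(f, v_total):
    """Piecewise remaining value of a partially observed transit (Section 5.3)."""
    if f < 0.15:
        frac = 1.00        # nothing captured yet
    elif f < 0.25:
        frac = 0.85        # have baseline, not ingress
    elif f < 0.60:
        frac = 0.45        # have ingress
    elif f < 0.75:
        frac = 0.40        # egress is the remaining prize
    elif f < 0.90:
        frac = 0.15        # just need post-egress baseline
    else:
        frac = 0.05        # nearly complete
    return v_total * frac

def should_interrupt(v_marginal_too, v_remaining, c_switch=0.5):
    """Interrupt rule of Section 5.2; c_switch is in value units."""
    return v_marginal_too > v_remaining + c_switch
```

This reproduces the worked 60% case: a GRB with V_marginal = 10.0 against V_remaining = 0.40 × 7.0 = 2.8 triggers an interrupt.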

5.4 Notification Latency Budget

For a GRB afterglow, the optical flux decays as t^(-1.2). The science value of observations at time t relative to t=10 min is:

| Time since GRB (min) | Relative flux | Assessment |
|----------------------|---------------|------------|
| 1 | 15.8x | Extremely high but unrealistic to achieve |
| 5 | 2.3x | Possible if already on-sky |
| 10 | 1.0x | Baseline |
| 20 | 0.44x | Good |
| 60 | 0.12x | Marginal for amateurs |
| 120 | 0.05x | Requires large aperture |

Latency budget for interrupt signal:

Total latency = T_alert + T_process + T_notify + T_decide + T_slew

T_alert:   GCN Kafka delivery               ~5-20 seconds
T_process: OpenAstro ingestion + scheduling  ~5-10 seconds
T_notify:  Server → node (heartbeat cycle)   ≤30 seconds (worst case)
T_decide:  Node decides to interrupt          ~1 second (automated)
T_slew:    Telescope slews to new target      30-120 seconds

Total: 71-181 seconds (best: ~1.2 min, worst: ~3 min), where even the best case charges the full 30-second heartbeat wait; a notification that happens to catch a heartbeat immediately brings the floor down to ~41 seconds.

Assessment: The total latency of 1-3 minutes means we can realistically begin GRB observations at Δt ≈ 5-8 minutes post-trigger (assuming the GCN notice itself arrives 2-5 minutes after the burst). This captures significant afterglow flux.

Optimisation: Replace the 30-second heartbeat polling with a WebSocket push channel for ToO interrupts. This reduces T_notify to <1 second, bringing total latency to ~40-150 seconds. Recommended for Phase 2.

Latency comparison with science timescales:

| Event type | Characteristic timescale | Required latency | Our latency | Adequate? |
|------------|--------------------------|------------------|-------------|-----------|
| GRB afterglow | ~minutes (power-law decay) | <5 min ideal | 1-3 min | Yes |
| Kilonova | ~hours (rises over hours) | <30 min | 1-3 min | Yes, easily |
| M-dwarf flare | ~minutes (exponential decay) | <5 min | 1-3 min | Marginal |
| Occultation | seconds (predictable time) | Pre-scheduled, not interrupt | N/A | Pre-assigned |
| Microlensing | ~days | <1 hour | 1-3 min | Yes, easily |

6. Real-Time Performance

6.1 Greedy Scheduler Time Complexity

The greedy algorithm (Section 4.2) runs:

  • Outer loop: one node is assigned per iteration, so at most m iterations (fewer if every remaining marginal value is non-positive)
  • Inner loop: for each iteration, evaluate all (target, node) pairs in the unassigned pool
  • Marginal value computation: O(|N_i|^2) for the diversity calculation (pairwise distances)

Per-iteration cost: O(n × |unassigned|) marginal value evaluations, each taking O(|N_i|^2) for diversity. Over at most m iterations with |unassigned| ≤ m:

Total: O(n × m^2 × |N_i|^2)

In practice |N_i| is small (at most k_sat, which is at most 15), so the diversity computation is O(k_sat^2) per evaluation, and the practical complexity is:

T_practical = O(n × m^2 × k_sat^2)

6.2 MVP Performance (n=5, m=10)

Iterations: at most m = 10
Per iteration: up to 5 targets × 10 nodes = 50 evaluations (shrinking as nodes are assigned)
Per evaluation: diversity with k_sat ≤ 15 → ~15^2 = 225 operations
Total: ≤ 10 × 50 × 225 = 112,500 operations

At ~10^8 Python operations per second (with NumPy for the heavy math): < 1 millisecond.

Even with pure Python: ~10^6 operations/second → ~0.1 seconds. Well within the 60-second scheduling cycle.

6.3 Scale Performance (n=50, m=200)

Iterations: at most m = 200
Per iteration: up to 50 targets × 200 nodes = 10,000 evaluations (decreasing as nodes are assigned)
Per evaluation: ~225 operations (diversity)
Total: 200 × (average ~5,000 evaluations) × 225 = 225,000,000 operations

Pure Python: minutes. Too slow.

With NumPy vectorisation of the inner loop (compute all (target, node) marginal values as matrix operations): expected 100-500x speedup → roughly 0.5-2 seconds. Acceptable, but close to the budget.

Alternative: Precompute the pairwise distance matrix between all nodes (200×200 = 40,000 entries, done once per cycle in <1ms with NumPy). Cache diversity contributions. This reduces per-evaluation diversity cost from O(k^2) to O(1) lookup + O(k) incremental update.

With this optimisation: Total ≈ 200 × 5,000 × 15 = 15,000,000 operations → ~150 ms in NumPy. Comfortably within the 60-second cycle.
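The precomputation step vectorises in a few lines (a sketch; `np.clip` guards float rounding before `arcsin`, and the function name is illustrative):

```python
import numpy as np

def pairwise_km(lats, lons, radius_km=6371.0):
    """All-pairs haversine distance matrix for the node network.

    Computed once per scheduling cycle; diversity updates then become
    O(1) lookups plus O(k) incremental sums."""
    lat = np.radians(np.asarray(lats, dtype=float))[:, None]   # (m, 1)
    lon = np.radians(np.asarray(lons, dtype=float))[:, None]
    dphi = lat - lat.T                                          # (m, m)
    dlmb = lon - lon.T
    a = (np.sin(dphi / 2) ** 2
         + np.cos(lat) * np.cos(lat.T) * np.sin(dlmb / 2) ** 2)
    return 2 * radius_km * np.arcsin(np.sqrt(np.clip(a, 0.0, 1.0)))
```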

6.4 Database Queries

The scheduler requires the following data at the start of each cycle:

Query 1: Active targets

SELECT id, ra_deg, dec_deg, magnitude, science_case, priority_class,
       window_start, window_end, required_exposure, required_filters,
       k_min, k_sat, alert_time
FROM targets
WHERE active = true
  AND window_end > NOW()
ORDER BY CASE priority_class            -- explicit rank: plain text ordering
           WHEN 'critical' THEN 0      -- would sort alphabetically
           WHEN 'high' THEN 1
           WHEN 'medium' THEN 2
           ELSE 3
         END,
         alert_time;

Index: CREATE INDEX idx_targets_active_window ON targets(active, window_end);

Expected result set: 5-50 rows. Execution time: <1 ms.

Query 2: Active nodes

SELECT id, latitude, longitude, aperture_mm, filter_list,
       sky_conditions, reliability_score, current_target_id,
       current_observation_start, current_observation_progress
FROM sites
WHERE status = 'online'
  AND last_heartbeat > NOW() - INTERVAL '2 minutes';

Index: CREATE INDEX idx_sites_status_heartbeat ON sites(status, last_heartbeat);

Expected result set: 10-200 rows. Execution time: <1 ms.

Query 3: Current assignments (for interrupt decisions)

SELECT s.id AS site_id, t.id AS target_id, t.science_case,
       o.start_time, o.expected_duration,
       EXTRACT(EPOCH FROM (NOW() - o.start_time)) / o.expected_duration AS progress
FROM observations o
JOIN sites s ON o.site_id = s.id
JOIN targets t ON o.target_id = t.id
WHERE o.status = 'in_progress';

Index: CREATE INDEX idx_observations_status ON observations(status) WHERE status = 'in_progress';

Expected result set: same as active nodes. Execution time: <1 ms.

Total database overhead: <5 ms for all three queries. The scheduler is CPU-bound, not IO-bound.

6.5 Scheduling Cycle Architecture

Every 60 seconds (or on ToO alert arrival):
    1. Fetch active targets (Query 1)             ~1 ms
    2. Fetch active nodes (Query 2)               ~1 ms
    3. Fetch current assignments (Query 3)         ~1 ms
    4. Compute visibility matrix (Astropy AltAz)  ~10-50 ms
    5. Run greedy scheduler                       ~1-100 ms
    6. Compute ToO interrupt decisions             ~1 ms
    7. Write new assignments to database           ~5 ms
    8. Push interrupt signals (if any)             ~1 ms
    ──────────────────────────────────────────────
    Total:                                        ~20-160 ms

The entire scheduling cycle completes in well under 1 second at all projected scales.


7. Coverage Diversity Scoring — Detailed Design

7.1 Occultation: Geographic Diversity Score

For stellar occultations, the asteroid's shadow sweeps across the Earth's surface. Each observing node samples a single chord across the asteroid's projected silhouette. The science goal is to reconstruct the asteroid's 2D shape, which requires chords at multiple different perpendicular offsets from the shadow centreline.

Formal definition:

Let the shadow path be defined by a centreline azimuth angle theta (the direction the shadow moves across the Earth). Nodes contribute useful chords when they are spread perpendicular to this direction.

For a set of nodes N = {n_1, ..., n_k} observing an occultation with shadow centreline azimuth theta:

D_occ(N, theta) = (range of cross-track offsets) / (estimated shadow width + buffer)

where the cross-track offset of node n_j is:

offset_j = (lat_j - lat_centre) × cos(theta) - (lon_j - lon_centre) × sin(theta) × cos(lat_centre)

(This is the signed perpendicular distance from the centreline in the local tangent plane, in degrees.)

D_occ(N, theta) = min(1.0, (max(offset) - min(offset)) / max_useful_spread)

where max_useful_spread is determined by the asteroid's estimated diameter (typically 10-100 km projected on Earth, corresponding to roughly 0.1-1 degrees of latitude).

Simplified form (when theta is unknown or for general scheduling):

Use the latitude spread as a proxy:

D_occ(N) = min(1.0, (lat_max - lat_min) / spread_target)

where spread_target = 5 degrees (~500 km) as a reasonable default for main-belt asteroid occultations.
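Both forms translate directly to code. A minimal sketch of the formulas above, assuming nodes are given as (lat, lon) tuples in degrees and theta is the centreline azimuth in degrees:

```python
import math

def cross_track_offset(lat, lon, lat_c, lon_c, theta_deg):
    """Signed perpendicular offset (degrees) from the shadow centreline,
    using the local-tangent-plane formula above."""
    th = math.radians(theta_deg)
    return ((lat - lat_c) * math.cos(th)
            - (lon - lon_c) * math.sin(th) * math.cos(math.radians(lat_c)))

def d_occ_full(nodes, lat_c, lon_c, theta_deg, max_useful_spread):
    """D_occ from the spread of cross-track offsets, capped at 1.0."""
    offsets = [cross_track_offset(lat, lon, lat_c, lon_c, theta_deg)
               for lat, lon in nodes]
    return min(1.0, (max(offsets) - min(offsets)) / max_useful_spread)

def d_occ_simple(lats, spread_target=5.0):
    """Simplified D_occ: latitude spread over the 5-degree default target."""
    if len(lats) < 2:
        return 0.0
    return min(1.0, (max(lats) - min(lats)) / spread_target)
```

For example, `d_occ_simple([51.5, 48.9])` reproduces the +0.52 marginal quoted for London + Paris in Section 7.3.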

7.2 Microlensing Parallax: Binary Feasibility Check

For microlensing parallax measurement, the key requirement is that at least two nodes are separated by a distance large enough to produce a measurable parallax signal. The required separation depends on the Einstein radius projected on Earth, which for typical microlensing events is 500-5000 km.

Binary feasibility check:

function MICROLENS_PARALLAX_FEASIBLE(N, baseline_direction, min_separation_km=500):
    for each pair (n_a, n_b) in N:
        d = great_circle_distance(n_a, n_b)  # in km
        if d ≥ min_separation_km:
            # Optional: check projection along baseline
            projected_d = d × |cos(angle between pair vector and baseline_direction)|
            if projected_d ≥ min_separation_km:
                return true
    return false

In practice, the baseline direction (Earth-lens direction projected on the sky) is often poorly constrained for ongoing events. So we use the simpler criterion: any pair separated by > 500 km qualifies.

Integration into D_micro:

D_micro(N) =
    0.2                                               if no pair separated by ≥ 500 km
    0.2 + 0.8 × min(1.0, max_pair_separation / 5000) otherwise

This ensures the scheduler strongly prefers geographically distant nodes for microlensing targets.
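A runnable version of the check and of D_micro, using the simpler any-pair criterion. The pseudocode's great_circle_distance is left unspecified above; a haversine distance on a spherical Earth of radius 6371 km is assumed here:

```python
import math
from itertools import combinations

EARTH_RADIUS_KM = 6371.0

def great_circle_km(a, b):
    """Haversine distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

def d_micro(nodes, min_sep_km=500.0, sat_sep_km=5000.0):
    """D_micro per the piecewise definition above: floor of 0.2 with no
    usable baseline, then linear in the longest pair separation to 5000 km."""
    if len(nodes) < 2:
        return 0.2  # no pair at all, so no parallax baseline
    dmax = max(great_circle_km(a, b) for a, b in combinations(nodes, 2))
    if dmax < min_sep_km:
        return 0.2
    return 0.2 + 0.8 * min(1.0, dmax / sat_sep_km)
```

London-Paris (~340 km) stays at the 0.2 floor; a transatlantic pair saturates the score at 1.0.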

7.3 Effect on Node Selection

Example: occultation target visible from Europe and South America.

Available nodes:

- n_1: London (51.5°N, 0.1°W)
- n_2: Paris (48.9°N, 2.3°E)
- n_3: Madrid (40.4°N, 3.7°W)
- n_4: Buenos Aires (34.6°S, 58.4°W)
- n_5: Santiago (33.4°S, 70.7°W)

Without diversity scoring, the scheduler might assign the 3 highest-reliability nodes, which could be {London, Paris, Madrid} — all at similar latitudes, producing nearly identical chords.

With D_occ:

- {London, Paris, Madrid}: lat spread = 51.5 - 40.4 = 11.1° → D_occ = min(1.0, 11.1/5) = 1.0
- {London, Madrid, Buenos Aires}: lat spread = 51.5 - (-34.6) = 86.1° → D_occ = 1.0

Both hit the cap. But consider the marginal value of adding Buenos Aires vs. Paris, given London already assigned:

- Adding Paris: lat spread goes from 0 to 2.6° → D_occ change: +0.52
- Adding Buenos Aires: lat spread goes from 0 to 86.1° → D_occ change: +1.0 (capped)

Buenos Aires is strongly preferred, even if its reliability is somewhat lower. This is the desired behaviour: geographic diversity overrides marginal reliability differences for occultation science.


8. Worked Example

8.1 Setup

Time: 2026-03-21 22:00 UTC. Scheduling cycle triggered.

Active targets:

| Target | Science case | RA | Dec | Mag | Priority | Window | k_min | k_sat | Alert age |
|---|---|---|---|---|---|---|---|---|---|
| t_1: WASP-4b transit | ttv | 353.6° | -42.1° | 12.5 | high | 22:20-01:40 UTC | 1 | 8 | Predicted (scheduled) |
| t_2: GRB 260321A | grb | 120.3° | +15.7° | ~16 (est) | critical | Now-indefinite | 1 | 5 | 5 minutes ago |
| t_3: (3200) Phaethon | routine | 98.2° | +22.4° | 14.8 | low | All night | 1 | 2 | N/A |

Active nodes:

| Node | Location | Lat | Lon | Aperture | Reliability | Sky | Current |
|---|---|---|---|---|---|---|---|
| n_1 | Tenerife, Spain | 28.3°N | 16.5°W | 250mm | 0.90 | 0.85 | idle |
| n_2 | Provence, France | 43.7°N | 5.7°E | 200mm | 0.85 | 0.80 | idle |
| n_3 | Sao Paulo, Brazil | 23.5°S | 46.6°W | 300mm | 0.80 | 0.70 | idle |
| n_4 | Mendoza, Argentina | 32.9°S | 68.8°W | 350mm | 0.75 | 0.90 | idle |
| n_5 | Cape Town, S. Africa | 33.9°S | 18.5°E | 280mm | 0.85 | 0.75 | idle |
| n_6 | Melbourne, Australia | 37.8°S | 145.0°E | 200mm | 0.80 | 0.65 | idle |

8.2 Visibility Check

At 22:00 UTC, compute altitude for each (target, node) pair:

| Target (RA, Dec) | n_1 (Tenerife) | n_2 (Provence) | n_3 (Sao Paulo) | n_4 (Mendoza) | n_5 (Cape Town) | n_6 (Melbourne) |
|---|---|---|---|---|---|---|
| t_1 WASP-4b (353.6, -42.1) | ~25° (marginal) | ~5° (below limit) | ~55° | ~60° | ~45° | ~10° (below limit) |
| t_2 GRB (120.3, +15.7) | ~-15° (set) | ~-10° (set) | ~30° | ~20° (marginal) | ~-5° (set) | ~65° |
| t_3 Phaethon (98.2, +22.4) | ~-20° (set) | ~-15° (set) | ~25° (marginal) | ~15° (below limit) | ~-10° (set) | ~60° |

Feasibility matrix (1 = feasible, 0 = not feasible, using 30° altitude cutoff unless science case warrants lower):

n_1 n_2 n_3 n_4 n_5 n_6
t_1 (TTV) 0 0 1 1 1 0
t_2 (GRB) 0 0 1 0 0 1
t_3 (routine) 0 0 0 0 0 1

Note: For GRBs, we lower the altitude limit to 20° due to extreme urgency. Rechecking: n_4 at 20° for the GRB is marginal; we accept it.

Revised feasibility with relaxed GRB airmass:

n_1 n_2 n_3 n_4 n_5 n_6
t_1 (TTV) 0 0 1 1 1 0
t_2 (GRB) 0 0 1 1 0 1
t_3 (routine) 0 0 0 0 0 1

Nodes n_1 and n_2 (Europe) cannot see any current target at useful altitude — they will remain idle this cycle.
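The two matrices follow mechanically from the altitude table. A small sketch, with the altitudes hard-coded from the table above and the GRB cutoff relaxed to 20° as described:

```python
# Altitudes (degrees) at 22:00 UTC for nodes n_1..n_6, from the table above.
ALTITUDES = {
    "t_1": [25, 5, 55, 60, 45, 10],
    "t_2": [-15, -10, 30, 20, -5, 65],
    "t_3": [-20, -15, 25, 15, -10, 60],
}
# Default 30-degree cutoff; relaxed to 20 degrees for the GRB (t_2).
CUTOFF = {"t_1": 30.0, "t_2": 20.0, "t_3": 30.0}

def feasibility(alts, cutoffs):
    """Binary feasibility matrix: 1 where altitude meets the cutoff."""
    return {t: [1 if alt >= cutoffs[t] else 0 for alt in row]
            for t, row in alts.items()}
```

Running `feasibility(ALTITUDES, CUTOFF)` reproduces the revised matrix, including Mendoza's marginal 20° acceptance for the GRB.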

8.3 Compute Science Value Components

Target t_1: WASP-4b transit (TTV)

B = B_science(ttv) × B_class(high) × B_brightness(12.5) = 7.0 × 1.5 × 1.0 = 10.5
U = U_ttv: transit starts in 20 min, currently in pre-transit window → U = 0.8
k_min = 1, k_sat = 8

Target t_2: GRB 260321A

B = B_science(grb) × B_class(critical) × B_brightness(16) = 10.0 × 2.0 × 0.8 = 16.0
U = U_grb(Δt = 5 min) = max(0.1, (10/5)^0.8) = max(0.1, 1.74) = 1.74
k_min = 1, k_sat = 5

Target t_3: Phaethon (routine astrometry)

B = B_science(routine) × B_class(low) × B_brightness(14.8) = 1.0 × 0.5 × 1.0 = 0.5
U = U_routine = 1.0
k_min = 1, k_sat = 2
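These B and U values can be reproduced in a few lines. The multiplier tables below restate the factors quoted above; `u_grb` reads the quoted expression max(0.1, (10/5)^0.8) as a power law (t0/Δt)^α with reference time t0 = 10 min and index α = 0.8 — an interpretation of that expression, not a definition given in this section:

```python
# Per-science-case and per-class multipliers, as quoted in the B computations.
B_SCIENCE = {"ttv": 7.0, "grb": 10.0, "routine": 1.0}
B_CLASS = {"high": 1.5, "critical": 2.0, "low": 0.5}

def base_priority(science, pclass, b_brightness):
    """B = B_science x B_class x B_brightness."""
    return B_SCIENCE[science] * B_CLASS[pclass] * b_brightness

def u_grb(delta_t_min, t0_min=10.0, alpha=0.8, floor=0.1):
    """GRB urgency as a floored power-law decay: max(floor, (t0/dt)^alpha)."""
    return max(floor, (t0_min / delta_t_min) ** alpha)
```

`base_priority("grb", "critical", 0.8)` gives 16.0 and `u_grb(5.0)` gives 1.74, matching the worked values.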

8.4 Greedy Assignment Sequence

Iteration 1: Evaluate all feasible (target, node) pairs with empty initial assignments.

For each pair, V = B × U × D × S_soft × R, with N_i = {} initially, so we compute V(t_i, {n_j}) for each:

t_1, n_3 (Sao Paulo):

S_soft(1, 1, 8) = ((1-1+1)/(8-1+1))^0.5 = (1/8)^0.5 = 0.354
D_ttv({n_3}) = 0.5 + 0.5 × 0 = 0.5  (single node, L=0)
R({n_3}) = min(1.0, 0.80/1) = 0.80
V = 10.5 × 0.8 × 0.5 × 0.354 × 0.80 = 1.189

t_1, n_4 (Mendoza):

S_soft = 0.354, D = 0.5, R = min(1.0, 0.75/1) = 0.75
V = 10.5 × 0.8 × 0.5 × 0.354 × 0.75 = 1.115

t_1, n_5 (Cape Town):

S_soft = 0.354, D = 0.5, R = min(1.0, 0.85/1) = 0.85
V = 10.5 × 0.8 × 0.5 × 0.354 × 0.85 = 1.264

t_2, n_3 (Sao Paulo):

S_soft(1, 1, 5) = ((1-1+1)/(5-1+1))^0.5 = (1/5)^0.5 = 0.447
D_grb({n_3}) = 0.6 + 0.4 × 0 = 0.6
R({n_3}) = 0.80
V = 16.0 × 1.74 × 0.6 × 0.447 × 0.80 = 5.972

t_2, n_4 (Mendoza):

S_soft = 0.447, D = 0.6, R = 0.75
V = 16.0 × 1.74 × 0.6 × 0.447 × 0.75 = 5.599

t_2, n_6 (Melbourne):

S_soft = 0.447, D = 0.6, R = 0.80
V = 16.0 × 1.74 × 0.6 × 0.447 × 0.80 = 5.972

t_3, n_6 (Melbourne):

S_soft(1, 1, 2) = (1/2)^0.5 = 0.707
D_other({n_6}) = 0.8 + 0.2 × 0 = 0.8
R = 0.80
V = 0.5 × 1.0 × 0.8 × 0.707 × 0.80 = 0.226

Ranking:

Pair V (marginal)
(t_2, n_3) 5.972
(t_2, n_6) 5.972
(t_2, n_4) 5.599
(t_1, n_5) 1.264
(t_1, n_3) 1.189
(t_1, n_4) 1.115
(t_3, n_6) 0.226

Assignment 1: (t_2, n_3) — assign Sao Paulo to GRB. Score: 5.972.
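The iteration-1 values above can be reproduced with a few lines; `s_soft` and the R cap follow the formulas used in the computations, and results match the ranking table to within ~0.01 (the worked example rounds intermediate factors):

```python
def s_soft(k, k_min, k_sat):
    """Saturation factor: ((k - k_min + 1) / (k_sat - k_min + 1))^0.5."""
    return ((k - k_min + 1) / (k_sat - k_min + 1)) ** 0.5

def value(B, U, D, k, k_min, k_sat, rel_sum):
    """V = B x U x D x S_soft x R, with R = min(1, sum reliability / k_min)."""
    R = min(1.0, rel_sum / k_min)
    return B * U * D * s_soft(k, k_min, k_sat) * R

v_grb_n3 = value(16.0, 1.74, 0.6, 1, 1, 5, 0.80)   # (t_2, n_3), ~5.97
v_ttv_n5 = value(10.5, 0.8, 0.5, 1, 1, 8, 0.85)    # (t_1, n_5), ~1.26
```

The ordering confirms why the GRB pairs top the ranking by a factor of roughly five.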


Iteration 2: n_3 is assigned to t_2. Recompute marginals for remaining pairs.

t_2, n_6 (Melbourne), given N_2 = {n_3}:

|N_2| = 2 now. S_soft(2, 1, 5) = ((2-1+1)/(5-1+1))^0.5 = (2/5)^0.5 = 0.632
(previously S_soft = 0.447 with |N_2| = 1). Compute the marginal via the full V:

lon_3 = -46.6, lon_6 = 145.0. Direct span = 145.0 - (-46.6) = 191.6°; the complementary arc is 360 - 191.6 = 168.4°. The smallest arc containing both longitudes is therefore 168.4°.
L = 168.4 / 360 = 0.468
D_grb({n_3, n_6}) = 0.6 + 0.4 × 0.468 = 0.787

V(t_2, {n_3, n_6}) = 16.0 × 1.74 × 0.787 × 0.632 × min(1.0, (0.80+0.80)/1)
                    = 16.0 × 1.74 × 0.787 × 0.632 × 1.0
                    = 13.840

V(t_2, {n_3}) = 16.0 × 1.74 × 0.6 × 0.447 × 0.80 = 5.972

Marginal = 13.840 - 5.972 = 7.868

t_2, n_4 (Mendoza), given N_2 = {n_3}:

lon_3 = -46.6, lon_4 = -68.8. Span = |-46.6 - (-68.8)| = 22.2°.
L = 22.2/360 = 0.062
D_grb = 0.6 + 0.4 × 0.062 = 0.625

S_soft(2, 1, 5) = 0.632
R = min(1.0, (0.80 + 0.75)/1) = 1.0

V(t_2, {n_3, n_4}) = 16.0 × 1.74 × 0.625 × 0.632 × 1.0 = 10.992
Marginal = 10.992 - 5.972 = 5.020

t_1, n_5 (Cape Town), given N_1 = {}:

V = 1.264 (unchanged)
Marginal = 1.264

t_1, n_4 (Mendoza), given N_1 = {}:

V = 1.115 (unchanged)
Marginal = 1.115

t_3, n_6 (Melbourne), given N_3 = {}:

V = 0.226 (unchanged)
Marginal = 0.226

Ranking:

Pair Marginal V
(t_2, n_6) 7.868
(t_2, n_4) 5.020
(t_1, n_5) 1.264
(t_1, n_4) 1.115
(t_3, n_6) 0.226

Assignment 2: (t_2, n_6) — assign Melbourne to GRB. Score: 7.868.

Note the diversity bonus: Melbourne (lon 145°E) is far from Sao Paulo (lon 46.6°W), producing a large longitude spread. The scheduler prefers Melbourne over Mendoza despite equal reliability because Melbourne adds much more geographic diversity.
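The longitude spread driving this choice is the smallest arc containing all assigned longitudes, i.e. 360° minus the largest gap between consecutive longitudes (sorted, with wrap-around) — exactly the working shown in the iterations. A compact sketch:

```python
def lon_span_deg(lons):
    """Smallest arc (degrees) containing all longitudes: 360 minus the
    largest gap between consecutive sorted longitudes, including wrap."""
    if len(lons) < 2:
        return 0.0
    xs = sorted(l % 360 for l in lons)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    gaps.append(xs[0] + 360 - xs[-1])  # wrap-around gap
    return 360 - max(gaps)

def d_grb(lons):
    """D_grb = 0.6 + 0.4 x L, with L the normalised longitude spread."""
    return 0.6 + 0.4 * lon_span_deg(lons) / 360
```

Sao Paulo + Melbourne spans 168.4° while Sao Paulo + Mendoza spans only 22.2°, which is the entire diversity bonus at work.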


Iteration 3: N_2 = {n_3, n_6}. Remaining unassigned: {n_1, n_2, n_4, n_5}. But n_1 and n_2 have no feasible targets.

t_2, n_4 (Mendoza), given N_2 = {n_3, n_6}:

Lons: {-46.6, 145.0, -68.8}. Sorted: -68.8, -46.6, 145.0.
Spans between consecutive (wrapping): -46.6-(-68.8)=22.2, 145.0-(-46.6)=191.6, -68.8+360-145.0=146.2.
Largest gap = 191.6. Span covered = 360 - 191.6 = 168.4° (same as before; Mendoza is between the two existing nodes in longitude).
L = 168.4/360 = 0.468 (unchanged — Mendoza doesn't improve diversity!)
D_grb = 0.6 + 0.4 × 0.468 = 0.787

S_soft(3, 1, 5) = ((3)/(5))^0.5 = 0.775
R = min(1.0, (0.80+0.80+0.75)/1) = 1.0

V(t_2, {n_3, n_6, n_4}) = 16.0 × 1.74 × 0.787 × 0.775 × 1.0 = 16.972
V(t_2, {n_3, n_6}) = 13.840

Marginal = 16.972 - 13.840 = 3.132

t_1, n_4 (Mendoza), given N_1 = {}:

Marginal = 1.115

t_1, n_5 (Cape Town), given N_1 = {}:

Marginal = 1.264

Ranking:

Pair Marginal V
(t_2, n_4) 3.132
(t_1, n_5) 1.264
(t_1, n_4) 1.115

Assignment 3: (t_2, n_4) — assign Mendoza to GRB. Score: 3.132.


Iteration 4: N_2 = {n_3, n_6, n_4}. Remaining feasible: n_5 can observe t_1.

t_1, n_5 (Cape Town), given N_1 = {}:

Marginal = 1.264

No competition. Assignment 4: (t_1, n_5) — assign Cape Town to WASP-4b transit. Score: 1.264.


Iteration 5: All remaining nodes (n_1, n_2) have no feasible targets. Stop.

8.5 Final Schedule

| Node | Location | Assigned target | Science case |
|---|---|---|---|
| n_1 | Tenerife, Spain | IDLE | — |
| n_2 | Provence, France | IDLE | — |
| n_3 | Sao Paulo, Brazil | t_2: GRB 260321A | GRB afterglow photometry |
| n_4 | Mendoza, Argentina | t_2: GRB 260321A | GRB afterglow photometry |
| n_5 | Cape Town, S. Africa | t_1: WASP-4b transit | TTV transit timing |
| n_6 | Melbourne, Australia | t_2: GRB 260321A | GRB afterglow photometry |

Total science value:

V(t_1) = 1.264 (single node, Cape Town)
V(t_2) = 16.972 (three nodes: Sao Paulo, Mendoza, Melbourne)
V(t_3) = 0 (no nodes assigned — only Melbourne could observe, and it was more valuable on the GRB)

Total = 18.236

8.6 Analysis of the Schedule

  1. The GRB dominates. With a critical priority class, high urgency (5 minutes old), and a base science value of 16.0, the GRB absorbs most of the network. This is correct: GRB afterglows are the highest-value transients, and the urgency factor of 1.74 amplifies this further.

  2. Diversity drove Melbourne to the GRB. Melbourne (lon 145°E) was preferred over a second South American node because it increased the longitude spread from 22° to 168°. This gives the GRB multi-longitude light curve coverage extending across 11+ hours of Earth rotation — critical for tracking the afterglow decay.

  3. Cape Town was assigned to the TTV transit rather than the GRB. Cape Town cannot see the GRB (altitude < 0°), so it goes to its only feasible target.

  4. Phaethon (routine) got nothing. The only node that could observe it (Melbourne) was more valuable on the GRB. Routine targets are correctly deprioritised.

  5. European nodes are idle. At 22:00 UTC, the visible targets are in the southern sky (WASP-4b at dec -42°) or still below the horizon for Europe (GRB at RA 120°). European nodes will pick up targets in later scheduling cycles as objects rise.

  6. The TTV transit has only 1 observer (k_min = 1 is satisfied). If the transit had k_min = 2, Cape Town alone would fail the coverage constraint, and it would be reassigned to some lower-value task or left idle. This highlights the importance of network growth: at 6 nodes with this target distribution, the network is stretched thin.


Appendix A: Summary of Notation

| Symbol | Meaning |
|---|---|
| T | Set of active targets |
| N | Set of active nodes |
| A[i,j] | Assignment matrix: 1 if node j assigned to target i |
| N_i | Set of nodes assigned to target i |
| V(t_i, N_i) | Total science value of target i with assigned nodes |
| S(t_i, n_j) | Marginal value of assigning node j to target i |
| B | Base priority factor |
| U | Urgency (time decay) factor |
| D | Geographic diversity factor |
| S_soft | Saturation (diminishing returns) factor |
| R | Reliability factor |
| k_min | Minimum nodes required for science validity |
| k_sat | Node count at which value saturates |
| L(N) | Longitude spread of a node set |
| W(N) | Latitude spread of a node set |
| P(N) | Mean pairwise great-circle distance of a node set |

Appendix B: Key Design Decisions

  1. Greedy over ILP. ILP is optimal for linear objectives but our diversity factor makes the objective nonlinear. The greedy scheduler is fast, simple, and empirically near-optimal for our problem sizes. ILP is reserved for offline benchmarking.

  2. Soft k_min over hard k_min in greedy. Hard k_min thresholds break submodularity and cause the greedy algorithm to starve high-k_min targets. The epsilon-relaxed soft threshold (Section 4.4) fixes this without sacrificing the final constraint enforcement.

  3. Product-form V function. The multiplicative decomposition V = B * U * D * S * R is chosen for interpretability: each factor is independently meaningful, unit-free, and tunable. The alternative (additive weights) would require careful normalisation and cross-term calibration.

  4. 60-second scheduling cycle. Matches the existing client polling interval. ToO interrupts bypass this cycle and trigger immediate rescheduling.

  5. Urgency factors are science-driven. Each decay function (power law for GRBs, step function for occultations, exponential for flares) matches the actual physical timescale of the phenomenon.


Last updated: 2026-03-21