Sourced from:
Telescope auction bidding system design - Claude.md
# Telescope Capability Vectors
This document describes a deterministic, vector-math approach to matching observation tasks to telescopes. No ML training is required: all parameters are derived from published astronomy literature. The system is fully transparent and explainable; you can always show exactly why a particular telescope was assigned a task.
This is an alternative to the greedy priority scorer in Handover.md. The greedy scorer is fine for an MVP; this approach is better when the network has heterogeneous equipment and the scheduler needs to reason explicitly about capability fit.
## Core Concept
Each telescope is described by a 7-dimensional normalised capability vector, and each observation task by a 7-dimensional requirement vector. The scheduler computes a weighted cosine similarity between the task requirement vector and each available telescope's capability vector, and assigns the task to the highest-scoring telescope.
Zero ML. Zero training. Configuration + math.
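The scoring function itself is a few lines of NumPy. Below is a minimal sketch of one common weighted-cosine formulation; the exact placement of the weights inside the norms is a design choice, not something this document fixes.

```python
import numpy as np

def weighted_cosine_similarity(telescope_vec, task_vec, weights):
    """Weighted cosine similarity between two normalised 7D vectors.

    Each dimension is scaled by its task-category weight before the
    standard cosine formula is applied. Returns a score in [0, 1] for
    non-negative inputs.
    """
    t = np.asarray(telescope_vec, dtype=float)
    q = np.asarray(task_vec, dtype=float)
    w = np.asarray(weights, dtype=float)
    num = np.sum(w * t * q)
    denom = np.sqrt(np.sum(w * t * t)) * np.sqrt(np.sum(w * q * q))
    return 0.0 if denom == 0.0 else float(num / denom)
```

Because the vectors are normalised to 0-1 and the weights are non-negative, a telescope whose capability vector is proportional to the task requirement scores 1.0.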
## The 7 Dimensions
| Index | Dimension | Unit | Notes |
|---|---|---|---|
| 0 | Aperture | mm | Range: 100 (amateur) to 10,000 (VLT class) |
| 1 | Wavelength coverage | proportion of spectrum | 0-1 fraction of the UV to far-IR range covered |
| 2 | Resolution | arcsec/pixel | Inverted: lower raw value is better; range 0.3-5.0 |
| 3 | Location quality | composite score | Bortle scale + seeing + clear nights |
| 4 | Instrument versatility | proportion | Fraction of standard instrument types available |
| 5 | Mount precision | 0-1 accuracy | 1.0 = perfect tracking |
| 6 | Field of view | degrees | Range: 0.1 (planetary) to 10 (wide-field survey) |
Normalisation: min-max scaling to 0-1. Dimension 2 (resolution) is inverted, so better (lower) resolution maps to a higher score.
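A sketch of the normalisation step, assuming the dimension ranges from the table above; the exact range endpoints and the clipping of out-of-range specs are implementation choices.

```python
import numpy as np

# Assumed per-dimension [min, max] ranges, taken from the table above.
DIM_RANGES = np.array([
    [100.0, 10000.0],  # 0: aperture, mm
    [0.0, 1.0],        # 1: wavelength coverage, fraction
    [0.3, 5.0],        # 2: resolution, arcsec/pixel (inverted below)
    [0.0, 1.0],        # 3: location quality, composite score
    [0.0, 1.0],        # 4: instrument versatility, fraction
    [0.0, 1.0],        # 5: mount precision
    [0.1, 10.0],       # 6: field of view, degrees
])
INVERTED_DIMS = {2}  # lower raw value = better capability

def normalise(raw_specs):
    """Min-max scale a 7-element raw spec list to a 0-1 capability vector."""
    raw = np.asarray(raw_specs, dtype=float)
    lo, hi = DIM_RANGES[:, 0], DIM_RANGES[:, 1]
    vec = np.clip((raw - lo) / (hi - lo), 0.0, 1.0)
    for i in INVERTED_DIMS:
        vec[i] = 1.0 - vec[i]  # better resolution -> higher score
    return vec
```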
## Task Category Weights
Weights are set manually from astronomy literature, not learned. Each weight multiplies the corresponding dimension's contribution to the cosine similarity score.
| Task Category | Aperture | Wavelength | Resolution | Location | Instruments | Mount | FoV |
|---|---|---|---|---|---|---|---|
| Deep sky imaging | 2.5 | 1.0 | 0.5 | 1.5 | 0.3 | 1.0 | 0.8 |
| Planetary | 1.2 | 1.0 | 2.5 | 0.8 | 0.5 | 2.0 | 0.3 |
| Exoplanet transit | 1.0 | 0.8 | 2.5 | 2.0 | 1.8 | 2.5 | 0.3 |
| Spectroscopy | 1.5 | 2.0 | 1.0 | 1.2 | 2.5 | 1.5 | 0.5 |
| Wide-field survey | 0.8 | 1.0 | 0.5 | 1.0 | 0.5 | 1.2 | 2.5 |
| Time-domain transients | 1.0 | 1.0 | 1.0 | 2.5 | 1.0 | 1.5 | 1.0 |
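The weight table translates directly into static configuration. A sketch, with hypothetical snake_case category keys:

```python
# Task-category weights from the table above, in dimension order:
# [aperture, wavelength, resolution, location, instruments, mount, fov]
CATEGORY_WEIGHTS = {
    "deep_sky_imaging":       [2.5, 1.0, 0.5, 1.5, 0.3, 1.0, 0.8],
    "planetary":              [1.2, 1.0, 2.5, 0.8, 0.5, 2.0, 0.3],
    "exoplanet_transit":      [1.0, 0.8, 2.5, 2.0, 1.8, 2.5, 0.3],
    "spectroscopy":           [1.5, 2.0, 1.0, 1.2, 2.5, 1.5, 0.5],
    "wide_field_survey":      [0.8, 1.0, 0.5, 1.0, 0.5, 1.2, 2.5],
    "time_domain_transients": [1.0, 1.0, 1.0, 2.5, 1.0, 1.5, 1.0],
}
```

Keeping the weights in plain configuration (rather than code) preserves the "no training, fully inspectable" property of the design.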
## Matching Workflow
1. Telescope registers → raw specs (aperture_mm, seeing_arcsec, etc.)
   → normalise to 7D vector → store in database
2. Task submitted → requirement specs + category
   → normalise to 7D vector
   → look up weights for category
3. For each available telescope:
   → weighted_cosine_similarity(telescope_vec, task_vec, weights)
   → score 0-1
4. Assign task to highest-scoring telescope that passes hard constraints
   (visibility, filter availability, timing window)
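The steps above can be sketched as a single matching function. Names such as `match_task` and the `passes_hard_constraints` predicate are illustrative, not part of the spec.

```python
import numpy as np

def weighted_cosine(t, q, w):
    """Weighted cosine similarity (one common formulation)."""
    t, q, w = (np.asarray(x, dtype=float) for x in (t, q, w))
    denom = np.sqrt((w * t * t).sum()) * np.sqrt((w * q * q).sum())
    return 0.0 if denom == 0.0 else float((w * t * q).sum() / denom)

def match_task(task_vec, weights, telescopes, passes_hard_constraints):
    """Return (telescope_id, score) for the best-scoring telescope that
    passes the hard constraints, or (None, 0.0) if none qualifies.

    `telescopes` maps telescope id -> normalised 7D capability vector;
    `passes_hard_constraints` is a caller-supplied predicate covering
    visibility, filter availability and the timing window.
    """
    best_id, best_score = None, 0.0
    for tel_id, tel_vec in telescopes.items():
        if not passes_hard_constraints(tel_id):
            continue  # hard constraints filter before any scoring
        score = weighted_cosine(tel_vec, task_vec, weights)
        if score > best_score:
            best_id, best_score = tel_id, score
    return best_id, best_score
```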
## Transparency / Explainability
The system can always produce a per-dimension breakdown showing exactly how the score was calculated:
```
Dimension              | Telescope | Task  | Weight | Contribution
-----------------------|-----------|-------|--------|-------------
aperture               |   0.192   | 0.141 |  2.5   |    0.068
wavelength_coverage    |   0.212   | 0.040 |  1.0   |    0.008
resolution             |   0.894   | 0.851 |  0.5   |    0.380
location_quality       |   0.717   | 0.500 |  1.5   |    0.538
instrument_versatility |   0.333   | 0.167 |  0.3   |    0.017
mount_precision        |   0.950   | 0.900 |  1.0   |    0.855
field_of_view          |   0.242   | 0.141 |  0.8   |    0.027
-------------------------------------------------------------------
Total score: 0.823
```
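A sketch of how such a breakdown could be generated. It assumes the Contribution column is weight × telescope × task per dimension (which matches the example values above); how those contributions are normalised into the total is left to the implementation.

```python
import numpy as np

DIM_NAMES = ["aperture", "wavelength_coverage", "resolution",
             "location_quality", "instrument_versatility",
             "mount_precision", "field_of_view"]

def score_breakdown(telescope_vec, task_vec, weights):
    """Per-dimension contributions (weight * telescope * task) plus the
    final weighted cosine similarity, for the explainability report."""
    t, q, w = (np.asarray(x, dtype=float)
               for x in (telescope_vec, task_vec, weights))
    contrib = w * t * q
    denom = np.sqrt((w * t * t).sum()) * np.sqrt((w * q * q).sum())
    total = 0.0 if denom == 0.0 else float(contrib.sum() / denom)
    rows = [(name, float(ti), float(qi), float(wi), float(ci))
            for name, ti, qi, wi, ci in zip(DIM_NAMES, t, q, w, contrib)]
    return rows, total
```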
## Reliability Weighting
A dynamic ReliabilityScore (0-1) should be multiplied into the final score. This score is computed from historical performance:
- Fraction of tasks completed successfully
- Mean data quality (FWHM, SNR) vs predicted
- Mean uptime over rolling 30-day window
A telescope in a dry climate with consistent clear nights and a skilled operator should naturally score higher than a sporadic urban site with cloudy conditions.
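A sketch of one way to combine the three metrics; the 0.4/0.3/0.3 weighting is an illustrative assumption, not part of the design.

```python
def reliability_score(success_fraction, quality_ratio, uptime_fraction,
                      weights=(0.4, 0.3, 0.3)):
    """Combine the three historical metrics into a 0-1 ReliabilityScore.

    success_fraction : fraction of assigned tasks completed successfully
    quality_ratio    : mean delivered / predicted data quality, capped at 1
    uptime_fraction  : mean uptime over the rolling 30-day window
    The 0.4/0.3/0.3 weighting is an assumed example, not a spec value.
    """
    metrics = (success_fraction, min(quality_ratio, 1.0), uptime_fraction)
    score = sum(w * m for w, m in zip(weights, metrics))
    return max(0.0, min(1.0, score))
```

The scheduler would then rank on `similarity * reliability_score(...)` rather than on raw similarity alone.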
K-means clustering application: Run K-means on the telescopes themselves using their data quality metrics as features. This automatically identifies "high-quality" vs "suspect" clusters without manual labelling, and can flag nodes that need calibration attention.
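A sketch of that clustering step using scikit-learn's `KMeans`. The choice of features and the "cluster with the lower centroid mean is suspect" heuristic are assumptions; they presume every feature is oriented so that higher means better.

```python
import numpy as np
from sklearn.cluster import KMeans

def flag_suspect_telescopes(quality_features, n_clusters=2, seed=0):
    """Cluster telescopes on data-quality metrics (e.g. mean SNR,
    completion rate) and return a boolean mask marking the lower-quality
    cluster, i.e. nodes that may need calibration attention.
    """
    X = np.asarray(quality_features, dtype=float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    # Flag the cluster whose centroid has the lowest mean feature value.
    worst = int(np.argmin(km.cluster_centers_.mean(axis=1)))
    return km.labels_ == worst
```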
## References
- Dimension ranges derived from: Sky & Telescope buyer's guide (amateur apertures); ESO instrument documentation (professional apertures); Racine 1984, PASP 96:417 (seeing); Bortle 2001, Sky & Telescope (sky quality); AAVSO/ETD exoplanet observation protocols (task weights).
- Cosine similarity: standard linear algebra. No exotic dependencies.
- K-means clustering: scikit-learn `KMeans`.