# MVP Server: Solo Developer Implementation Guide
You're alone. That's fine. Here's exactly what to build, in what order, with what tools.
## PHASE 1: WEEK 1-2 (Core Server)

### Technology Stack
- **Server:** Python 3.11+ with FastAPI
- **Database:** SQLite (yes, really - switch to PostgreSQL later)
- **Hosting:** Your home machine or a $5/mo VPS (Hetzner, DigitalOcean)
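The "switch to PostgreSQL later" promise is cheap because SQLAlchemy hides the backend: if the models and queries below stay portable, the migration is mostly a connection-URL swap. A sketch (the PostgreSQL URL and credentials are placeholders):

```python
from sqlalchemy import create_engine

# MVP: SQLite -- zero setup, the whole database is one file on disk
engine = create_engine("sqlite:///network.db")

# Later: with the same models and queries, PostgreSQL is (mostly)
# a URL swap -- placeholder credentials shown:
# engine = create_engine("postgresql+psycopg2://user:pass@db-host/network")
```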
### Day 1-2: Database Schema
    # models.py - This is all you need to start
    from sqlalchemy import Column, Integer, Float, String, DateTime, Boolean, ForeignKey
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class Site(Base):
        __tablename__ = 'sites'

        id = Column(Integer, primary_key=True)
        name = Column(String(100))
        latitude = Column(Float)                # decimal degrees
        longitude = Column(Float)               # decimal degrees, east-positive
        altitude = Column(Float)                # meters
        aperture_mm = Column(Integer)
        api_key = Column(String(64))            # 64-char hex token for authentication
        is_active = Column(Boolean, default=True)
        last_seen = Column(DateTime)

    class Target(Base):
        __tablename__ = 'targets'

        id = Column(Integer, primary_key=True)
        name = Column(String(100))
        ra = Column(Float)                      # decimal degrees
        dec = Column(Float)                     # decimal degrees
        target_type = Column(String(50))        # 'exoplanet', 'variable', 'occultation', etc.
        priority = Column(Integer, default=50)  # 1-100, higher = more important
        min_observations = Column(Integer, default=1)
        cadence_hours = Column(Float, nullable=True)  # for periodic targets
        active_until = Column(DateTime, nullable=True)

    class Observation(Base):
        __tablename__ = 'observations'

        id = Column(Integer, primary_key=True)
        site_id = Column(Integer, ForeignKey('sites.id'))
        target_id = Column(Integer, ForeignKey('targets.id'))
        timestamp = Column(DateTime)            # UTC, always
        filter = Column(String(10))             # 'V', 'R', 'I', 'Clear', etc.
        magnitude = Column(Float, nullable=True)
        mag_error = Column(Float, nullable=True)
        exposure_sec = Column(Float)
        airmass = Column(Float, nullable=True)
        quality_flag = Column(Integer, default=0)  # 0=good, 1=warning, 2=bad
        notes = Column(String(500), nullable=True)
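None of the snippets in this guide show where the `db` session comes from. One plausible wiring, sketched with a stand-in model so it runs on its own (file and variable names are illustrative, not fixed by the guide; in the real project `Base` would come from models.py):

```python
# database.py -- engine/session wiring the other snippets assume exists.
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()          # project version: from models import Base

class Demo(Base):                  # stand-in for Site / Target / Observation
    __tablename__ = "demo"
    id = Column(Integer, primary_key=True)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)   # creates every table registered on Base
SessionLocal = sessionmaker(bind=engine)
db = SessionLocal()                # the `db` session the endpoints refer to
```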
### Day 3-4: Core API Endpoints
    # main.py
    from datetime import datetime, timezone
    from typing import Optional
    import secrets

    from fastapi import FastAPI, HTTPException, Depends, Header
    from pydantic import BaseModel

    from models import Site, Observation
    from scheduler import calculate_observable_targets
    # NOTE: `db` is the shared SQLAlchemy session (setup omitted for brevity)

    app = FastAPI(title="Telescope Network API", version="0.1.0")

    # Request schemas
    class ObservationCreate(BaseModel):
        target_id: int
        timestamp: datetime
        filter: str
        magnitude: Optional[float] = None
        mag_error: Optional[float] = None
        exposure_sec: float
        airmass: Optional[float] = None

    class SiteCreate(BaseModel):
        name: str
        latitude: float
        longitude: float
        altitude: float
        aperture_mm: int

    # Authentication - simple API key check
    def verify_site(x_api_key: str = Header()):
        site = db.query(Site).filter(Site.api_key == x_api_key).first()
        if not site:
            raise HTTPException(status_code=401, detail="Invalid API key")
        site.last_seen = datetime.now(timezone.utc)
        db.commit()
        return site

    # ENDPOINT 1: Get targets for a site
    @app.get("/targets")
    def get_targets(site: Site = Depends(verify_site)):
        """
        Returns a prioritized list of observable targets for this site.
        Client calls this every ~60 seconds.
        """
        targets = calculate_observable_targets(site)
        return {"targets": targets, "timestamp": datetime.now(timezone.utc)}

    # ENDPOINT 2: Submit an observation
    @app.post("/observations")
    def submit_observation(obs: ObservationCreate, site: Site = Depends(verify_site)):
        """
        Client submits a completed observation.
        """
        new_obs = Observation(
            site_id=site.id,
            target_id=obs.target_id,
            timestamp=obs.timestamp,
            filter=obs.filter,
            magnitude=obs.magnitude,
            mag_error=obs.mag_error,
            exposure_sec=obs.exposure_sec,
            airmass=obs.airmass
        )
        db.add(new_obs)
        db.commit()
        return {"status": "accepted", "observation_id": new_obs.id}

    # ENDPOINT 3: Site registration
    @app.post("/sites/register")
    def register_site(site_info: SiteCreate):
        """
        New site registers with the network.
        Returns an API key (store it securely - it is not shown again).
        """
        api_key = secrets.token_hex(32)
        new_site = Site(
            name=site_info.name,
            latitude=site_info.latitude,
            longitude=site_info.longitude,
            altitude=site_info.altitude,
            aperture_mm=site_info.aperture_mm,
            api_key=api_key
        )
        db.add(new_site)
        db.commit()
        return {"api_key": api_key, "site_id": new_site.id}

    # ENDPOINT 4: Network status (public)
    @app.get("/status")
    def network_status():
        """
        Public endpoint showing network health.
        """
        active_sites = db.query(Site).filter(Site.is_active == True).count()
        total_obs = db.query(Observation).count()
        return {
            "active_sites": active_sites,
            "total_observations": total_obs,
            "uptime": "placeholder"
        }
### Day 5-7: The Scheduler (This Is The Brain)
    # scheduler.py
    from datetime import datetime, timedelta, timezone

    from astropy.coordinates import SkyCoord, EarthLocation, AltAz
    from astropy.time import Time
    import astropy.units as u

    from models import Site, Target, Observation
    # NOTE: `db` is the shared SQLAlchemy session (setup omitted for brevity)

    def calculate_observable_targets(site: Site, now: datetime = None):
        """
        Core scheduling algorithm. Keep it simple.
        """
        if now is None:
            now = datetime.now(timezone.utc)

        # Site location
        location = EarthLocation(
            lat=site.latitude * u.deg,
            lon=site.longitude * u.deg,
            height=site.altitude * u.m
        )

        # Current time
        obs_time = Time(now)

        # Get all active targets (NULL active_until means "no expiry")
        targets = db.query(Target).filter(
            (Target.active_until == None) | (Target.active_until > now)
        ).all()

        scored_targets = []
        for target in targets:
            # Calculate altitude
            coord = SkyCoord(ra=target.ra * u.deg, dec=target.dec * u.deg)
            altaz = coord.transform_to(AltAz(obstime=obs_time, location=location))
            altitude = altaz.alt.deg

            # Skip if below the minimum altitude
            if altitude < 30:
                continue

            # Calculate score
            score = target.priority  # Base score

            # Bonus for optimal altitude (60-80 degrees)
            if 60 <= altitude <= 80:
                score += 20
            elif altitude > 80:  # Near zenith, might have tracking issues
                score += 10

            # Penalty if recently observed from this site
            recent_obs = db.query(Observation).filter(
                Observation.target_id == target.id,
                Observation.site_id == site.id,
                Observation.timestamp > now - timedelta(hours=2)
            ).count()
            if recent_obs > 0:
                score -= 30

            # Bonus for coverage deficit
            total_obs = db.query(Observation).filter(
                Observation.target_id == target.id
            ).count()
            if total_obs < target.min_observations:
                score += 25

            scored_targets.append({
                "target_id": target.id,
                "name": target.name,
                "ra": target.ra,
                "dec": target.dec,
                "score": score,
                "altitude": round(altitude, 1),
                "target_type": target.target_type
            })

        # Sort by score, return top 10
        scored_targets.sort(key=lambda x: x["score"], reverse=True)
        return scored_targets[:10]
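The scoring rules are the part you'll tweak most, so it pays to pull them into a pure function you can unit-test without a database or astropy. A sketch mirroring the thresholds above (`score_target` is a hypothetical helper, not part of the code in this guide):

```python
def score_target(priority, altitude, recent_site_obs, total_obs, min_observations):
    """Pure version of the scoring in calculate_observable_targets.
    Returns None when the target is unobservable (below 30 degrees)."""
    if altitude < 30:
        return None
    score = priority              # base score, 1-100
    if 60 <= altitude <= 80:
        score += 20               # sweet spot: low airmass, safe tracking
    elif altitude > 80:
        score += 10               # near zenith: mounts can struggle
    if recent_site_obs > 0:
        score -= 30               # this site observed it within 2 h
    if total_obs < min_observations:
        score += 25               # network-wide coverage deficit
    return score
```

If a rule changes later (say, the recency penalty window), a handful of asserts on this function catches regressions before a clear night does.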
## PHASE 2: WEEK 3-4 (Basic Client)

### Minimal Client (runs at each telescope)
    # client.py - Runs at each site
    import time

    import requests
    import yaml

    # Load config
    with open('config.yaml', 'r') as f:
        config = yaml.safe_load(f)

    SERVER_URL = config['server_url']
    API_KEY = config['api_key']
    POLL_INTERVAL = 60  # seconds

    headers = {"X-API-Key": API_KEY}

    def main_loop():
        while True:
            try:
                # Get target list
                response = requests.get(f"{SERVER_URL}/targets",
                                        headers=headers, timeout=30)
                response.raise_for_status()
                targets = response.json()['targets']

                if targets:
                    best_target = targets[0]
                    print(f"Best target: {best_target['name']} (score: {best_target['score']})")

                    # TODO: Actually command your telescope here
                    # slew_to(best_target['ra'], best_target['dec'])
                    # image = take_exposure(60)  # 60 second exposure
                    # magnitude = extract_photometry(image, best_target)

                    # For now, just log it
                    print(f"Would observe {best_target['name']} at alt {best_target['altitude']}°")

                time.sleep(POLL_INTERVAL)
            except Exception as e:
                print(f"Error: {e}")
                time.sleep(POLL_INTERVAL)

    if __name__ == "__main__":
        main_loop()
### Config File Template
    # config.yaml
    server_url: "http://your-server.com:8000"
    api_key: "your-64-character-api-key-here"

    site:
      name: "My Observatory"
      latitude: 34.0522
      longitude: -118.2437
      altitude: 100
      aperture_mm: 200

    telescope:
      driver: "ASCOM"  # or "INDI"
      device_name: "Telescope Simulator"

    camera:
      driver: "ASCOM"
      device_name: "CCD Simulator"
      default_exposure: 60

    filters:
      available: ["V", "R", "I", "Clear"]
      default: "V"
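Given the `site:` block above, a small one-off script can trade it for an API key via `/sites/register` and write the key back into the config. A sketch (script and function names are illustrative; assumes the Phase 1 server is reachable at `server_url`):

```python
# register.py -- one-off: exchange site details for an API key
import requests
import yaml

def build_payload(config):
    # /sites/register expects exactly the fields from the site: block
    return dict(config["site"])

def register(config_path="config.yaml"):
    with open(config_path) as f:
        config = yaml.safe_load(f)
    resp = requests.post(f"{config['server_url']}/sites/register",
                         json=build_payload(config), timeout=30)
    resp.raise_for_status()
    api_key = resp.json()["api_key"]
    config["api_key"] = api_key            # persist the key for client.py
    with open(config_path, "w") as f:
        yaml.safe_dump(config, f)
    return api_key

if __name__ == "__main__":
    print("Registered; api_key =", register())
```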
## PHASE 3: WEEK 5-6 (Make It Real)

### Add Alert Ingestion
    # alerts.py
    import requests
    from datetime import datetime, timedelta

    def ingest_gaia_alerts():
        """
        Pull recent transients from Gaia Alerts.
        """
        url = "http://gsaweb.ast.cam.ac.uk/alerts/alertsindex"
        # Parse the page, extract new transients
        # Add them as targets with high priority
        pass

    def ingest_tns():
        """
        Pull from the Transient Name Server.
        """
        # https://www.wis-tns.org/api/get
        pass

    def check_occultation_predictions():
        """
        Check Steve Preston's occultation predictions
        or Lucky Star predictions.
        """
        # Add upcoming occultations as time-limited high-priority targets
        pass

    # Run ingestion periodically
    def scheduled_ingestion():
        ingest_gaia_alerts()
        ingest_tns()
        check_occultation_predictions()
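`scheduled_ingestion` still needs something to call it. A dependency-free option is a daemon thread with an `Event` for clean shutdown. A sketch (the 15-minute interval is a guess; cron or APScheduler work just as well):

```python
import threading

def run_periodically(fn, interval_sec, stop_event):
    """Call fn every interval_sec seconds until stop_event is set.
    Event.wait doubles as the sleep, so setting the event stops it promptly."""
    while not stop_event.wait(interval_sec):
        try:
            fn()
        except Exception as e:
            # one bad feed must not kill the whole ingestion loop
            print(f"Ingestion error: {e}")

# In the server's startup code (interval is a guess -- tune it):
# stop = threading.Event()
# threading.Thread(target=run_periodically,
#                  args=(scheduled_ingestion, 15 * 60, stop),
#                  daemon=True).start()
```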
### Add Simple Web Dashboard
    # dashboard.py - Using FastAPI's built-in Jinja2 templates
    from fastapi import Request
    from fastapi.templating import Jinja2Templates

    from main import app  # reuse the FastAPI app from main.py
    from models import Site, Observation
    # NOTE: `db` is the shared SQLAlchemy session (setup omitted for brevity)

    templates = Jinja2Templates(directory="templates")

    @app.get("/dashboard")
    def dashboard(request: Request):
        sites = db.query(Site).all()
        recent_obs = db.query(Observation).order_by(
            Observation.timestamp.desc()
        ).limit(50).all()
        return templates.TemplateResponse("dashboard.html", {
            "request": request,
            "sites": sites,
            "recent_observations": recent_obs
        })
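`dashboard.html` is referenced but never shown. A bare-bones `templates/dashboard.html` matching the context dict above (no styling; the field names come from the models, the layout is just a sketch):

```html
<!-- templates/dashboard.html -->
<html>
<body>
  <h1>Telescope Network</h1>
  <h2>Sites</h2>
  <ul>
    {% for site in sites %}
      <li>{{ site.name }} (last seen {{ site.last_seen }})</li>
    {% endfor %}
  </ul>
  <h2>Recent observations</h2>
  <table>
    <tr><th>Time (UTC)</th><th>Target</th><th>Filter</th><th>Mag</th></tr>
    {% for obs in recent_observations %}
      <tr>
        <td>{{ obs.timestamp }}</td>
        <td>{{ obs.target_id }}</td>
        <td>{{ obs.filter }}</td>
        <td>{{ obs.magnitude }}</td>
      </tr>
    {% endfor %}
  </table>
</body>
</html>
```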
## DEPLOYMENT CHECKLIST

### Server Setup (VPS)
    # On your VPS
    apt update && apt install -y python3 python3-pip python3-venv nginx certbot

    # Create project
    mkdir -p /opt/telescope-network
    cd /opt/telescope-network
    python3 -m venv venv
    source venv/bin/activate
    pip install fastapi uvicorn sqlalchemy astropy requests jinja2

    # Run with systemd
    cat > /etc/systemd/system/telescope-network.service << EOF
    [Unit]
    Description=Telescope Network API
    After=network.target

    [Service]
    User=www-data
    WorkingDirectory=/opt/telescope-network
    ExecStart=/opt/telescope-network/venv/bin/uvicorn main:app --host 0.0.0.0 --port 8000
    Restart=always

    [Install]
    WantedBy=multi-user.target
    EOF

    systemctl daemon-reload
    systemctl enable --now telescope-network
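The checklist installs nginx and certbot but never wires them up. A minimal reverse-proxy server block (sketch; `your-server.com` is a placeholder, and certbot edits this file when it installs the certificate):

```nginx
# /etc/nginx/sites-available/telescope-network
server {
    listen 80;
    server_name your-server.com;

    location / {
        # forward everything to the uvicorn process on port 8000
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
# Then: symlink into sites-enabled/, reload nginx,
# and run `certbot --nginx -d your-server.com` for HTTPS.
```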
### Minimum Viable Features for Launch
- [ ] Site registration endpoint
- [ ] Target list endpoint with altitude filtering
- [ ] Observation submission endpoint
- [ ] Basic priority scoring
- [ ] Simple web status page
- [ ] One working client at your own telescope
### What You DON'T Need for MVP
- ❌ User accounts / login system
- ❌ Pretty frontend
- ❌ Real-time websockets
- ❌ Machine learning
- ❌ Automated photometry pipeline
- ❌ Perfect scheduling algorithm
## RECRUITING YOUR FIRST SITES

### Script for Reaching Out
Subject: Invitation to join distributed telescope network - real science, minimal commitment
Hi [Name],
I'm building a distributed telescope network for time-domain astronomy
that focuses on science cases where geographic distribution matters:
- Stellar occultations (TNO shapes, atmospheres)
- Exoplanet transit timing variations
- Rapid transient follow-up
Unlike single-observatory science, these require simultaneous observations
from multiple sites - something no single professional facility can replicate.
Requirements:
- Go-To mount
- CCD or CMOS camera
- Clear skies occasionally
Commitment: Run our client software when you observe. It suggests targets
and uploads your photometry. You keep full access to all your data.
Recognition: Co-authorship on publications using your data.
Interested? Reply and I'll send setup instructions.
[Your name]
### Where to Post
- Cloudy Nights forum (Time-Sensitive Astronomy subforum)
- r/astrophotography, r/telescopes
- AAVSO discussion groups
- BAA (British Astronomical Association) forums
- Regional astronomy club mailing lists
- Twitter/X astronomy community
## TIMELINE SUMMARY
| Week | Deliverable |
|---|---|
| 1-2 | Database + API endpoints working locally |
| 3-4 | Basic client, test with your own telescope |
| 5-6 | Deploy to VPS, add web dashboard |
| 7-8 | Recruit 3-5 beta sites |
| 9-12 | Run first coordinated campaign |
## COST ESTIMATE (YEAR 1)
| Item | Cost |
|---|---|
| VPS (Hetzner CX21) | $66/year |
| Domain name | $12/year |
| SSL cert | Free (Let's Encrypt) |
| Your time | Priceless |
| Total | ~$80/year |
## WHAT SUCCESS LOOKS LIKE
- **Month 3:** 10 sites registered, first light curve shared with the collaboration
- **Month 6:** first successful multi-chord occultation with 3+ sites
- **Month 9:** first paper draft: "Network Architecture for Distributed Time-Domain Astronomy"
- **Month 12:** 50 sites, systematic occultation/TTV campaigns running
You don't need permission. You don't need funding. You need to write the code and start recruiting. The papers prove the science case. Now execute.