
Volunteer Compute Options

This note covers the full landscape of distributed compute options for OpenAstro — self-hosted BOINC, alternatives to running your own BOINC server, cloud options, and the community experience question. Companion to [[BOINC Integration Plan]].


1. The Option Space

Option A:  Self-hosted BOINC server          ← the "full control" path
Option B:  Existing BOINC project partnership ← the "leverage existing community" path
Option C:  Science United / BOINC Alpha      ← the "low-infrastructure" BOINC path
Option D:  HTCondor on volunteer machines    ← the "academic grid" path
Option E:  Globus / XSEDE / ACCESS          ← the "apply for academic allocations" path
Option F:  Cloud spot instances (AWS/GCP)   ← the "just pay for it" path
Option G:  GitHub Actions / CI abuse        ← the "hacky but real" path
Option H:  Nothing; run on your VPS         ← the "keep it simple" path

Each option has a radically different cost/complexity/latency profile. Let's go through them honestly.


2. Option A: Self-Hosted BOINC

Covered in depth in [[BOINC Integration Plan]]. Summary for comparison:

  • Setup cost: 1–2 weeks of developer time. Non-trivial but doable.
  • Ongoing cost: ~€5–9/month for the server. Low.
  • Compute: dependent on volunteer community size. Zero initially; potentially significant by Stage 2+.
  • Latency: high. Jobs run when volunteers pick them up; no SLA. A work unit might wait hours before being picked up.
  • Best for: batch jobs with multi-hour runtimes and no deadline. TTV MCMC is the ideal fit.
  • Not for: anything time-sensitive, e.g. transient follow-up classification that needs a result in 10 minutes.
  • Unique advantage: the BOINC brand. Amateur astronomers know BOINC; "join the OpenAstro BOINC project" is a natural call to action.


3. Option B: Partnership with an Existing BOINC Project

Rather than running your own server, you approach an existing BOINC project and ask them to run your science application. This is done more than most people realize.

Projects That Have Accepted Outside Science

Asteroids@home (asteroidsathome.net) — Run by the Astronomical Institute of Charles University in Prague. They accept new asteroid light curves for shape inversion. They might accept OpenAstro's period-search and TTV jobs if the science case is presented well. Contact: Josef Durech (durech@sirrah.troja.mff.cuni.cz).

MilkyWay@home — Specializes in N-body stellar dynamics simulations. If OpenAstro's n-body work could be framed as relevant to their stellar dynamics focus, a collaboration is conceivable. In practice, they run their own specific science.

Einstein@home — Pulsar and gravitational wave searches. Not a fit.

World Community Grid (founded by IBM, now operated by the Krembil Research Institute) — Accepts applications for humanitarian science projects. OpenAstro's volunteer network aspect might fit their criteria. Application process exists: worldcommunitygrid.org/about_us/viewNewsArticle.do?articleId=593. They handle all volunteer acquisition and infrastructure. You supply the science app.

Universe@home (Poland) — Binary star evolution simulations. N-body adjacent; worth approaching if OpenAstro's science can be framed as population synthesis related.

What a Partnership Looks Like

You write your science app to BOINC's standard interface (stdin/stdout + file conventions). You send batches of work unit descriptions to the partner project's server via their "batch submission" API or via email/agreed upload process. Results come back as files.
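The file-in/file-out contract above can be sketched as a minimal science app. This is an illustrative skeleton only, assuming plain file I/O: the function name `run_work_unit` and the JSON fields are hypothetical, and a real BOINC app would additionally call the BOINC client library (boinc_init/boinc_finish) rather than rely on files alone.

```python
import json
from pathlib import Path


def run_work_unit(in_path: str, out_path: str) -> dict:
    """Read a work-unit description, do the science, write the result.

    Hypothetical sketch of BOINC's file-in/file-out convention: the
    wrapper stages in_path before the run and uploads out_path after.
    """
    params = json.loads(Path(in_path).read_text())
    # Placeholder "science": a real app would run the period search or
    # MCMC here instead of echoing the parameters back.
    result = {
        "target": params["target"],
        "trial_periods_searched": params["n_trials"],
        "status": "ok",
    }
    Path(out_path).write_text(json.dumps(result))
    return result


if __name__ == "__main__":
    Path("wu_in.json").write_text(
        json.dumps({"target": "Kepler-17b", "n_trials": 10000}))
    print(run_work_unit("wu_in.json", "wu_out.json"))
```

The point of the shape is that the partner project's server never needs to know anything about the science: it only stages input files, runs the binary, and collects output files.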

Practical reality check: BOINC project operators are researchers with their own science to run. They will be helpful if your science is interesting and your app is stable and well-packaged. They will lose interest if your app crashes frequently, produces ambiguous results, or requires constant support. You still need to build and maintain the science app cross-platform. You just skip the server operations burden.

Best option if: You want BOINC-grade volunteer compute but do not want to operate a BOINC server. Asteroids@home is the most natural first contact given OpenAstro's astronomical focus.


4. Option C: Science United and BOINC Alpha

Science United

Science United (scienceunited.org) is an NSF-funded meta-scheduler built on top of BOINC. Volunteers attach to Science United rather than to individual projects, and Science United distributes their compute across participating projects based on each volunteer's stated preferences ("I care about astronomy").

What this means for OpenAstro:
  • If OpenAstro becomes a Science United member project, your jobs can receive volunteer compute from Science United's pool.
  • You still need to run a BOINC server. Science United doesn't replace the server; it just routes additional volunteers to you.
  • Application process: contact the Science United coordinators. They have an acceptance process for new projects.

Realistic expectation: Science United is in steady-state, low-growth mode. It's worth joining once your BOINC project is operational to get additional volunteer exposure, but it's not a shortcut to avoid infrastructure.

BOINC Alpha

BOINC Alpha is an internal UC Berkeley testing project. Not relevant for OpenAstro — it's for testing the BOINC client software, not for hosting science projects.


5. Option D: HTCondor

HTCondor is the other major volunteer/institutional grid computing system. It originated at the University of Wisconsin-Madison and is widely deployed at universities and research labs.

How HTCondor Differs from BOINC

| Feature | BOINC | HTCondor |
| --- | --- | --- |
| Target audience | Anonymous public volunteers | Known machines (lab workstations, clusters) |
| Authentication | None (anyone can volunteer) | Strong authentication; machines must be enrolled |
| Job types | Long-running, batch | Any: interactive, batch, MPI, GPU |
| Scheduling | Server-driven (server pushes WUs) | Client-driven (client negotiates for jobs) |
| Fault tolerance | High (built for unreliable volunteers) | Moderate (assumes more reliable machines) |
| Setup complexity | High (build from source) | Moderate (packages available, good docs) |

When HTCondor Makes Sense for OpenAstro

If you have relationships with universities or research institutions, you can run HTCondor to access their idle workstations and computing allocations. For example:
  • A partner university astronomy department with 50 research workstations
  • A national computing allocation (see Option E)
  • Institutional machines in OpenAstro's own contributor network

HTCondor's flocking feature lets jobs flow between independent HTCondor pools: if a partner institution's pool has idle capacity, your jobs can flock there.

Practical assessment: HTCondor requires buy-in from institutional IT administrators. You can't ask random volunteers to install HTCondor — it's designed for managed environments. For Stage 2, BOINC is the right tool for community volunteer compute. HTCondor becomes relevant if OpenAstro lands a formal academic partnership (Stage 3+).

HTCondor + BOINC together: Some large projects run HTCondor internally for their own cluster compute, and BOINC for volunteer compute. The OpenAstro equivalent: HTCondor on any institution-provided resources, BOINC for the community volunteer layer.


6. Option E: Academic Compute Allocations

US: ACCESS (formerly XSEDE)

ACCESS (access-ci.org) provides free supercomputing allocations to US researchers. You apply for CPU-hours on national systems (Stampede3, Bridges-2, Delta, etc.).

Allocation tiers:
  • Explore: 200,000 CPU-hours. Application takes 1–3 weeks. Requires a project description, CV, and basic justification. Very accessible.
  • Discover: 1.5M CPU-hours. Application takes 4–6 weeks. Requires a startup allocation first.
  • Accelerate/Maximize: millions of CPU-hours. Full peer review; 6–12 month timeline.

200,000 CPU-hours is enormous for OpenAstro's Stage 1–2 needs: TTV MCMC for 100 systems × 200 CPU-hours each = 20,000 CPU-hours, leaving 180,000 for Lomb-Scargle, reprojection, and ML.
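The headroom arithmetic, as a small sketch (the numbers come from the text above; nothing here is an ACCESS API, and `remaining_cpu_hours` is just an illustrative helper):

```python
def remaining_cpu_hours(allocation: int, n_systems: int,
                        hours_per_system: int) -> int:
    """CPU-hours left in an allocation after a batch of per-system jobs."""
    used = n_systems * hours_per_system
    return allocation - used


# 100 TTV MCMC systems at 200 CPU-hours each,
# out of a 200,000-hour Explore award:
print(remaining_cpu_hours(200_000, 100, 200))  # → 180000
```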

Eligibility: US researchers at US institutions. Solo developer / independent researcher: borderline. You likely need an institutional affiliation. Strategy: collaborate with a faculty member at any university — they apply, you contribute the science and code.

Latency: Batch jobs on HPC clusters. Queue wait times range from minutes (off-peak) to days (peak). Fine for batch MCMC, not for transient follow-up.

EU: EuroHPC / PRACE successor

Similar to ACCESS but for European researchers. Partnership with a European university astronomy group could unlock this.

UK: DiRAC

DiRAC (Distributed Research utilising Advanced Computing) is the UK's national research computing facility, focused on astrophysics and particle physics. Competitive allocation process, but highly relevant scientifically.

Strategic recommendation: At Stage 1, apply for an Explore ACCESS allocation. 200K CPU-hours is free, the application is relatively lightweight, and it covers all Stage 1 compute needs with massive headroom. Frame OpenAstro as "citizen science platform development for distributed time-domain astronomy."


7. Option F: Cloud Spot Instances (AWS/GCP/Azure)

The Economics

Cloud spot/preemptible instances are unused capacity sold at a 60–90% discount. They can be terminated with 2 minutes' notice (AWS) or 30 seconds' notice (GCP).

Current spot pricing (March 2026, approximate):

| Instance | vCPUs | RAM | On-demand $/hr | Spot $/hr | Monthly at 100% |
| --- | --- | --- | --- | --- | --- |
| AWS c7i.xlarge | 4 | 8 GB | $0.178 | $0.035 | $25 |
| AWS c7i.4xlarge | 16 | 32 GB | $0.714 | $0.140 | $101 |
| GCP c3-highcpu-4 | 4 | 8 GB | $0.176 | $0.035–0.056 | $25–40 |
| GCP n2-highcpu-16 | 16 | 16 GB | $0.612 | $0.061 | $44 |
| Hetzner CCX13 | 2 | 8 GB | €7.99/month (flat) | (no spot) | €7.99 |

AWS spot is cheap but adds operational complexity: you must handle termination gracefully (checkpoint state to S3). GCP preemptible is similar but max 24-hour runtime.
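Graceful termination handling can be sketched as follows, under these assumptions: AWS publishes the two-minute warning at the instance-metadata path `spot/instance-action` (the URL returns 404 until a notice exists), and `checkpoint_to_s3` is a hypothetical placeholder for "serialize chain state to object storage". The fetch callable is injectable purely so the decision logic can be exercised off-instance.

```python
import json
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"


def termination_pending(fetch=None) -> bool:
    """True once the spot termination notice has been published.

    fetch is injectable for testing; by default it reads instance
    metadata (any error, including the pre-notice 404, means no notice).
    """
    if fetch is None:
        def fetch():
            try:
                with urllib.request.urlopen(METADATA_URL, timeout=1) as r:
                    return r.read().decode()
            except Exception:
                return None
    body = fetch()
    if body is None:
        return False
    return json.loads(body).get("action") in ("stop", "terminate")


# Inside the MCMC loop (checkpoint_to_s3 is hypothetical):
# if termination_pending():
#     checkpoint_to_s3(chain_state)
#     sys.exit(0)
```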

For TTV MCMC specifically: A 12-hour MCMC job on a c7i.4xlarge (16 vCPU; TTVFast itself is single-threaded, so the parallelism comes from emcee's walker-level thread pool) costs 12 × $0.14 = $1.68. Running 50 systems per week = $84/week ≈ $336/month, which exceeds the $15/month budget by 22×.

For 5 systems per week (the realistic Stage 2 scope): $8.40/week = $36/month. Still above budget but borderline.

Cloud bursting for transient follow-up: For time-sensitive transient classification (must return a result in 10 minutes), cloud is the only option; BOINC is too slow. An AWS Lambda invocation (3 GB RAM, 15-minute maximum timeout) costs roughly $0.0000166667 per GB-second of compute. A 60-second classification job at 3 GB RAM = $0.003. Run 500 classifications per night = $1.50/night = $45/month. At Stage 1 volumes (<100 candidates/night), that is under $9/month.
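The cost model is simple enough to sanity-check in a few lines. A sketch, assuming the commonly published x86 Lambda compute rate of about $0.0000166667 per GB-second (treat the rate as approximate and ignore the small per-request fee):

```python
def lambda_cost(seconds: float, memory_gb: float, invocations: int,
                rate_per_gb_s: float = 0.0000166667) -> float:
    """Approximate AWS Lambda compute cost in dollars.

    Ignores the per-request fee; rate is the assumed per-GB-second price.
    """
    return seconds * memory_gb * invocations * rate_per_gb_s


# 60-second classification at 3 GB RAM, 500 runs per night:
nightly = lambda_cost(seconds=60, memory_gb=3, invocations=500)
print(round(nightly, 2))       # → 1.5
print(round(nightly * 30, 2))  # → 45.0
```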

Recommendation: Use cloud Lambda/Functions for latency-sensitive transient classification (the only task with a real deadline). Use BOINC or ACCESS HPC for batch TTV MCMC.

Hetzner as the Right Cloud for OpenAstro

Hetzner consistently offers 2–3× better price/performance than AWS/GCP for CPU-only workloads:
  • CX21 (2 vCPU, 4 GB RAM): €5.77/month, comparable to an AWS c6i.large at ~$70/month
  • CCX23 (4 vCPU, 16 GB RAM): €16.90/month, comparable to an AWS c6i.xlarge at ~$130/month

For tasks where you need dedicated compute (no volunteers available, time-sensitive):
  • Spin up a CCX23 for a weekend burst (€0.025/hour = €1.20 for 48 hours): run all pending TTV MCMCs, then terminate.
  • At twice monthly, that is €2.40/month for burst compute, well within budget.


8. Option G: GitHub Actions and CI Runners

A creatively cheap option for small-scale compute. GitHub's free tier provides:
  • 2,000 minutes/month of GitHub Actions compute on Linux runners (2 vCPU, 7 GB RAM)
  • Up to 20 concurrent jobs on paid plans

Can you run MCMC in GitHub Actions? Yes. A 6-hour MCMC job runs as a GitHub Actions workflow. The output artifact (chain file) gets uploaded to GitHub Releases or Backblaze B2 via a step.

Problems:
  • Jobs on GitHub-hosted runners are capped at 6 hours of runtime, regardless of plan.
  • Only 2,000 minutes/month are free ≈ 33 hours. Not meaningful for serious MCMC.
  • GitHub's terms prohibit using Actions for compute-intensive work unrelated to software development (the clause aimed at cryptocurrency mining). Whether batch MCMC triggers it depends on GitHub's interpretation. Risky to depend on.

Verdict: Use for automated testing of your science pipeline code. Do not use for production compute.


9. Comparing All Options

| Option | Setup Time | Monthly Cost | Latency | Scale Ceiling | Best For |
| --- | --- | --- | --- | --- | --- |
| Local VPS (no BOINC) | 0 | $0–15 | Low | Low | Stage 1, always |
| Self-hosted BOINC | 2 weeks | $7–9 | High (hours) | High (volunteer-dependent) | Stage 2+ batch |
| Partner BOINC project | 1–4 weeks (negotiation) | $0 | High | Medium | Stage 2 if partner agrees |
| ACCESS allocation | 2–4 weeks (application) | $0 | Medium (HPC queue) | Very high | Any stage, US researcher |
| AWS/GCP spot | 1 day | $20–100 | Low | Unlimited | Time-sensitive jobs only |
| Hetzner burst | 1 hour | $0–5 | Low | Medium | Burst needs, budget-conscious |
| HTCondor | Weeks + IT buy-in | $0 | Variable | Medium | Stage 3 with academic partner |
| Cloud Functions | 1 day | $1–10 | Very low (<1 s) | Unlimited | Transient classification |

10. The Community Experience: Making Compute Donation Motivating

The telescope-volunteer community and the compute-volunteer community overlap almost completely for OpenAstro. The same people pointing their telescopes at Kepler targets are likely willing to donate CPU time when their telescope is closed during daylight or cloudy nights.

Why Astronomers Are Different From Generic BOINC Volunteers

Generic BOINC volunteers are motivated by:
  • Altruism (helping medical research)
  • Competition (BOINC credit rankings)
  • Screen saver aesthetics

OpenAstro amateur astronomer volunteers are motivated by:
  • Scientific contribution to work they understand
  • Seeing their CPU time produce results on their data
  • Co-authorship credit (already the main telescope incentive)

This is a stronger motivation structure. An amateur who spent three nights gathering photometry on Kepler-17b has a personal stake in seeing the MCMC results. "Your telescope data is now being analyzed by your computer" is a uniquely compelling message.

The Feedback Loop to Design

Volunteer submits photometry from their telescope
    ↓
Server informs them: "Your transit timing for Kepler-17b has been added
to the analysis queue. The MCMC will start tonight."
    ↓
Their BOINC client picks up a TTV work unit for Kepler-17b
    ↓
Result uploads, validator accepts
    ↓
Notification: "Your computer finished analyzing 3 nights of transit
data, including 2 of your own observations. Current best-fit perturber
period: 47.3 ± 2.1 days. See the updated posterior."

This closes the loop in a way no other BOINC project can. The volunteer is not abstractly "helping science" — they are computing the answer to the question their own telescope asked.

What the BOINC Setup Looks Like for Volunteers

Step 1: Download BOINC client (boinc.berkeley.edu — one-click installer for Windows, Mac, Linux)

Step 2: In the BOINC Manager, "Add project" → enter https://boinc.openastro.net

Step 3: Create account with same email as OpenAstro account

Step 4: Account is automatically linked to telescope contributor profile → compute credit shown on same dashboard as telescope contribution score

That's it. BOINC manages everything else: scheduling when to run (idle CPU only by default), throttling (never more than X% of CPU), network bandwidth limits.

Time to first work unit received: Typically 5–15 minutes after registration.

Motivation Mechanisms

1. Unified credit score: Don't have separate "telescope credit" and "compute credit." Have a single "OpenAstro contribution score" that combines both. Every transit observation submitted = +N points. Every MCMC work unit completed = +M points. Same leaderboard.

2. "Your CPU analyzed your data" notifications: Every time a work unit is validated that used data from that volunteer's telescope, send them a notification.

3. System-specific dashboards: Each BOINC volunteer's profile page shows "Systems you've contributed compute time to" — a list of specific Kepler/TESS targets with links to the current posteriors. This is personalized science output tied directly to their compute donation.

4. Named acknowledgment in papers: The current acknowledgment policy is "200 hrs telescope time = co-author." Consider adding "200 hrs BOINC CPU = acknowledgment." This is already standard for BOINC projects (Asteroids@home lists BOINC volunteers in paper acknowledgments).

5. "Night mode" volunteering: When a volunteer's observatory is shut due to weather, they can switch to "compute mode" — the BOINC client runs at full throttle until the next clear night alert arrives. Converts weather-blocked nights from frustration to productive contribution.

6. Progress visualizations: A simple page per target system showing the MCMC chain progress — "we have explored 3.2 × 10^8 points in parameter space, current median perturber period 47.3d, chain is converging." Volunteers see the computation happening in real time (updated as work units complete). This is viscerally satisfying for people who understand Bayesian inference.

7. Team compute: "OpenAstro Europe MCMC Team" — regional groups compete on total compute donated. Leaderboards are extremely effective for BOINC volunteer retention (the original SETI@home grew its community largely on this mechanic).
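The unified score from mechanism 1 could be computed as below. This is a sketch of the design idea only: the function name and the point weights (N points per transit, M per work unit) are illustrative placeholders, not a decided policy.

```python
def contribution_score(transits_submitted: int, work_units_completed: int,
                       points_per_transit: int = 50,
                       points_per_work_unit: int = 10) -> int:
    """Single OpenAstro contribution score combining telescope and compute.

    The default weights are hypothetical; the point is that one number
    feeds one leaderboard for both kinds of contribution.
    """
    return (transits_submitted * points_per_transit
            + work_units_completed * points_per_work_unit)


# A volunteer with 3 submitted transits and 12 validated work units:
print(contribution_score(3, 12))  # → 270
```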

The Science App's Role in Motivation

The science app should produce real-time intermediate output, not just a final result. Design the science app to checkpoint every N MCMC steps and write a summary JSON (current best-fit, chain acceptance rate, estimated convergence time) to the output directory. BOINC uploads these checkpoint files periodically. The server renders them live.

This is more engineering, but the volunteer experience goes from "mysterious computation happening for 12 hours, then done" to "I can see the chain walking through parameter space, getting more confident."
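The checkpoint summary could look like the sketch below. The field names and the function are illustrative assumptions, not a defined format; the piece that is real is the pattern of periodically writing a small JSON file into the output directory for the server to render.

```python
import json
from pathlib import Path


def write_progress_summary(out_dir: str, step: int, best_fit: dict,
                           acceptance_rate: float) -> Path:
    """Write the periodic checkpoint summary the server renders live.

    Hypothetical format: BOINC's intermediate file upload would carry
    this file back to the server between checkpoints.
    """
    summary = {
        "step": step,
        "best_fit": best_fit,
        "acceptance_rate": round(acceptance_rate, 3),
    }
    path = Path(out_dir) / "progress_summary.json"
    path.write_text(json.dumps(summary))
    return path


p = write_progress_summary(".", 120_000,
                           {"perturber_period_days": 47.3}, 0.2471)
print(json.loads(p.read_text())["best_fit"]["perturber_period_days"])  # → 47.3
```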


11. Recommended Path

Given the constraints (solo developer, $15/month budget, Stage 1 current state), here is the concrete recommended path:

Right Now (Stage 1)

  • All compute on local machine + Hetzner CX11 VPS
  • Celery + Redis for background job queue
  • TTV MCMC runs overnight locally or on a rented CCX23 for $1.20/weekend burst
  • No BOINC, no cloud functions, no HTCondor

Apply for an ACCESS Explore allocation in parallel. 200,000 CPU-hours for free. Application is 2–3 days of work. This is the highest ROI compute option available — do it during Stage 1 while the compute demand is low and the application process doesn't feel urgent.

Stage 1→2 Transition (first telescope volunteers joining)

  • Set up BOINC server on Hetzner CX21 (upgrade from CX11, same ~$6/month)
  • Start with Linux-only science app (simplest to package)
  • Frame volunteer onboarding as: "telescope contribution + compute contribution = full OpenAstro participant"
  • Deploy cloud function for transient classification (AWS Lambda, $2–5/month for hundreds of nightly classifications)

Stage 2 (20–50 telescope sites, 100+ compute volunteers)

  • Self-hosted BOINC carrying all batch TTV MCMC and period searches
  • ACCESS HPC for burst needs (free)
  • Lambda/Cloud Functions for time-sensitive transient classification
  • Explore Asteroids@home partnership to expand compute volunteer pool

Stage 3 (50+ sites)

  • BOINC mature, handling all standard jobs
  • Consider HTCondor if formal academic partnerships exist
  • Consider Science United membership for volunteer acquisition
  • Dedicated high-memory instance (CX41 or similar) for reprojection staging

See also: [[BOINC Integration Plan]] for detailed BOINC server setup, work unit design, and validation. See also: [[TTV Reverse N-Body Inference]] for the science case driving most of the compute demand.