Our Lunar Landing Site API goes Live Today!

The Problem: Finding a Safe Place to Land on the Moon is Really Hard

Imagine you're planning a lunar mission. You need to find a landing site that's:

  • Safe: Flat terrain, low roughness, minimal hazards
  • Sunlit: Enough solar visibility for power generation
  • Accessible: Close to your target location
  • Well-characterized: Backed by high-quality data

The traditional workflow? Download hundreds of gigabytes of NASA raster data, spend days in QGIS or ArcGIS running terrain analyses, manually compare sites, and repeat for every mission scenario.

Time required: 2-4 weeks per landing site evaluation.

For the Artemis program, NASA identified 13 candidate regions near the Lunar South Pole after extensive analysis of terrain data, illumination patterns, and accessibility—work that requires specialized GIS expertise and significant computational resources.

We wanted to make this process instant and accessible to anyone by doing the analysis and computation ahead of time.

The Solution: Turn Weeks of Analysis into Milliseconds

We built an API that pre-processes NASA's Lunar Reconnaissance Orbiter (LRO) data and serves analyzed landing sites through a fast, queryable interface.

Built by Iris Data Labs, this API is part of our broader effort to make lunar and planetary data more accessible for exploration and research.

Key features:

  • 🗺️ 1.18M pre-analyzed sites across the Lunar South Pole
  • Sub-100ms response times with PostGIS spatial indexing
  • 🤖 Smart recommendations with plain-English reasoning
  • 📊 60+ features per site: terrain metrics, illumination data, hazard scores
  • 🌍 Export anywhere: GeoJSON for QGIS, KML for Google Earth, CSV for Excel

Instead of spending weeks on GIS analysis, mission planners can now query:

curl "https://lunarlandingsiteapi.up.railway.app/api/v1/recommendations?lat=-89.5&lon=45&mission_type=artemis&top_n=5" \
  -H "X-API-Key: your_key_here"

And get ranked recommendations in milliseconds.
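For readers who prefer Python, here's a rough equivalent of that curl call using the requests library. The response shape is assumed from the example output shown later in this post, so treat the parsing as a sketch:

import requests

API_KEY = "your_key_here"  # beta keys look like ldp_live_<32 random chars>
BASE_URL = "https://lunarlandingsiteapi.up.railway.app/api/v1"

resp = requests.get(
    f"{BASE_URL}/recommendations",
    params={"lat": -89.5, "lon": 45, "mission_type": "artemis", "top_n": 5},
    headers={"X-API-Key": API_KEY},
    timeout=10,
)
resp.raise_for_status()

# Assumes a ranked list of site objects like the example shown further down
for site in resp.json():
    print(site["rank"], site["site_id"], site["overall_score"])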

The Tech Stack: Why These Choices Matter

FastAPI: Speed + Developer Experience

We chose FastAPI because it provides interactive API documentation out of the box. The auto-generated /docs endpoint lets beta testers explore and test the API without writing any code.

Key architectural benefits:

  • Type validation: Catch coordinate errors (invalid lat/lon) before they hit the database
  • Auto-generated OpenAPI spec: Third-party tools can integrate automatically
  • Async support: Handle concurrent requests efficiently
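As a rough illustration (not our exact code), here's how FastAPI-style validation can reject bad coordinates before they ever reach the database. The parameter names and bounds below are assumptions for the sketch:

from fastapi import FastAPI, Query

app = FastAPI()

@app.get("/api/v1/sites")
async def list_sites(
    lat: float = Query(..., ge=-90, le=-83.5, description="Latitude within South Pole coverage"),
    lon: float = Query(..., ge=-180, le=180),
    radius_km: float = Query(50, gt=0, le=500),
):
    # Out-of-range values return a 422 with a clear error message
    # before this function body ever runs.
    return {"lat": lat, "lon": lon, "radius_km": radius_km}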

PostgreSQL + PostGIS: Spatial Queries at Scale

For 1.18M points with spatial queries, we needed spatial indexing. PostGIS was the natural choice for geographic data at this scale.

Why PostGIS?

  • Battle-tested for spatial operations
  • GIST indexes make radius queries incredibly fast
  • Native geography types handle spherical distance calculations
  • Integrates seamlessly with standard SQL

The performance difference:

Without spatial indexing: Sequential scan through all 1.18M rows
With PostGIS GIST index: Direct lookup of relevant sites only

A properly indexed spatial query searching a 50km radius completes in 30-50ms even with millions of rows. That's the power of specialized geospatial indexing.

The geography type automatically handles distance calculations on a sphere, which is essential for lunar coordinates where standard Euclidean distance would be wildly inaccurate.
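To make that concrete, here's roughly what a radius query looks like with the geography type and ST_DWithin. The table and column names are assumptions, not our actual schema:

import psycopg2

conn = psycopg2.connect("postgresql://user:pass@host/db")
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT site_id, hazard_index, mean_visibility,
               ST_Distance(geom, ST_MakePoint(%s, %s)::geography) AS dist_m
        FROM landing_sites
        WHERE ST_DWithin(geom, ST_MakePoint(%s, %s)::geography, %s)
        ORDER BY dist_m
        LIMIT 20;
        """,
        (45.0, -89.5, 45.0, -89.5, 50_000),  # lon, lat, radius in metres
    )
    rows = cur.fetchall()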

Railway: Simple Deployment

We deployed on Railway for simplicity:

  • PostgreSQL with PostGIS extension included
  • Auto-deploys from GitHub (push = live in 2 minutes)
  • Environment variables managed through their dashboard
  • Free tier perfect for beta testing

The deployment process during the beta phase: push to GitHub, wait 90 seconds, done. No Docker configs, no Kubernetes, no infrastructure complexity.

The Data Pipeline: From NASA Rasters to API

Data Sources

NASA's Lunar Reconnaissance Orbiter provides two critical datasets:

LOLA (Lunar Orbiter Laser Altimeter):

  • 5-meter resolution elevation data
  • Used to derive: slope, roughness, relief, hazard index

LROC (Lunar Reconnaissance Orbiter Camera):

  • Illumination maps at 60-240m resolution
  • Shows solar visibility percentage over a lunar day

Total raw data: ~300GB of GeoTIFF rasters covering the South Pole region.

Processing Pipeline

Step 1: Site Generation
Created a grid of candidate sites spaced 200m apart across -90° to -83.5° latitude, resulting in 1.18M candidate locations.

Step 2: Terrain Analysis
For each site, computed metrics at three scales (25m, 50m, 100m radius):

  • Elevation statistics (mean, std deviation)
  • Slope analysis (mean, max, variability)
  • Roughness (surface texture)
  • Relief (elevation range)
  • Composite hazard index (0-3, lower is safer)

All distances are measured in meters; slopes are expressed in degrees.
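For a sense of what these metrics look like in code, here's a toy sketch over a single elevation window with NumPy. The real pipeline works on LOLA GeoTIFFs with proper map projections and differs in detail:

import numpy as np

def terrain_metrics(elev: np.ndarray, pixel_size_m: float = 5.0) -> dict:
    # Per-pixel slope from elevation gradients
    dz_dy, dz_dx = np.gradient(elev, pixel_size_m)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return {
        "elev_mean": float(elev.mean()),
        "elev_std": float(elev.std()),
        "slope_mean": float(slope_deg.mean()),
        "slope_max": float(slope_deg.max()),
        "roughness": float(np.abs(elev - elev.mean()).mean()),  # crude texture proxy
        "relief": float(elev.max() - elev.min()),
    }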

Step 3: Illumination Analysis
Analyzed LROC data to determine solar visibility:

  • Mean visibility (% of lunar day with sunlight)
  • Minimum visibility (worst spot in the area)
  • Variability (consistency of illumination)

Near the South Pole, some crater rims get >90% illumination (near-constant sunlight), while crater floors get <5% (permanent shadow). This is critical data for mission planning.

Step 4: Database Loading
Batch-loaded all 1.18M analyzed sites into PostgreSQL with spatial indexes. Total processing time: ~45 minutes. Database size: ~12GB with indexes.

The Smart Recommendation Engine

The most interesting endpoint is /recommendations, which uses mission-specific scoring to rank sites.

Mission-Specific Scoring

Different missions have different priorities, so the scoring weights adapt:

Artemis (Human Landing)

  • Safety: 50% weight (crew protection is paramount)
  • Illumination: 30% weight (power for life support)
  • Accessibility: 20% weight (reach target region)

Robotic Lander

  • Safety: 40% weight (protect instruments)
  • Illumination: 40% weight (solar power critical)
  • Accessibility: 20% weight

Rover Mission

  • Safety: 30% weight (can handle rougher terrain)
  • Illumination: 20% weight (less critical)
  • Accessibility: 50% weight (must reach science targets)

Each component score (0-10) is calculated from the underlying metrics:

  • Safety score: Based on hazard index, slope, roughness
  • Illumination score: Based on solar visibility percentage
  • Accessibility score: Based on distance to target location
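In simplified form, the scoring looks something like the snippet below. The weights match the tables above; the component-score formulas are illustrative assumptions rather than our exact implementation:

MISSION_WEIGHTS = {
    "artemis": {"safety": 0.5, "illumination": 0.3, "accessibility": 0.2},
    "robotic": {"safety": 0.4, "illumination": 0.4, "accessibility": 0.2},
    "rover":   {"safety": 0.3, "illumination": 0.2, "accessibility": 0.5},
}

def overall_score(site: dict, mission_type: str) -> float:
    w = MISSION_WEIGHTS[mission_type]
    safety = max(0.0, 10.0 - site["hazard_index"] * (10.0 / 3.0))  # hazard 0-3 -> 10-0
    illumination = site["mean_visibility"] / 10.0                  # 0-100% -> 0-10
    accessibility = max(0.0, 10.0 - site["distance_km"] / 5.0)     # 0 km -> 10, 50 km -> 0
    return round(
        w["safety"] * safety
        + w["illumination"] * illumination
        + w["accessibility"] * accessibility,
        1,
    )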

Plain-English Reasoning

Every recommendation includes human-readable explanations:

Example output:

{
  "rank": 1,
  "site_id": 928969,
  "overall_score": 8.7,
  "reasoning": "Exceptional safety (hazard 0.43) with excellent visibility (87.3%), very close to target (8.2 km). Prioritizes crew safety for human landing.",
  "warnings": ["No significant concerns identified"],
  "strengths": [
    "Excellent safety rating (9.2/10)",
    "Outstanding solar visibility (87.3%)",
    "Very flat terrain (slope 2.1°)"
  ]
}

This makes the API accessible to non-GIS experts. Mission planners understand why a site is recommended, not just that it scored well.

Authentication: Simple But Secure

For the beta, we implemented API key authentication with rate limiting:

Key features:

  • API keys in the format ldp_live_[32_random_chars]
  • Passed via X-API-Key header
  • Rate limited by tier (beta: 100 requests/day)
  • Usage tracked per key
  • Daily reset at midnight UTC

Architecture:

  • Keys stored in PostgreSQL with user metadata
  • Middleware validates on every request
  • Failed requests don't count against quota
  • Clear error messages guide users to sign up

This keeps the barrier to entry low while preventing abuse. Beta testers get their key instantly and can start querying within minutes.
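A minimal sketch of that flow as a FastAPI dependency is below. The in-memory key store stands in for the real PostgreSQL table, and the endpoint is just a placeholder:

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()
DAILY_LIMIT = 100  # beta tier

# In-memory stand-in for the real PostgreSQL key table (hypothetical data)
KEYS = {"ldp_live_example_key": {"requests_today": 0}}

async def require_api_key(x_api_key: str = Header(...)):
    key = KEYS.get(x_api_key)
    if key is None:
        raise HTTPException(401, "Invalid API key - sign up for a free beta key")
    if key["requests_today"] >= DAILY_LIMIT:
        raise HTTPException(429, "Daily quota exceeded - resets at midnight UTC")
    key["requests_today"] += 1  # rejected requests above never reach this line
    return key

@app.get("/api/v1/sites", dependencies=[Depends(require_api_key)])
async def list_sites():
    return {"ok": True}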

Key Architectural Decisions

Why PostgreSQL + PostGIS Over NoSQL?

We considered MongoDB with geospatial indexes, but chose PostgreSQL + PostGIS because:

Data is relational and structured: Every site has the same 60+ features. A fixed schema makes sense.

Spatial queries are critical: PostGIS is battle-tested for geographic operations. The ST_DWithin function makes radius searches trivial.

ACID compliance matters: For a data API, consistency is more important than eventual consistency.

Performance was proven: PostGIS GIST indexes can handle millions of spatial queries efficiently.

The trade-off? More complex setup than a managed NoSQL service. But for this use case, relational + spatial was the right choice.

Why Denormalization?

Originally we considered normalized tables: sites, terrain_metrics, illumination_metrics.

We chose one wide table instead because:

  • Every query needs data from all three "tables" anyway
  • No JOINs = simpler queries, better performance
  • Data is static (no updates, just reads)
  • Easier to reason about for API consumers

The downside? Larger table size and some data duplication at different radius scales (25m, 50m, 100m). But with disk space cheap and query speed critical, it was the right trade-off.

Why Multiple Analysis Radii?

Each site has metrics at 25m, 50m, and 100m radius. This lets users:

  • Small landers: Use 25m radius for precise site analysis
  • Large landers: Use 100m radius for broader area assessment
  • Compare scales: Understand terrain variability

It triples the data volume but provides much more flexibility for different mission scales.

The Beta is Live Today! 🚀

After weeks of development and testing, we're opening the public beta today. The API is fully functional with all core features ready:

What's ready:

  • 1.18M pre-analyzed sites loaded and indexed
  • Sub-100ms query performance verified
  • Interactive documentation at /docs
  • Free beta tier (100 requests/day)
  • All endpoints tested and stable

Who we're looking for:

  • Mission planners evaluating landing sites
  • Academic researchers working with lunar data
  • GIS professionals who need processed NASA datasets
  • Space enthusiasts and students learning about lunar exploration
  • Developers building lunar simulation or planning tools

What we'd love to know:

  • What features would make this more useful for your work?
  • What data formats do you prefer?
  • What mission scenarios should we optimize for?
  • How does this fit into your existing workflow?

What's Next

Immediate priorities:

  • Gather feedback from early beta users
  • Monitor API performance under real usage
  • Iterate on features based on user needs

Potential features (depending on beta feedback):

  • Python SDK to simplify common queries
  • Advanced filtering (slope ranges, multi-criteria search)
  • WebSocket endpoint for real-time queries
  • Additional export formats

Long-term vision:

  • Expand coverage beyond South Pole (full lunar surface)
  • Mars landing site data (HiRISE + MOLA)
  • Machine learning for automated site ranking
  • 3D terrain visualization

But first: We want to see how people actually use this and what would make it more valuable.

Try It Today

The public beta is live right now. Sign up takes 30 seconds, and you'll get:

  • Free API key (100 requests/day)
  • Access to all 1.18M landing sites
  • Interactive documentation
  • Export in GeoJSON, KML, CSV

Get started: https://ldp-api-beta.vercel.app

Quick test:

curl -H "X-API-Key: your_key_here" \
  "https://lunarlandingsiteapi.up.railway.app/api/v1/recommendations?lat=-89.5&lon=0&mission_type=artemis&top_n=3"

Interactive docs: https://lunarlandingsiteapi.up.railway.app/docs

We'd love to hear what you think! What features would be most useful? What's missing? Drop a comment or reach out at info@irisdatalabs.com.

Questions We're Still Figuring Out

As we launch this beta, there are some open questions we'd love the community's input on:

  1. Data formats: Is GeoJSON/KML/CSV enough? What other formats would be useful?
  2. Caching: Data is static—would Redis caching help for popular queries?
  3. SDKs: Python is obvious first choice. What other languages would you want?
  4. GraphQL: REST works great now, but would GraphQL be better for complex nested queries?
  5. Versioning: What's the best approach for handling breaking changes when we add /v2?

What would make this more useful for your work? We're particularly interested in hearing from:

  • Mission planners: What's missing for real mission planning?
  • Researchers: What analysis features would help?
  • GIS professionals: How does this fit your workflow?
  • Students: What would make this better for learning?

Today marks the official public beta launch. We're excited (and a bit nervous!) to see how people use this. If you're working on anything related to lunar exploration, mission planning, or space tech, or you just think this is interesting, we'd love to connect.

Beta signup: https://ldp-api-beta.vercel.app
Questions/feedback: info@irisdatalabs.com

Written by the team at Iris Data Labs - building data and AI solutions for space technology. Follow us for future updates on Lunar Data Platform and space data APIs.
