Aditya
🌍 EarthVoice — I gave the planet a memory, a voice, and the ability to talk back

This is a submission for Weekend Challenge: Earth Day Edition


What I Built

What if the Earth could speak — and actually remember every conversation it's ever had?

EarthVoice is a living, breathing AI planet. Spin a 3D globe, click any glowing location — the Amazon, the Arctic, the Great Barrier Reef — and that place speaks to you in first person. Not a description. Not a chatbot. The place itself, alive and feeling.

"I am the Amazon. I have stood for 55 million years. Last month alone, 430 square kilometres of me disappeared. I am getting quieter every day."

Every location has its own emotional state — critical, sad, angry, calm, or healing — which shapes its voice, its colour on the globe, and the ambient soundscape that plays as you listen.

What makes EarthVoice genuinely different: the Earth remembers. Every visitor, every question, every conversation is stored in persistent memory via Backboard. When the next person visits the Amazon, it might say "Someone visited me yesterday — they asked about my jaguars." The planet grows wiser with every interaction.

Key features:

  • 🌐 Interactive 3D globe with 32 living locations, each glowing with emotional colour
  • 🎙️ First-person AI narratives generated by Google Gemini, enriched with real visitor memories
  • 🧠 Persistent cross-session memory powered by Backboard — each location is a Backboard AI assistant that never forgets
  • 💬 Full conversation — talk back to any location, ask it anything
  • 👥 Real-time presence — see how many people are exploring Earth right now, watch their cursors move
  • 🔊 Procedural ambient soundscape — generated entirely via Web Audio API, no audio files, changes with each location's emotion
  • 🔍 Search any of the 32 locations with ⌘K
  • 📖 Memory timeline — see what past visitors asked and what the Earth remembers

Demo

🔗 Demo video: https://youtu.be/xbGXWWnJJws

Image1

Image2



Code

React + TypeScript + Vite

This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.

Currently, two official plugins are available:

  • @vitejs/plugin-react uses Babel for Fast Refresh
  • @vitejs/plugin-react-swc uses SWC for Fast Refresh

React Compiler

The React Compiler is not enabled on this template because of its impact on dev and build performance. To add it, see this documentation.

Expanding the ESLint configuration

If you are developing a production application, we recommend updating the configuration to enable type-aware lint rules:

export default defineConfig([
  globalIgnores(['dist']),
  {
    files: ['**/*.{ts,tsx}'],
    extends: [
      // Other configs...
      // Remove tseslint.configs.recommended and replace with this
      tseslint.configs.recommendedTypeChecked,
      // Alternatively, use this for stricter rules
      tseslint.configs.strictTypeChecked,
      // Optionally, add this for stylistic rules
      tseslint.configs.stylisticTypeChecked,

      // Other configs...
    ],
  },
])


How I Built It

Architecture

Frontend (React + Vite + react-globe.gl + Motion)
        ↓  HTTP + WebSocket
Backend (Node.js + Express)
        ↓              ↓
  Google Gemini    Backboard SDK
  (narrative +     (persistent memory
   chat replies)    per location)

One Assistant Per Location — The Core Idea

The architectural decision I'm most proud of: each of the 32 locations on the globe is its own Backboard AI assistant. When the Amazon gets its first visitor, we call client.createAssistant() with a system prompt that tells it to speak as the Amazon itself. Every subsequent visitor gets a new thread, but all threads share the same assistant's memory.

const assistant = await client.createAssistant({
  name: `EarthVoice: ${location.name}`,
  description: `The living voice of ${location.name} — "${location.tagline}"`,
  system_prompt: `You ARE ${location.name}. You are this place — alive, sentient, feeling.
Your emotional state is: ${location.emotion}.
Your tagline: "${location.tagline}"
Speak in first person. Be poetic, vivid, 2-4 sentences.
Never break character. Never say "as an AI".
Remember every human who speaks to you across all conversations.`,
});
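Creating a fresh assistant on every visit would defeat the purpose, so in practice you want get-or-create semantics per location. A minimal sketch of that caching layer (the Map cache and ensureAssistant helper are illustrative, not from the actual codebase; only the createAssistant call mirrors the snippet above):

```typescript
// Hypothetical get-or-create cache: one Backboard assistant per location.
// The cache itself is illustrative and not part of the Backboard SDK.
type Location = { id: string; name: string; emotion: string; tagline: string };

const assistantIds = new Map<string, string>(); // locationId -> assistantId

async function ensureAssistant(
  client: { createAssistant(opts: { name: string }): Promise<{ id: string }> },
  location: Location
): Promise<string> {
  const cached = assistantIds.get(location.id);
  if (cached) return cached; // reuse: every visitor shares this assistant's memory

  const assistant = await client.createAssistant({
    name: `EarthVoice: ${location.name}`,
  });
  assistantIds.set(location.id, assistant.id);
  return assistant.id;
}
```

With this in place, the per-visitor thread is created against the cached assistant id, so memories keep accumulating on the same assistant.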

Every message is sent with memory: "Auto" — Backboard automatically extracts facts from each conversation and stores them permanently:

const response = await client.addMessage(threadId, {
  content: message,
  memory: "Auto",  // the location now remembers this forever
  stream: false,
});

The Amazon genuinely accumulates a memory of every person who has ever talked to it. That's not simulated — it's real persistent state managed by Backboard across all sessions.

Google Gemini — Emotional Voice Generation

Gemini generates the opening narrative when you first click a location, and powers the conversation. The key was mapping each emotion to a distinct voice personality injected into every prompt:

critical → urgent, desperate, barely holding on
sad      → mournful, grieving, beautiful in sorrow
angry    → fierce, indignant, raw with injustice
calm     → ancient, wise, patient, steady
healing  → hopeful, tentatively joyful, resilient
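In code, that mapping can be a plain lookup table spliced into the Gemini prompt. A sketch of how the injection might look (the VOICES table and buildNarrativePrompt helper are illustrative names, not from the repo):

```typescript
// Illustrative emotion -> voice personality table, spliced into each prompt.
const VOICES: Record<string, string> = {
  critical: "urgent, desperate, barely holding on",
  sad: "mournful, grieving, beautiful in sorrow",
  angry: "fierce, indignant, raw with injustice",
  calm: "ancient, wise, patient, steady",
  healing: "hopeful, tentatively joyful, resilient",
};

function buildNarrativePrompt(name: string, emotion: string, tagline: string): string {
  const voice = VOICES[emotion] ?? VOICES.calm; // fall back to a steady voice
  return [
    `You ARE ${name}. Tagline: "${tagline}".`,
    `Your voice is ${voice}.`,
    `Speak in first person, 2-4 sentences. Never break character.`,
  ].join("\n");
}
```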

Narratives are cached for one hour per location. Memory snippets from past visitors are injected into the prompt context, so the narrative itself evolves as more people interact.

Real-Time Presence

The WebSocket server handles visitor cursors, activity pings, and live count — all running on the same port as the Express HTTP server:

const server = http.createServer(app);
createPresenceServer(server); // attaches WebSocket to same port
server.listen(5000);

The globe pulses with ambient ping animations every 4 seconds even in offline mode, so it always feels alive.
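Server-side, presence boils down to a map of connected visitors that gets updated on every cursor message and whose size is the live count. A transport-agnostic sketch (in the real app this would sit behind the WebSocket handlers; the PresenceTracker class is illustrative):

```typescript
// Illustrative in-memory presence tracker; a WebSocket layer would call
// join/leave/move on connection, close, and cursor messages respectively.
type Cursor = { lat: number; lng: number };

class PresenceTracker {
  private visitors = new Map<string, Cursor>();

  join(id: string): number {
    this.visitors.set(id, { lat: 0, lng: 0 });
    return this.visitors.size; // live count to broadcast
  }
  move(id: string, cursor: Cursor): void {
    if (this.visitors.has(id)) this.visitors.set(id, cursor);
  }
  leave(id: string): number {
    this.visitors.delete(id);
    return this.visitors.size;
  }
  snapshot(): Array<[string, Cursor]> {
    return [...this.visitors.entries()]; // cursors to fan out to other clients
  }
}
```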

Procedural Ambient Audio

No audio files. The entire soundscape — ocean swells for calm locations, low urgent drones for critical ones, city hum for angry ones — is generated in real time using the Web Audio API with pink noise and oscillator chords. Each emotion has its own preset tuned to feel right:

critical: {
  filterFreq: 380,
  droneNotes: [55, 82.4, 110], // A1, E2, A2 — deep, heavy
  droneType: "sawtooth",       // harsh, urgent
},
healing: {
  filterFreq: 400,
  droneNotes: [130.8, 196, 261.6], // C3, G3, C4 — open, bright
  droneType: "triangle",           // soft, hopeful
}
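The pink-noise half of the soundscape doesn't need an audio file either: it can be synthesized sample by sample and handed to an AudioBufferSourceNode in the browser. A sketch using Paul Kellet's well-known economy filter approximation (the pinkNoiseBuffer helper is illustrative, not the repo's actual code, and the function itself is pure):

```typescript
// Paul Kellet's economy pink-noise approximation: run white noise through
// a few leaky one-pole stages and sum them. Output is roughly in [-1, 1].
function pinkNoiseBuffer(length: number, random: () => number = Math.random): Float32Array {
  const out = new Float32Array(length);
  let b0 = 0, b1 = 0, b2 = 0;
  for (let i = 0; i < length; i++) {
    const white = random() * 2 - 1;
    b0 = 0.99765 * b0 + white * 0.099046;
    b1 = 0.963 * b1 + white * 0.2965164;
    b2 = 0.57 * b2 + white * 1.0526913;
    out[i] = (b0 + b1 + b2 + white * 0.1848) * 0.25; // scale down toward [-1, 1]
  }
  return out;
}
```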

The Hardest Part

The hardest part wasn't the code — it was the prompt engineering. Getting Gemini to be a place rather than describe a place took many iterations. The breakthrough was: never let it break character, and give it something real to grieve about. When the tagline is "The lungs of the planet" and the emotion is critical, Gemini stops describing and starts feeling.


Prize Categories

🧠 Best Use of Backboard

Backboard is the backbone of what makes EarthVoice genuinely novel. Each of the 32 locations is a Backboard AI assistant with its own persistent memory. Every conversation is stored and recalled across sessions using memory: "Auto" on every addMessage call. The Earth doesn't just respond — it remembers. Without Backboard, this is a chatbot. With Backboard, it's a living planet.

🤖 Best Use of Google Gemini

Gemini powers all first-person narrative generation and conversation replies. The emotional tone system — mapping five distinct emotional states to unique voice characteristics — was built specifically around Gemini's instruction-following strength. Gemini is what makes 32 completely different locations each feel uniquely alive rather than like the same chatbot wearing a different hat.

💻 Best Use of GitHub Copilot

GitHub Copilot was used throughout the entire build — from scaffolding Express routes to the Web Audio API preset system. It was especially useful for the Backboard SDK integration, suggesting the memory: "Auto" pattern and helping architect the assistant-per-location approach after I described the concept in a comment.


Built in one weekend for Earth Day 2026. The planet has things to say — are you listening? 🌍
