Preeti yadav

Voice of Earth: What If Nature Could Speak Back?

DEV Weekend Challenge: Earth Day

๐ŸŒ What I Built

Most apps tell you to "save the planet."

I wanted to build something that makes you pause and feel it instead.

Voice of Earth is an AI-powered interactive experience where nature speaks back to you.

Users can choose elements like a river, forest, air, or mountains, and receive a deeply emotional, AI-generated response based on their location and environmental context.

Instead of dashboards and statistics, this project focuses on empathy over information, turning environmental awareness into a personal conversation.


🎥 Demo

🔗 Live App: https://voice-of-earth.vercel.app/
🔗 GitHub Repo: https://github.com/preeti-3/voice-of-earth

Try selecting an element and entering your city, then just listen.


💻 Code

The full source code is available here:
👉 https://github.com/preeti-3/voice-of-earth


โš™๏ธ How I Built It

🧠 AI Layer (Core of the Experience)

I used Google Gemini to generate emotionally rich, context-aware responses.

Instead of one-line prompts, I designed structured prompts that:

  • Assign a role (e.g., "You are a river in Panipat")
  • Control tone (poetic, reflective, human-like)
  • Limit output length for clarity
  • Adapt based on environmental context

This ensured responses felt alive, not robotic.
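The structured-prompt idea above can be sketched as a small builder function. This is a minimal illustration of the approach, not the app's actual code; the function name, fields, and wording are all hypothetical.

```javascript
// Hypothetical sketch of a structured prompt for Gemini:
// role assignment + tone control + length limit + local context.
function buildNaturePrompt({ element, city, context }) {
  return [
    `You are a ${element} in ${city}.`,                          // assign a role
    `Speak in a poetic, reflective, human-like voice.`,          // control tone
    `Respond in at most 120 words.`,                             // limit length
    `Ground your words in this local context: ${context}.`,      // adapt to environment
  ].join("\n");
}

const prompt = buildNaturePrompt({
  element: "river",
  city: "Panipat",
  context: "high industrial pollution, shrinking water levels",
});
console.log(prompt);
```

The resulting string would then be sent to the Gemini API as the request text; keeping the constraints in a fixed template is what stops the output from drifting back into generic "AI-like" phrasing.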


🎨 Frontend Experience

Built with:

  • Next.js (App Router)
  • Tailwind CSS
  • Framer Motion (for smooth animations)

Key UI decisions:

  • Full-screen cinematic backgrounds for each nature element
  • Glassmorphism panels to keep focus on the message
  • Smooth transitions to create an immersive feel
  • Minimal UI to let the AI voice take center stage

๐ŸŒ Environmental Context Layer

Instead of generic responses, I introduced contextual awareness using:

  • City-based environmental data (mocked for now but structured for real APIs)

This allows responses to feel:

"local, relevant, and personal"
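One way to structure mocked city data so it can later be swapped for a real API is a lookup table shaped like the expected response. A minimal sketch, with illustrative city keys and fields (the app's real schema may differ):

```javascript
// Hypothetical mocked environmental data, keyed by lowercase city name.
// Shaped like an API response so a real data source can replace it later.
const MOCK_ENV_DATA = {
  panipat: { aqi: 180, riverHealth: "poor", treeCover: "low" },
  shimla:  { aqi: 60,  riverHealth: "good", treeCover: "high" },
};

function getEnvContext(city) {
  const data = MOCK_ENV_DATA[city.trim().toLowerCase()];
  // Fall back to a neutral context so unknown cities still get a response.
  if (!data) return { aqi: null, riverHealth: "unknown", treeCover: "unknown" };
  return data;
}

console.log(getEnvContext("Panipat")); // the mocked Panipat entry
```

Because the prompt only ever sees this object, wiring in a live air-quality or water-quality API later means changing one function, not the prompts.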


🎧 Voice Interaction

To deepen the experience, I added:

  • Browser-based Text-to-Speech (SpeechSynthesis API)

This lets users hear nature speak, making the interaction more memorable and emotional.
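The browser TTS step can be sketched like this. In the browser you would pass the real `window.speechSynthesis` and `window.SpeechSynthesisUtterance`; they are injected as parameters here only so the sketch stays runnable outside a browser. The function name and rate/pitch values are illustrative, not the app's actual settings.

```javascript
// Hypothetical sketch of speaking a response with the SpeechSynthesis API.
function speakAsNature(text, { synth, Utterance }) {
  synth.cancel();            // stop any utterance still playing
  const utterance = new Utterance(text);
  utterance.rate = 0.9;      // slightly slower for a calmer delivery
  utterance.pitch = 1.0;
  synth.speak(utterance);
  return utterance;
}

// Browser usage:
// speakAsNature("I am the river...", {
//   synth: window.speechSynthesis,
//   Utterance: window.SpeechSynthesisUtterance,
// });
```

Calling `synth.cancel()` first matters in practice: without it, selecting a new element while the previous one is still speaking queues the utterances back to back.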


🧩 Key Challenges & Decisions

1. Avoiding Generic AI Output

Early responses felt repetitive and "AI-like."

✅ Solved by:

  • Strong prompt engineering
  • Clear tone constraints
  • Role-based storytelling

2. Balancing Design vs Performance

I initially considered video backgrounds, but:

โŒ Heavy and distracting
โœ… Switched to high-quality images + subtle animations

Result:

  • Smooth performance
  • Cinematic feel

3. Making It Feel Like an Experience

Instead of adding more features, I focused on:

  • Emotion
  • Simplicity
  • Flow

๐Ÿ† Prize Categories

✅ Best Use of Google Gemini
The entire experience is powered by Gemini, with carefully engineered prompts to create emotionally intelligent, context-aware responses.


💭 Final Thoughts

This project started with a simple question:

"What if nature could respond to us?"

And the answer wasn't data.

It was emotion.

In a world full of dashboards and metrics, sometimes what we need most is a moment to listen.
