DEV Community

Manjunath Patil

Earth Day Study Companion: For Teach-Ins and Climate Learning

DEV Weekend Challenge: Earth Day

This is a submission for Weekend Challenge: Earth Day Edition

What I Built

I built Earth Day Study Companion, a climate learning workspace designed to help people move from reading to understanding to teaching to action.

The idea came from a simple problem. A lot of Earth Day and climate learning happens in fragments. Someone reads a long PDF report, watches a few videos, collects some slides, saves a few articles, and then tries to turn all of that into a class, a club session, a workshop, or a community discussion. The information exists, but the workflow is messy. Research is disconnected from teaching. Planning is disconnected from delivery. Good material is often trapped inside dense documents that are hard to search and even harder to teach from.

I wanted to build a product that makes that process feel connected from start to finish.

Earth Day Study Companion has three core experiences: Climate Library, Teach-In Builder, and Teach-In Facilitator.

Climate Library is the research layer. Users can upload climate reports, sustainability guides, Earth Day toolkits, policy PDFs, and other learning material. The system stores each document, splits it into chunks, creates embeddings, and runs retrieval before answering questions. That means the experience is not just a generic chatbot on top of a file upload; it is a grounded document assistant built on retrieval-augmented generation (RAG). Users can ask questions about what is inside a document, open the relevant page, and jump back to the exact source material that informed the answer. This is especially useful for long climate documents, where the user needs clarity, context, and source confidence.
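To make the ingestion step concrete, here is a minimal sketch of the chunking stage. This is an illustration, not the project's actual code, and the chunk size and overlap values are assumptions:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
    """Split a document into overlapping chunks so that retrieval
    can return reasonably self-contained passages.
    (Sizes are illustrative, not the app's real settings.)"""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
        # Step forward by less than a full chunk so adjacent
        # chunks overlap and no sentence is cut off cleanly.
        start += chunk_size - overlap
    return chunks
```

Each chunk would then be embedded and stored in a vector database (the project uses ChromaDB) along with metadata such as the source page, which is what makes jump-back navigation to the original PDF possible.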

The library also becomes more useful because it is connected to live multimodal interaction. A user can turn on the microphone and ask questions naturally. They can also share the screen and ask if a chart, image, article, PDF section, or slide is worth using. This matters in real Earth Day preparation because people do not only work with text. They work with visual material, reports, and presentation content. The screen share helps Gemini reason over what is visible, so the user can ask practical questions like whether a page is relevant, whether a graph is clear enough, or whether a specific visual should be included in a session.

Teach-In Builder is the structure layer. Once the user knows the topic they want to teach, the Builder turns that topic into a learning pathway. Instead of producing only a plain outline, the system creates both a pathway view and a mind map. The pathway helps with order and progression. The mind map helps with relationships and scope. This makes it easier to take a broad subject like renewable energy, climate justice, biodiversity, food systems, or circular economy and turn it into something teachable.

Each generated module can then be expanded into Guide, Practice Lab, and Field Media. The Guide is for explanation and learning flow. Practice Lab helps turn passive reading into active thinking. Field Media connects the topic to supporting examples and related material. I wanted the Builder to feel like a real educational workspace, not a one-shot prompt output. The result is something that can help a student explore a topic, but it can also help a facilitator or organizer design a real Earth Day learning session.

Teach-In Facilitator is the planning layer. This is where the project moves from learning support into real event preparation. The user starts by filling in a session brief with the audience, venue or context, title, duration, goals, focus areas, and available materials. From there, Gemini acts like a live planning partner. The user can talk through the idea naturally, refine the session structure, and decide how the event should feel for that specific audience.

This is also where the live controls become very important. The microphone supports a natural planning conversation. Screen share allows the facilitator to show slides, PDFs, webpages, images, and other planning material while asking Gemini for feedback. The webcam adds live visual context during the planning session. In practice, this makes the product feel much closer to a real coaching partner than a standard prompt box. A user can ask if a resource looks relevant, if a visual seems clear, or if a piece of material fits the tone of the session they are planning.

As the session develops, the app turns that planning process into a structured output with a summary, learning objectives, agenda, materials, and community actions. At the end, the system generates a facilitator report as a downloadable PDF. That final step is important because it turns a live planning session into something reusable. The user leaves with a practical artifact they can actually use for a school club, a local workshop, a library event, or a community Earth Day gathering.
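As a rough illustration of that final step, the structured plan can be assembled into a report body before being rendered to PDF. The field names below are assumptions based on the sections described above, and the real app would render the result with a PDF library rather than returning text:

```python
def build_report(plan: dict) -> str:
    """Assemble the facilitator report body from a structured plan.
    (Field names are illustrative; the actual backend renders
    this content into a downloadable PDF.)"""
    lines = ["# Facilitator Report", "", "## Summary", plan["summary"], ""]
    for heading, key in [
        ("Learning Objectives", "objectives"),
        ("Agenda", "agenda"),
        ("Materials", "materials"),
        ("Community Actions", "actions"),
    ]:
        lines.append(f"## {heading}")
        lines.extend(f"- {item}" for item in plan[key])
        lines.append("")
    return "\n".join(lines)
```

The value of this shape is that the same structured plan can back both the on-screen summary and the exported artifact, so the live session and the takeaway document never drift apart.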

My goal with this project was to build something that feels genuinely useful for Earth Day. Not just a climate-themed UI, not just an AI wrapper, and not just a static educational demo. I wanted to build a tool that helps people understand climate material, turn it into a teaching structure, and prepare a real session around it.

Demo

Live demo: https://earthdaycompanion.vercel.app/

Video demo: YOUR_YOUTUBE_LINK

In the demo, I walk through the product as a full learning and planning flow.

I start inside Climate Library by uploading a climate-related PDF and asking questions about it. This shows how the system indexes the document, retrieves relevant chunks, and answers in a way that stays tied to the uploaded material. I also show page-level navigation and jumping back to the relevant part of the PDF, because that is one of the most important parts of the library experience. I wanted viewers to see that the system is not guessing. It is actually working with the document.

Then I move into Teach-In Builder and generate a structured pathway on an Earth Day topic. I show the pathway view and the mind map view, then open a module to show how the Guide, Practice Lab, and Field Media sections work. This part demonstrates how the product turns a broad environmental topic into a teachable sequence instead of only summarizing it.

Finally, I open Teach-In Facilitator and show how the app can be used as a live planning partner for a real Earth Day session. I walk through the brief, start the live flow, and show how Gemini helps shape the structure of the teach-in. I also show how screen sharing can be used to review materials visually during the planning process. At the end, I generate the facilitator report PDF to show how the live planning flow becomes something concrete and reusable.

Code

GitHub repository: https://github.com/ladiesmans217/Earth-Day-Challenge

The project is built with a React and TypeScript frontend and a Python backend. The frontend handles the user experience across the three main flows, while the backend handles document processing, retrieval, generation, and report output.

On the document side, the backend stores uploaded PDFs, chunks the content, creates embeddings, and uses ChromaDB for retrieval. On the live interaction side, Gemini powers the voice-based multimodal experience. On the planning side, Gemini function calling is used to create a structured teach-in plan. On the output side, the backend generates a facilitator report PDF so the planning session ends with a practical result.

How I Built It

I built the product around a three step model: study, structure, and facilitate.

For the study layer, the main focus was grounding. Climate material is often long, technical, and dense, so I did not want a system that simply accepted a PDF and then answered in a vague way. When a user uploads a document into Climate Library, the backend stores the file, extracts the text, splits it into chunks, creates embeddings, and stores them in ChromaDB. When the user asks a question, the backend retrieves the most relevant chunks and passes them into the model as context. That creates a proper retrieval augmented generation flow instead of a general chat flow. It also allows the app to support citations, page navigation, and highlighted source jumps back into the document.
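The retrieval step at the heart of that flow can be sketched in a few lines. This toy version ranks chunks by cosine similarity against the query embedding, which is essentially what ChromaDB does internally at scale; the two-dimensional vectors are obviously just for illustration:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec: list[float], chunk_vecs: list[list[float]],
             chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query embedding.
    The retrieved chunks are then passed to the model as context,
    which is what keeps answers grounded in the uploaded document."""
    scored = sorted(zip(chunks, chunk_vecs),
                    key=lambda cv: cosine(query_vec, cv[1]),
                    reverse=True)
    return [chunk for chunk, _ in scored[:k]]
```

Because each stored chunk can carry its page number as metadata, the same lookup that grounds the answer also tells the UI which page to jump back to.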

That source connection was important to me because climate literacy is not only about getting an answer. It is about trusting where the answer came from. If someone is preparing an Earth Day session, a school lesson, or a community discussion, they need to be able to go back to the original material and verify what they are using.

For the structure layer, I wanted to go beyond a single generated course outline. That is why Teach-In Builder creates both a pathway and a mind map. Those two views do slightly different jobs. The pathway helps the user think in sequence, while the mind map helps the user think in connections. Once a module is opened, the system expands it into Guide, Practice Lab, and Field Media so the topic becomes something a person can actually work through and teach from. This part of the build was about making generated content feel usable, not just impressive.

For the facilitate layer, I adapted the live assistant flow into a planning companion for real Earth Day events. The user starts with a structured brief, then moves into a live Gemini session where the focus is on audience fit, session flow, materials, and next steps. Function calling is used to turn that planning flow into a structured output with learning objectives, agenda, materials, and community actions. I wanted this to feel like an event planning tool, not a generic live chat demo.
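Gemini's function calling works by giving the model an OpenAPI-style function declaration it can "call" with structured arguments. Below is a sketch of what a declaration for the teach-in plan might look like; the function and field names are my assumptions from the sections described above, not the project's exact schema:

```python
# Hypothetical function declaration in the OpenAPI-subset shape
# that Gemini function calling expects. The model fills in these
# fields instead of replying with free-form text, which is what
# makes the plan reliably parseable into a report.
CREATE_TEACH_IN_PLAN = {
    "name": "create_teach_in_plan",
    "description": "Record the structured plan for an Earth Day teach-in session.",
    "parameters": {
        "type": "object",
        "properties": {
            "summary": {"type": "string"},
            "objectives": {"type": "array", "items": {"type": "string"}},
            "agenda": {"type": "array", "items": {"type": "string"}},
            "materials": {"type": "array", "items": {"type": "string"}},
            "community_actions": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["summary", "objectives", "agenda"],
    },
}
```

When the model invokes this function during the live session, the backend receives the arguments as structured data, which feeds directly into the summary view and the downloadable report.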

Gemini is the key technology across the whole project. I used it for live voice interaction, multimodal context, structured teach-in planning, and document assistance when paired with retrieval. In Climate Library, Gemini helps turn indexed PDFs into an interactive learning experience. In Teach-In Builder, it supports turning broad topics into structured educational pathways. In Teach-In Facilitator, it helps shape a real Earth Day session that can actually be delivered to an audience.

The shared live control tray also became a meaningful part of the product. The microphone supports a more natural planning and exploration flow. Screen share makes the assistant useful for real materials, not just typed prompts. Webcam adds live visual context to the session. Together, those controls make the app feel more like a working multimodal study and facilitation environment.

I also spent time on the interface direction because I did not want the project to feel like a generic AI dashboard. I moved away from a loud or overly synthetic look and shaped it into something more like an editorial field guide for climate learning. The goal was to make the experience feel grounded, readable, and specific to Earth Day rather than looking like a general-purpose AI tool with green branding.

The biggest thing I learned while building this was that climate education is a strong and practical Earth Day direction. Many projects in this space focus only on tracking or visualization. Those are useful, but I wanted to build around the human part of environmental action: understanding the material, organizing it, and helping other people learn from it. That is where I think this project is strongest.

Prize Categories

I am submitting this project for Best Use of Google Gemini.

Gemini is central to the project, not an extra layer added on top. It powers the live voice interaction, the multimodal reasoning over shared visual context, the structured teach-in planning flow, and the grounded assistance in the document workflow when combined with retrieval.

This is a solo submission.
