<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Deeksha Mahara </title>
    <description>The latest articles on DEV Community by Deeksha Mahara  (@deeksha_mahara).</description>
    <link>https://dev.to/deeksha_mahara</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3642298%2Fbc782c85-fb54-4594-8953-da89bca9de1c.png</url>
      <title>DEV Community: Deeksha Mahara </title>
      <link>https://dev.to/deeksha_mahara</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/deeksha_mahara"/>
    <language>en</language>
    <item>
      <title>🎬 ProducerAI: The Real-Time Generative Video Studio</title>
      <dc:creator>Deeksha Mahara </dc:creator>
      <pubDate>Fri, 02 Jan 2026 05:54:29 +0000</pubDate>
      <link>https://dev.to/deeksha_mahara/producerai-the-real-time-generative-video-studio-2139</link>
      <guid>https://dev.to/deeksha_mahara/producerai-the-real-time-generative-video-studio-2139</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/mux-2025-12-03"&gt;DEV's Worldwide Show and Tell Challenge Presented by Mux&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Built&lt;/strong&gt;&lt;br&gt;
ProducerAI is a next-generation video studio concept that bridges the gap between Generative AI and Instant Streaming.&lt;/p&gt;

&lt;p&gt;Instead of the traditional "Prompt → Wait for Render → Download" workflow, ProducerAI uses an intelligent "Director Agent" that analyzes creative intent (e.g., "I need a cyberpunk city") and instantly broadcasts the corresponding asset using Mux's zero-latency infrastructure.&lt;/p&gt;

&lt;p&gt;It transforms video generation from a passive waiting game into an active, real-time broadcasting experience. ✨&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Pitch Video&lt;/strong&gt;&lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://stream.mux.com/B1oGIZ01ZvUKU3uptb2mGOvMzdAnZCjofAsPRKWwBeNk.m3u8" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;stream.mux.com&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


 

&lt;p&gt;&lt;strong&gt;Demo&lt;/strong&gt;&lt;br&gt;
Vercel Link: &lt;a href="https://producer-ai-a251-69h153dn4-deeksha-maharas-projects.vercel.app/" rel="noopener noreferrer"&gt;https://producer-ai-a251-69h153dn4-deeksha-maharas-projects.vercel.app/&lt;/a&gt;&lt;br&gt;
GitHub Repo: &lt;a href="https://github.com/deeksha-mahara/producer-ai" rel="noopener noreferrer"&gt;https://github.com/deeksha-mahara/producer-ai&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Story Behind It&lt;/strong&gt;&lt;br&gt;
We are entering the era of Generative Video (Sora, Runway, etc.), but the user experience is still stuck in the past. We treat AI videos as heavy files that take minutes to render and download. This creates a "Render Gap" that kills the creative flow.&lt;/p&gt;

&lt;p&gt;I built ProducerAI to prove that the future isn't just about generating video—it's about streaming it.&lt;/p&gt;

&lt;p&gt;"What if an AI Studio felt like a Live Broadcast?" 🎥&lt;/p&gt;

&lt;p&gt;By combining a Next.js Chat Interface with Mux's HLS streaming, I created a prototype where the interface feels alive. The goal was to build a UI that feels like the cockpit of a sci-fi spaceship, where a Director commands an AI, and the screen responds instantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Highlights&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Use of Mux&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This project relies entirely on Mux to deliver its value proposition. Without Mux, this would just be a slow video gallery.&lt;/p&gt;

&lt;p&gt;I utilized the Mux ecosystem in three specific ways:&lt;/p&gt;

&lt;p&gt;1. &lt;strong&gt;Instant Playback Switching&lt;/strong&gt;: The core feature of the app is the ability to swap video sources dynamically based on AI chat responses. I used @mux/mux-player-react because it handles rapid source changes without the heavy buffering found in standard HTML5 players.&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;HLS Infrastructure&lt;/strong&gt;: The app treats video assets not as files, but as streams. By utilizing Mux's HLS delivery, the application maintains a "Live" feel, even for pre-rendered content.&lt;/p&gt;

&lt;p&gt;3. &lt;strong&gt;Stream Telemetry&lt;/strong&gt;: In the Dashboard view, I built a "Stream Data" visualization that highlights the technical metrics Mux provides (bitrate, resolution, latency), showcasing the importance of visibility in video infrastructure.&lt;/p&gt;

&lt;p&gt;The experience of building with Mux was seamless: the React component dropped right into my Next.js architecture, letting me focus on the AI logic while Mux handled the heavy lifting of video delivery. 🚀&lt;/p&gt;
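&lt;p&gt;All three points lean on the same primitive: a Mux playback ID maps to an HLS manifest URL. A minimal sketch of that mapping (the helper name is mine; the URL pattern is the stream.mux.com scheme used in the pitch video embed above):&lt;/p&gt;

```python
def hls_url(playback_id: str) -> str:
    # Mux serves HLS manifests at stream.mux.com/{playback_id}.m3u8,
    # so "swapping sources" reduces to swapping playback IDs.
    return "https://stream.mux.com/" + playback_id + ".m3u8"

print(hls_url("YOUR_PLAYBACK_ID"))
```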

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt; 💭&lt;/p&gt;

&lt;p&gt;Building ProducerAI taught me that the biggest bottleneck in Generative AI isn't creation anymore—it's delivery.&lt;/p&gt;

&lt;p&gt;We often treat AI videos as heavy files that need to be downloaded, but Mux allowed me to reimagine them as lightweight, instant streams. This project proves that when you combine Next.js 14 with a robust video infrastructure, the line between "generating" and "broadcasting" disappears.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this look into the future of the Creator Economy!&lt;/p&gt;

&lt;p&gt;Thanks for reading and happy hacking! ✨🚀&lt;/p&gt;

&lt;p&gt;Let me know below if you liked it. Thoughts are always welcome. ✅&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>muxchallenge</category>
      <category>showandtell</category>
      <category>video</category>
    </item>
    <item>
      <title>How I Built a Disaster Resource Coordination Agent (DRCA) with Google Gemini</title>
      <dc:creator>Deeksha Mahara </dc:creator>
      <pubDate>Tue, 09 Dec 2025 12:38:22 +0000</pubDate>
      <link>https://dev.to/deeksha_mahara/how-i-built-a-disaster-resource-coordination-agent-drca-with-google-gemini-1m4j</link>
      <guid>https://dev.to/deeksha_mahara/how-i-built-a-disaster-resource-coordination-agent-drca-with-google-gemini-1m4j</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;-&lt;br&gt;
Disaster response is one of the most critical areas where speed and coordination matter. When I joined the Google &amp;amp; Kaggle 5-Day AI Agents Intensive, I didn't just want to build another chatbot. I wanted to build a system that could actually do something in a high-stakes environment.&lt;/p&gt;

&lt;p&gt;My Capstone Project, DRCA (Disaster Resource Coordination Agent), is a multi-agent system designed to streamline rescue operations by coordinating logistics, geolocation, and resource allocation instantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning Reflections&lt;/strong&gt;&lt;br&gt;
Before this course, I viewed AI largely as a "Knowledge Engine"—something you ask questions to. The biggest shift in my mental model was understanding AI as an "Action Engine."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Concepts That Resonated&lt;/strong&gt;&lt;br&gt;
🛠️ Tool Use (Function Calling): The moment I connected Gemini to the Google Maps API, everything changed. Realizing that an LLM can "decide" to call a function, get coordinates, and then use that data to make a logistics decision was a breakthrough. It turned the model from a writer into a router.&lt;/p&gt;
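&lt;p&gt;That tool-use loop can be sketched in a few lines. This is a stubbed illustration (the registry, helper names, and hard-coded "decision" are mine; the real project wires Gemini's function-calling output into this step):&lt;/p&gt;

```python
TOOLS = {}

def tool(fn):
    # Register a function so the "model" can choose to call it.
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_coordinates(place: str):
    # Stand-in for the Google Maps lookup the agent performs.
    return {"Sector 4": (28.61, 77.21)}.get(place)

def execute_tool_call(decision):
    # The model emits a function name plus arguments; we run it and
    # would feed the result back into the next reasoning turn.
    name, args = decision
    return TOOLS[name](**args)

coords = execute_tool_call(("get_coordinates", {"place": "Sector 4"}))
```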

&lt;p&gt;🤖Multi-Agent Orchestration: Building DRCA taught me that one giant prompt is rarely the answer. Splitting the brain into a "Triage Agent" (who assesses urgency) and a "Logistics Agent" (who finds the route) made the system far more reliable and debuggable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How My Understanding Evolved&lt;/strong&gt;&lt;br&gt;
I used to think the smartness of an agent came from the model size. Now I realize the smartness comes from the architecture. A smaller model with well-defined tools and a clear system prompt often outperforms a larger model with a vague instruction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capstone Project: DRCA&lt;/strong&gt;&lt;br&gt;
The Problem-&lt;br&gt;
In the chaos of a disaster (flood, earthquake), human dispatchers are overwhelmed. Matching a victim's request ("We are stuck on the roof at Sector 4") to available resources ("Boat A is 2km away") takes too long manually.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Solution&lt;/strong&gt;&lt;br&gt;
DRCA is an automated dispatcher that uses a multi-agent workflow to:&lt;/p&gt;

&lt;p&gt;Parse incoming distress messages (even in multiple languages).&lt;/p&gt;

&lt;p&gt;Geolocate the incident using mapping tools.&lt;/p&gt;

&lt;p&gt;Calculate the distance to the nearest available rescue unit.&lt;/p&gt;

&lt;p&gt;Dispatch the unit and update the status.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How It Works&lt;/strong&gt; (The Architecture)&lt;br&gt;
I built this using Google Gemini Pro as the reasoning engine. The system is composed of three primary agents:&lt;/p&gt;

&lt;p&gt;🗣️&lt;em&gt;Intake Agent:&lt;/em&gt; Handles user communication and extracts intent.&lt;/p&gt;

&lt;p&gt;🗺️&lt;em&gt;Geospatial Agent:&lt;/em&gt; Interfaces with map data to calculate distances and routes.&lt;/p&gt;

&lt;p&gt;🚚&lt;em&gt;Dispatch Agent:&lt;/em&gt; Assigns resources based on the Geospatial Agent's data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkqqfodlc38aoty6e2nu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmkqqfodlc38aoty6e2nu.png" alt="drca" width="800" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Logic (Code Snippet)&lt;/strong&gt;&lt;br&gt;
One of the most challenging parts was getting the agent to reliably choose the "closest" unit. Here is a simplified look at the routing logic:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The Dispatch Agent evaluates candidates based on distance
def find_nearest_unit(incident_loc, available_units):
    candidates = []
    for unit in available_units:
        # The agent uses a tool to calculate real-world distance
        distance = calculate_distance(incident_loc, unit.location)
        candidates.append((unit, distance))

    # Sort by distance and return the optimal unit
    return sorted(candidates, key=lambda x: x[1])[0]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
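&lt;p&gt;A quick way to exercise find_nearest_unit with stubbed data (the Unit shape and the Manhattan-distance stand-in for calculate_distance are illustrative; the real agent calls a mapping tool for distances):&lt;/p&gt;

```python
from collections import namedtuple

Unit = namedtuple("Unit", ["name", "location"])

def calculate_distance(a, b):
    # Toy stand-in: Manhattan distance between coordinate pairs.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def find_nearest_unit(incident_loc, available_units):
    candidates = []
    for unit in available_units:
        distance = calculate_distance(incident_loc, unit.location)
        candidates.append((unit, distance))
    # Sort by distance and return the optimal unit
    return sorted(candidates, key=lambda x: x[1])[0]

units = [Unit("Boat A", (0, 2)), Unit("Boat B", (5, 5))]
nearest, distance = find_nearest_unit((0, 0), units)
```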

&lt;p&gt;&lt;strong&gt;Demo Video&lt;/strong&gt;&lt;br&gt;
Here is the system in action. You can see the agents handing off tasks from Triage to Dispatch in real time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://youtu.be/JJ7wVBwHw9A" rel="noopener noreferrer"&gt;https://youtu.be/JJ7wVBwHw9A&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Future Improvements&lt;/strong&gt;&lt;br&gt;
While the current version works for text-based coordination, I plan to integrate:&lt;/p&gt;

&lt;p&gt;🎙️ Voice Inputs: Allowing victims to send audio messages that are transcribed and processed.&lt;/p&gt;

&lt;p&gt;👁️ Vision Capabilities: Allowing the agent to analyze images of the disaster scene to estimate severity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
The 5-Day Intensive was a crash course in the future of software. Building DRCA showed me that we are moving away from writing code that solves problems toward writing code that manages agents that solve problems.&lt;/p&gt;

&lt;p&gt;🙏 Thanks for Reading!&lt;br&gt;
If you found this breakdown helpful, please drop a reaction (❤️/🦄) below; it helps others find this guide!&lt;br&gt;
I’d love to hear your thoughts:&lt;/p&gt;

&lt;p&gt;Have you tried building multi-agent systems yet?&lt;/p&gt;

&lt;p&gt;What tools are you using for your "Agentic" workflows?&lt;/p&gt;

&lt;p&gt;Let’s discuss👇&lt;/p&gt;

</description>
      <category>googleaichallenge</category>
      <category>ai</category>
      <category>agents</category>
      <category>devchallenge</category>
    </item>
  </channel>
</rss>
