<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: DHRUVA WANI</title>
    <description>The latest articles on DEV Community by DHRUVA WANI (@dhruva_wani_17).</description>
    <link>https://dev.to/dhruva_wani_17</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3733497%2Ff9330d91-6994-4e9d-baa3-23a54519ca20.jpg</url>
      <title>DEV Community: DHRUVA WANI</title>
      <link>https://dev.to/dhruva_wani_17</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dhruva_wani_17"/>
    <language>en</language>
    <item>
      <title>Beyond the Chatbot: A First Look at the Gemini Agent Development Kit (ADK)</title>
      <dc:creator>DHRUVA WANI</dc:creator>
      <pubDate>Wed, 29 Apr 2026 07:53:00 +0000</pubDate>
      <link>https://dev.to/dhruva_wani_17/beyond-the-chatbot-a-first-look-at-the-gemini-agent-development-kit-adk-1kdn</link>
      <guid>https://dev.to/dhruva_wani_17/beyond-the-chatbot-a-first-look-at-the-gemini-agent-development-kit-adk-1kdn</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-cloud-next-2026-04-22"&gt;Google Cloud NEXT Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Google Cloud NEXT '26 has made one thing abundantly clear: we are officially shifting from the "Chatbot Era" to the "Agentic Era."&lt;/p&gt;

&lt;p&gt;When building complex applications—especially those integrating multi-modal AI or vision agents with sleek user interfaces—the biggest bottleneck has always been orchestration. We've been missing a standardized way to bridge the gap between AI generating text and AI actually &lt;em&gt;doing&lt;/em&gt; things. The newly announced &lt;strong&gt;Gemini Agent Development Kit (ADK)&lt;/strong&gt; looks to be exactly that bridge. &lt;/p&gt;

&lt;p&gt;Here is my first look at the ADK, how it works, and why it is about to change how we architect cloud infrastructure.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;💡 What is the Gemini ADK?&lt;/strong&gt;&lt;br&gt;
At its core, the ADK is an open-source framework designed to help developers build autonomous agents. Instead of just prompting an LLM to generate a script, you can empower an agent to update a database, trigger a CI/CD workflow, or interact with a legacy API autonomously.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy71gckvw35ibm59ls7va.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy71gckvw35ibm59ls7va.png" alt="Getting Started with ADK" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠 Getting Started: The Workflow
&lt;/h2&gt;

&lt;p&gt;The new kit formalizes the agent creation process into a clean, developer-friendly workflow. If you're used to spinning up backend logic in Python or deploying full-stack apps, the orchestration syntax will feel right at home.&lt;/p&gt;

&lt;p&gt;
Getting started is as simple as pulling the open-source package:



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
pip install google-adk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;/p&gt;

&lt;p&gt;The development lifecycle breaks down into three core phases:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Define the Goal:&lt;/strong&gt; You start by defining a "Mission" for your agent. What is its ultimate objective?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tool Wiring:&lt;/strong&gt; Next, you connect the agent to the Agentic Data Cloud, providing it with the specific APIs, databases, and permissions it needs to complete its mission.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt; You package the agent into a container and push it to production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxji3y5hjt7c5c9wj45r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxji3y5hjt7c5c9wj45r.png" alt="Develop/Package/Deploy flow" width="800" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once packaged, deployment is remarkably flexible: you can run the agent on the new Vertex AI Agent Engine, on custom infrastructure, or push it directly to Cloud Run. Deploying to Cloud Run feels like a natural extension for anyone who already relies on it for hosting fast, scalable React or Next.js web apps.&lt;/p&gt;

&lt;p&gt;💻 The Developer Experience: Testing Locally&lt;br&gt;
What really stood out to me is how native the local development experience feels. The ADK sets you up with a clean, standard Python file structure (&lt;code&gt;agent.py&lt;/code&gt;, &lt;code&gt;.env&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Once you set up your virtual environment, you can run the &lt;code&gt;adk web&lt;/code&gt; command. Under the hood, this spins up a local Uvicorn server on port 8000 with a built-in chat interface for immediate testing. If you are accustomed to building modern Python web backends, this setup loop will feel seamless.&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1sij0p855c9fpei9l0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1sij0p855c9fpei9l0s.png" alt="ADK file structure and local testing interface" width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the example above, you can see the true power of tool wiring. The agent isn't just guessing; it uses a custom &lt;code&gt;get_vm_issue_details_from_logs&lt;/code&gt; Python function to actively query Google Cloud Logging, parse the specific &lt;code&gt;compute.instances.stop&lt;/code&gt; audit log entry, and return exactly who (or which API call) spun down the VM. It turns your IDE into a functional command center.&lt;/p&gt;
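&lt;p&gt;The screenshot only hints at what such a tool looks like. As a purely illustrative sketch (the field names follow the public Cloud Audit Logs LogEntry shape, but the function body here is my assumption, not the article's actual code), a tool like this boils down to a small parser over a log entry:&lt;/p&gt;

```python
# Illustrative sketch, not the article's actual tool code.
# Field names follow the public Cloud Audit Logs LogEntry shape.
def get_vm_issue_details_from_logs(entry: dict) -> dict:
    proto = entry.get("protoPayload", {})
    if proto.get("methodName", "").endswith("compute.instances.stop"):
        auth = proto.get("authenticationInfo", {})
        return {
            "stopped": True,
            "actor": auth.get("principalEmail", "unknown"),
            "instance": entry.get("resource", {}).get("labels", {}).get("instance_id", ""),
            "time": entry.get("timestamp", ""),
        }
    return {"stopped": False}
```

&lt;p&gt;An agent wired to a tool like this can answer "who stopped the VM?" with grounded log data instead of a guess.&lt;/p&gt;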

&lt;p&gt;🔒 Agent Identity: Security First&lt;br&gt;
If you are going to let an AI loose in your cloud environment, observability is paramount. One of the standout features of the ADK isn't just what the agents can do, but how they are tracked.&lt;/p&gt;

&lt;p&gt;Agents in the ADK are assigned their own traceable identities. If an agent tries to modify a production database or interact with a sensitive storage bucket, the system allows you to trace exactly which agent executed the action and audit the reasoning loop that led to that decision.&lt;/p&gt;

&lt;p&gt;🧠 The Evolution of Context&lt;br&gt;
We've been steadily moving along an evolutionary track. We started with basic prompt-and-response LLMs, moved to Retrieval-Augmented Generation (RAG) to ground models in fact, and then began adding basic tools.&lt;/p&gt;

&lt;p&gt;Now, as highlighted in the keynote, we are entering the realm of complex reasoning loops and multi-agent systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyjk2dqmt2evvg3zejud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhyjk2dqmt2evvg3zejud.png" alt="LLM evolution" width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🤔 The Critique: Can it handle the latency?&lt;br&gt;
Google is clearly providing the infrastructure to treat AI as an autonomous worker rather than just an assistant. The shift from Vertex AI Search to Agent Studio suggests that every developer is about to become an "orchestrator" of specialized agents.&lt;/p&gt;

&lt;p&gt;However, latency remains a massive question mark.&lt;/p&gt;

&lt;p&gt;Running a multi-agent system that needs to "think," query a Cross-Cloud Lakehouse on AWS, and then execute an action back on GCP introduces significant round-trip delays. While Google's hardware is top-tier, the new TPU 8i inference speeds face a real trial by fire: can they handle these multi-step reasoning loops in real time without timing out or creating sluggish user experiences?&lt;/p&gt;

&lt;p&gt;🚀 Wrapping Up&lt;br&gt;
"Generative AI" is rapidly just becoming standard "Cloud Computing."&lt;/p&gt;

&lt;p&gt;If you aren't building agents yet, &lt;code&gt;google-adk&lt;/code&gt; seems like the best, most structured place to start. It takes the abstract concept of "AI agents" and grounds it in the familiar territory of containers, cloud deployments, and standard libraries.&lt;/p&gt;

&lt;p&gt;What NEXT '26 announcement are you most excited to build with? Let's discuss in the comments! 👇&lt;/p&gt;



</description>
      <category>devchallenge</category>
      <category>cloudnextchallenge</category>
      <category>googlecloud</category>
    </item>
    <item>
      <title>From Webcam to Wellness: Building a Real-Time AI Assistant for Students</title>
      <dc:creator>DHRUVA WANI</dc:creator>
      <pubDate>Sun, 01 Mar 2026 06:59:07 +0000</pubDate>
      <link>https://dev.to/dhruva_wani_17/from-webcam-to-wellness-building-a-real-time-ai-assistant-for-students-2jj6</link>
      <guid>https://dev.to/dhruva_wani_17/from-webcam-to-wellness-building-a-real-time-ai-assistant-for-students-2jj6</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/weekend-2026-02-28"&gt;DEV Weekend Challenge: Community&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Community
&lt;/h2&gt;

&lt;p&gt;This project is built for my college community.&lt;br&gt;
In my college, students spend 8–12 hours daily in front of screens — coding, studying, preparing for placements, and meeting deadlines.&lt;br&gt;
Over time, this leads to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chronic poor posture&lt;/li&gt;
&lt;li&gt;Fatigue&lt;/li&gt;
&lt;li&gt;Stress-related breathing patterns&lt;/li&gt;
&lt;li&gt;Reduced physical awareness&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The problem? Most students don’t realize it until discomfort becomes pain.&lt;/p&gt;

&lt;p&gt;I wanted to build something proactive — a system that checks in before the damage happens.&lt;/p&gt;
&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;



&lt;p&gt;I built a Real-Time AI Medical Wellness Assistant that transforms a simple video call into a quick wellness check.&lt;/p&gt;

&lt;p&gt;In just 8–10 seconds, the assistant:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Analyzes posture (slouching, shoulder imbalance, head tilt)&lt;/li&gt;
&lt;li&gt;Estimates breathing rate using chest movement&lt;/li&gt;
&lt;li&gt;Detects visible fatigue indicators&lt;/li&gt;
&lt;li&gt;Provides empathetic verbal feedback&lt;/li&gt;
&lt;li&gt;Generates a structured PDF wellness report&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is not a medical diagnostic tool — it is a preventive awareness system designed to help students self-correct early.&lt;/p&gt;

&lt;p&gt;The goal is simple:&lt;br&gt;
Make wellness accessible, instant, and frictionless.&lt;/p&gt;
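
&lt;p&gt;To make the breathing-rate idea concrete, here is a deliberately simplified, illustrative sketch. It is not the project's actual pipeline (which derives chest positions from YOLO pose keypoints); it just shows the core idea of counting oscillation peaks over a known frame rate:&lt;/p&gt;

```python
# Toy sketch of breathing-rate estimation, assuming chest_y is a list of
# per-frame vertical chest positions and fps is the camera frame rate.
# The real project derives these positions from YOLO pose keypoints.
def estimate_breathing_rate(chest_y, fps):
    mean = sum(chest_y) / len(chest_y)
    centred = [y - mean for y in chest_y]
    # Each strict local maximum of the centred signal is one breath cycle.
    peaks = 0
    for a, b, c in zip(centred, centred[1:], centred[2:]):
        if b > a and b > c:
            peaks += 1
    duration_s = len(chest_y) / fps
    return peaks * 60.0 / duration_s  # breaths per minute
```

&lt;p&gt;A real implementation would smooth the signal first, since raw keypoints jitter enough to create spurious peaks.&lt;/p&gt;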
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;



&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/xvB0jJFVnRI"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/dhruvawani17" rel="noopener noreferrer"&gt;
        dhruvawani17
      &lt;/a&gt; / &lt;a href="https://github.com/dhruvawani17/video-ai" rel="noopener noreferrer"&gt;
        video-ai
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;AI medical Wellness Assistnant 🩺&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;&lt;strong&gt;Wellness Assistant&lt;/strong&gt; is a real-time, AI-powered Medical Wellness Video Assistant. It provides empathetic, non-diagnostic wellness insights by analyzing a user's physical, respiratory, and emotional markers through a live video feed using multimodal AI models.&lt;/p&gt;

&lt;p&gt;Built with &lt;strong&gt;FastAPI&lt;/strong&gt;, &lt;strong&gt;vision-agents&lt;/strong&gt;, and &lt;strong&gt;WebSockets&lt;/strong&gt;, VitalsAI acts as a proactive wellness companion, capable of observing posture, estimating breathing patterns, and providing instant, conversational voice feedback.&lt;/p&gt;




&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;✨ Features&lt;/h2&gt;
&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Video Analysis:&lt;/strong&gt; Uses WebRTC and WebSockets to process live camera feeds.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Posture &amp;amp; Kinematics Assessment:&lt;/strong&gt; Leverages YOLOv11 (&lt;code&gt;yolo11n-pose.pt&lt;/code&gt;) to detect spinal alignment, shoulder symmetry, and physical strain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal AI Companion:&lt;/strong&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Vision:&lt;/strong&gt; Google Gemini &amp;amp; Ultralytics for visual reasoning and pose estimation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Speech-to-Text:&lt;/strong&gt; Deepgram for real-time transcription.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text-to-Speech:&lt;/strong&gt; ElevenLabs for a calm, clinical, and friendly voice assistant.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Live Dashboard:&lt;/strong&gt; Real-time insights displayed in a unified HTML/JS dashboard.&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Session Reports:&lt;/strong&gt; Automatically generates a downloadable PDF summary of the wellness session.&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;…&lt;/div&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/dhruvawani17/video-ai" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;



&lt;p&gt;The code includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time video orchestration&lt;/li&gt;
&lt;li&gt;Pose estimation pipeline&lt;/li&gt;
&lt;li&gt;Multimodal AI reasoning logic&lt;/li&gt;
&lt;li&gt;Speech-to-text and text-to-speech integration&lt;/li&gt;
&lt;li&gt;PDF report generation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;This project is powered by a real-time multimodal AI architecture.&lt;/p&gt;

&lt;p&gt;Core stack:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Vision Agents SDK (agent orchestration layer)&lt;/li&gt;
&lt;li&gt;GetStream (WebRTC video communication)&lt;/li&gt;
&lt;li&gt;YOLO Pose Estimation (for skeletal keypoints)&lt;/li&gt;
&lt;li&gt;Gemini Multimodal LLM (reasoning over visual + text data)&lt;/li&gt;
&lt;li&gt;Deepgram (speech-to-text)&lt;/li&gt;
&lt;li&gt;ElevenLabs (text-to-speech)&lt;/li&gt;
&lt;li&gt;FastAPI (backend server)&lt;/li&gt;
&lt;li&gt;React (frontend interface)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Vision Agents handled the event-driven coordination between video, audio, pose extraction, and AI reasoning — allowing me to focus on designing the wellness intelligence layer instead of managing low-level streaming and inference pipelines.&lt;/p&gt;

&lt;p&gt;If a simple 10-second check-in can improve a student’s posture, focus, or stress awareness — it’s worth building.&lt;br&gt;
Technology should reduce friction, not add to it.&lt;br&gt;
This is my step toward making AI truly helpful.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>weekendchallenge</category>
      <category>showdev</category>
    </item>
    <item>
      <title>MedGuard: Secure Clinical Intelligence</title>
      <dc:creator>DHRUVA WANI</dc:creator>
      <pubDate>Sun, 15 Feb 2026 08:04:04 +0000</pubDate>
      <link>https://dev.to/dhruva_wani_17/medguard-secure-clinical-intelligence-1ei5</link>
      <guid>https://dev.to/dhruva_wani_17/medguard-secure-clinical-intelligence-1ei5</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/github-2026-01-21"&gt;GitHub Copilot CLI Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;I built MedGuard, a secure clinical intelligence platform that bridges the gap between patient privacy and the power of modern AI.&lt;/p&gt;

&lt;p&gt;In the medical field, seconds matter, but so does privacy. Doctors are often stuck between outdated software and powerful AI tools they can't legally use due to HIPAA and GDPR regulations. I wanted to solve this paradox: How can we give doctors access to state-of-the-art LLMs in real-time without ever exposing patient data?&lt;/p&gt;

&lt;p&gt;MedGuard is the answer. It is a "Zero-Trust" AI middleware that acts as a firewall for clinical data.&lt;/p&gt;

&lt;p&gt;Here is how I architected the solution:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Privacy Firewall:&lt;/strong&gt; I built a hybrid sanitization engine using Microsoft Presidio and custom regex patterns. This layer automatically strips names, MRNs, and dates from PDF reports and scanned notes before they leave the hospital's local environment.&lt;/p&gt;
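
&lt;p&gt;As a hedged illustration of the regex half of that firewall (these patterns are examples written for this post, not MedGuard's production rules, which also run Microsoft Presidio), a minimal redactor might look like:&lt;/p&gt;

```python
import re

# Example PII patterns: an MRN-labelled number, a slash-separated date,
# and a basic email. These are illustrative, not MedGuard's actual rules.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_pii(text: str) -> str:
    # Replace each match with a bracketed label so the document stays readable.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

&lt;p&gt;Regex alone misses context-dependent identifiers like names, which is exactly the gap a statistical recognizer such as Presidio covers.&lt;/p&gt;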

&lt;p&gt;&lt;strong&gt;The Speed of Cerebras:&lt;/strong&gt; To make this viable for emergency rooms, I couldn't afford slow inference. I integrated the Cerebras Inference Cloud (Llama-3.3-70b), which allows MedGuard to analyze complex medical histories and generate triage recommendations in milliseconds, not seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Governance via Archestra:&lt;/strong&gt; I didn't want a "hallucinating" AI. I used Archestra as my central orchestrator to manage BioMCP (Bio-Medical Control Protocol). Archestra ensures that every AI response is grounded in verified medical protocols (like OpenFDA and AHA guidelines) and monitors the system for data exfiltration attempts and token costs.&lt;/p&gt;

&lt;p&gt;What it means to me:&lt;br&gt;
Building MedGuard wasn't just about connecting APIs; it was about proving that we don't have to compromise on security to innovate in healthcare. By combining the raw speed of Cerebras with the governance of Archestra, I’ve created a prototype that demonstrates how AI can be safely deployed in sensitive industries today.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://huggingface.co/spaces/dhruvawani17/medguardpro" rel="noopener noreferrer"&gt;https://huggingface.co/spaces/dhruvawani17/medguardpro&lt;/a&gt;&lt;br&gt;
&lt;a href="https://youtu.be/9EX1pynXZKc" rel="noopener noreferrer"&gt;https://youtu.be/9EX1pynXZKc&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  My Experience with GitHub Copilot CLI
&lt;/h2&gt;

&lt;p&gt;Using GitHub Copilot CLI transformed my terminal from a simple command executor into an intelligent pair programmer. Instead of constantly context-switching between my code editor and browser documentation, I could stay in the flow and resolve complex infrastructure challenges directly in the command line.&lt;/p&gt;

&lt;p&gt;Key ways it impacted my development:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Taming the Regex Beast:&lt;/strong&gt; Writing robust regular expressions for PII redaction is notoriously difficult and error-prone. I used Copilot CLI to generate precise patterns for catching medical record numbers, varying date formats (e.g., "12/05/1984" vs "Feb 14, 2026"), and email addresses. A simple query like &lt;code&gt;?? "regex python to match medical record numbers and dates"&lt;/code&gt; gave me a solid foundation that I could immediately integrate into my &lt;code&gt;redact_pii&lt;/code&gt; function.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streamlining Docker Deployment:&lt;/strong&gt; Deploying a Python app with system-level dependencies like Tesseract and Poppler is tricky. When my build failed due to missing Linux libraries (&lt;code&gt;libgl1&lt;/code&gt;), Copilot CLI was invaluable. I could ask &lt;code&gt;?? "how to install tesseract and poppler in python slim docker image"&lt;/code&gt; and it suggested the correct &lt;code&gt;apt-get&lt;/code&gt; commands and the switch to &lt;code&gt;python:3.9-slim-bookworm&lt;/code&gt;, saving me hours of debugging "dependency hell."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rapid Prototyping:&lt;/strong&gt; For the Streamlit UI, Copilot CLI helped me scaffold the layout commands quickly. I used it to recall the syntax for complex Streamlit widgets like &lt;code&gt;st.data_editor&lt;/code&gt; and column layouts without needing to dig through the docs.&lt;/p&gt;

&lt;p&gt;Impact:&lt;br&gt;
Copilot CLI didn't just write code; it acted as a DevOps engineer and a Regex specialist. It significantly reduced my debugging time, allowing me to focus on the core logic of MedGuard—security and clinical accuracy—rather than getting bogged down in syntax and configuration errors.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>githubchallenge</category>
      <category>cli</category>
      <category>githubcopilot</category>
    </item>
    <item>
      <title>The Agentic Developer: Orchestrating My 2026 Portfolio with Google Antigravity &amp; Gemini 3</title>
      <dc:creator>DHRUVA WANI</dc:creator>
      <pubDate>Mon, 26 Jan 2026 18:52:30 +0000</pubDate>
      <link>https://dev.to/dhruva_wani_17/the-agentic-developer-orchestrating-my-2026-portfolio-with-google-antigravity-gemini-3-48o0</link>
      <guid>https://dev.to/dhruva_wani_17/the-agentic-developer-orchestrating-my-2026-portfolio-with-google-antigravity-gemini-3-48o0</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/new-year-new-you-google-ai-2025-12-31"&gt;New Year, New You Portfolio Challenge Presented by Google AI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About Me
&lt;/h2&gt;

&lt;p&gt;Hi! I'm a developer based in Mumbai with a deep passion for AI. Lately, I've been exploring the intersection of web development and Artificial Intelligence. I love trying new AI tools to accelerate my workflow and complete tasks at a faster pace.&lt;/p&gt;

&lt;p&gt;I am the author of "The Secrets To Master Your Mind", which I published at the age of 14. Currently, I am enrolled as a student at K.J. Somaiya Institute of Technology.&lt;/p&gt;

&lt;h2&gt;
  
  
  Portfolio
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://portfolio-qos4vvsi3a-uc.a.run.app" rel="noopener noreferrer"&gt;Experience it fully here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://portfolio-qos4vvsi3a-uc.a.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;p&gt;🛠️ The Tech Stack&lt;br&gt;
I wanted my portfolio to be more than just a static page—I wanted it to be an immersive, "liquid" experience. To achieve this, I used a modern, performance-focused stack, orchestrated entirely by Google Antigravity:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frontend Core&lt;/strong&gt;: React 19 (via Vite 7) for ultra-fast HMR and build times.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Styling&lt;/strong&gt;: Tailwind CSS 4 &amp;amp; PostCSS for rapid UI development.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure&lt;/strong&gt;: Docker (Multi-stage build) &amp;amp; Nginx.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hosting&lt;/strong&gt;: Google Cloud Run (Serverless container deployment).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🎨 Immersive Design &amp;amp; Animations&lt;/strong&gt;&lt;br&gt;
To create a "premium" feel, I layered several animation libraries:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Framer Motion&lt;/strong&gt;: Used for complex component animations and scroll-triggered layout reveals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GSAP&lt;/strong&gt;: Powered high-performance tweens and timelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Locomotive Scroll&lt;/strong&gt;: Enabled smooth, inertia-based scrolling to give the site weight and momentum.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WebGL &amp;amp; Shaders&lt;/strong&gt;: I implemented a custom fluid simulation (&lt;code&gt;SplashCursor.jsx&lt;/code&gt;) and particle effects (&lt;code&gt;@tsparticles/react&lt;/code&gt;) to create a background that reacts to user interaction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;☁️ Powered by Google Cloud &amp;amp; AI&lt;/strong&gt;&lt;br&gt;
This project relies heavily on the Google ecosystem for both development and deployment:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Google Cloud Run&lt;/strong&gt;: I containerized the application using Docker. The Dockerfile uses a multi-stage build (Node 18 for building → Nginx Alpine for serving) to keep the image lightweight. Deploying to Cloud Run was seamless, allowing me to scale to zero when not in use (keeping costs low) while maintaining high availability.&lt;/p&gt;
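
&lt;p&gt;For readers curious what that multi-stage setup looks like, here is a hedged sketch. Stage names, paths, and the build output directory are assumptions, not the author's actual Dockerfile, and a real Cloud Run deployment must also configure Nginx to listen on the injected PORT (8080 by default):&lt;/p&gt;

```dockerfile
# Stage 1: build the Vite bundle with Node 18
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve the static files from a tiny Nginx image
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
# A real Cloud Run config also rewrites nginx.conf to listen on 8080.
EXPOSE 8080
```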

&lt;p&gt;&lt;strong&gt;Gemini &amp;amp; AI Assistance&lt;/strong&gt;: As a Google Student Ambassador, I leverage Google's tools daily. For this project, I used Gemini 3 Pro (via Google Antigravity) to build the entire website from scratch. I generated the UI elements, animations, and styling simply by providing prompts to the Antigravity agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'm Most Proud Of
&lt;/h2&gt;

&lt;p&gt;I am most proud of finally breaking my cycle of procrastination. I had always put off building my portfolio, but this challenge proved to be the perfect platform to get started. With the help of &lt;strong&gt;Google AI&lt;/strong&gt; tools, I finally completed it.&lt;/p&gt;

&lt;p&gt;The animations generated by Antigravity were outstanding and went far beyond my imagination.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>portfolio</category>
      <category>gemini</category>
    </item>
  </channel>
</rss>
