<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ashwin Mehta</title>
    <description>The latest articles on DEV Community by Ashwin Mehta (@ashwin_mehta).</description>
    <link>https://dev.to/ashwin_mehta</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3685488%2F2f45de30-d65b-4e96-8708-24cae6ad0e37.jpg</url>
      <title>DEV Community: Ashwin Mehta</title>
      <link>https://dev.to/ashwin_mehta</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ashwin_mehta"/>
    <language>en</language>
    <item>
      <title>Meme for this week</title>
      <dc:creator>Ashwin Mehta</dc:creator>
      <pubDate>Thu, 15 Jan 2026 15:38:00 +0000</pubDate>
      <link>https://dev.to/ashwin_mehta/meme-for-this-week-4a3i</link>
      <guid>https://dev.to/ashwin_mehta/meme-for-this-week-4a3i</guid>
      <description></description>
      <category>discuss</category>
      <category>socialmedia</category>
    </item>
    <item>
      <title>Stop Chatting, Start Building: A Developer’s Guide to Google AI Studio</title>
      <dc:creator>Ashwin Mehta</dc:creator>
      <pubDate>Wed, 07 Jan 2026 19:42:20 +0000</pubDate>
      <link>https://dev.to/ashwin_mehta/stop-chatting-start-building-a-developers-guide-togoogle-ai-studio-88a</link>
      <guid>https://dev.to/ashwin_mehta/stop-chatting-start-building-a-developers-guide-togoogle-ai-studio-88a</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
We’ve all been there. You’re building a feature, you open ChatGPT or Claude, you paste in your requirements, you get some code, and then you copy-paste it back into your IDE.It works, but it’s manual. It’s brittle. And it’s hard to automate.If you are a developer, you need to stop using consumer chatbots for your workflow and start.&lt;br&gt;
using Google AI Studio. It is arguably the most underrated tool in the AI stack right now—effectively an IDE for prompt engineering that hands you API-ready code on a silver platter.&lt;br&gt;
Here is how to go from a vague idea to a running Python script in less than 5 minutes&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1 Why Google AI Studio?&lt;/strong&gt;&lt;br&gt;
Before we dive in, why switch?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It’s Fast: The “Flash” models (Gemini 1.5 and the new 2.5 Flash) are incredibly fast and cheap.&lt;/li&gt;
&lt;li&gt;Huge Context: You can paste entire codebases or hour-long videos into the prompt window (1M+ tokens).&lt;/li&gt;
&lt;li&gt;The “Get Code” Button: This is the killer feature. One click converts your playground session into Python, JavaScript, or cURL.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;2 Step 1: The Setup (No Credit Card Required)&lt;/strong&gt;&lt;br&gt;
Go to &lt;a href="https://aistudio.google.com"&gt;AI Studio&lt;/a&gt;. You can sign in with your standard Google account.&lt;br&gt;
You’ll see an interface that looks like a chatbot, but with more knobs and dials.&lt;br&gt;
• Left Panel: Your history and prompt library.&lt;br&gt;
• Middle: The prompt interface (Chat, Freeform, or Structured).&lt;br&gt;
• Right Panel: Model settings (Temperature, Safety settings).&lt;br&gt;
Pro Tip: Select Gemini 2.5 Flash (or the latest Flash model available). It is the perfect balance of intelligence and speed for most dev tasks.&lt;/p&gt;
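Those right-panel knobs map directly onto API parameters. Here is a minimal sketch of that mapping using the google-genai SDK; the temperature and token values are illustrative, and the API call only runs if a GEMINI_API_KEY is set:

```python
# Mirror AI Studio's right-panel model settings as an API config dict.
# Field names follow the google-genai SDK; values here are illustrative.
import os


def playground_config(temperature: float = 0.7, max_output_tokens: int = 1024) -> dict:
    """Build a generate_content config from the playground-style settings."""
    return {
        "temperature": temperature,          # 0 = deterministic, ~1 = more creative
        "max_output_tokens": max_output_tokens,
    }


if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    from google import genai  # pip install google-genai

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        config=playground_config(temperature=0.2),
        contents=["Explain HTTP caching in two sentences."],
    )
    print(response.text)
```

Lowering the temperature for dev-tool tasks (summaries, extraction) usually makes outputs more repeatable.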

&lt;p&gt;&lt;strong&gt;3 Step 2: Structure Your Prompt with “System Instructions”&lt;/strong&gt;&lt;br&gt;
In a standard chat app, you have to constantly remind the bot: “You are a senior Python engineer,&lt;br&gt;
don’t give me explanations, just code.”&lt;br&gt;
In AI Studio, you set this once in the “System Instructions” box at the top left.&lt;br&gt;
Example System Instruction:&lt;br&gt;
“You are a rigid data extraction assistant. You only output valid JSON. You never explain your work. If data is missing, use null.”&lt;br&gt;
Now, every message you send will adhere to these rules automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4 Step 3: The “Get Code” Workflow&lt;/strong&gt;&lt;br&gt;
Let’s build a simple tool: A Jargon Buster that takes complex tech paragraphs and simplifies them for a non-technical manager.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set your System Instruction: “You are a technical translator. Rewrite the input text to be understood by a non-technical PM.”&lt;/li&gt;
&lt;li&gt;Test it: Type “The K8s pod crashlooped because the OOMKiller terminated the container.”
→ Result: “The server kept restarting because it ran out of memory.”&lt;/li&gt;
&lt;li&gt;Export it: Look for the “Get Code” button (usually top right, near the “Run” button).
Click it, and select Python. You will get something like this:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;google&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;genai&lt;/span&gt;

&lt;span class="c1"&gt;# Make sure to set your GEMINI_API_KEY environment variable
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;genai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;GEMINI_API_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate_content&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemini-2.5-flash&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system_instruction&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a technical translator. Rewrite the input text to be understood by a non-technical PM.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;contents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The K8s pod crashlooped because the OOMKiller terminated the container.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;5 Advanced Feature: Structured Outputs (JSON Mode)&lt;/strong&gt;&lt;br&gt;
This is where AI Studio separates itself from the pack. If you are building an app, you don’t want&lt;br&gt;
text; you want JSON.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click the plus (+) icon or look for “Structured Prompt” options.&lt;/li&gt;
&lt;li&gt;Define your Schema. You can literally tell it: “I want an object with sentiment (enum: positive, negative) and keywords (list of strings).”&lt;/li&gt;
&lt;li&gt;Gemini is now forced to follow this structure. It cannot hallucinate a new key or give you a conversational intro.&lt;/li&gt;
&lt;/ol&gt;
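The same constraint is available from the API. Here is a sketch of that sentiment/keywords schema wired into a generate_content config using the google-genai SDK; the schema itself is my rendering of the example above, and the API call only runs when a GEMINI_API_KEY is set:

```python
# Structured output ("JSON mode") via the API: a JSON Schema plus a
# response_mime_type of application/json forces schema-conformant output.
import os

SENTIMENT_SCHEMA = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative"]},
        "keywords": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["sentiment", "keywords"],
}


def json_mode_config(schema: dict) -> dict:
    """Build a config that forces the model to emit JSON matching `schema`."""
    return {
        "response_mime_type": "application/json",
        "response_schema": schema,
    }


if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    from google import genai  # pip install google-genai

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        config=json_mode_config(SENTIMENT_SCHEMA),
        contents=["The deploy failed again and the logs are useless."],
    )
    print(response.text)  # JSON text you can parse with json.loads
```

Because the response is guaranteed to be parseable JSON, you can feed it straight into `json.loads` without any regex cleanup.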

&lt;p&gt;&lt;strong&gt;6 Practical Use Cases&lt;/strong&gt;&lt;br&gt;
Here are three things I’ve built using this exact workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;PR Summarizer: A script that reads a git diff and generates a bulleted summary for the Pull Request description.&lt;/li&gt;
&lt;li&gt;Error Log Analyzer: I paste a stack trace, and the model outputs the file name and line number of the likely culprit in JSON format.&lt;/li&gt;
&lt;li&gt;Meeting Notes to Tickets: I drop an audio file of a standup meeting into AI Studio (yes, it accepts audio!) and ask it to extract “Action Items” as a list.&lt;/li&gt;
&lt;/ol&gt;
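To make the first use case concrete, here is a minimal sketch of a PR Summarizer script. The prompt wording, branch range, and truncation limit are my own illustrative choices, and the API call only runs when a GEMINI_API_KEY is set:

```python
# PR Summarizer sketch: read a git diff, wrap it in a prompt, and ask
# Gemini for a bulleted Pull Request summary.
import os
import subprocess

MAX_DIFF_CHARS = 50_000  # stay comfortably inside the context window


def build_pr_prompt(diff: str) -> str:
    """Wrap a git diff in a summarization prompt, truncating huge diffs."""
    if len(diff) > MAX_DIFF_CHARS:
        diff = diff[:MAX_DIFF_CHARS] + "\n... [diff truncated]"
    return (
        "Summarize this git diff as 3-5 bullet points for a Pull Request "
        "description. Focus on behavior changes, not formatting.\n\n" + diff
    )


if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    from google import genai  # pip install google-genai

    diff = subprocess.run(
        ["git", "diff", "main...HEAD"], capture_output=True, text=True
    ).stdout
    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[build_pr_prompt(diff)],
    )
    print(response.text)
```

The Error Log Analyzer works the same way: swap the diff for a stack trace and add a JSON-mode schema for the file name and line number.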

&lt;p&gt;&lt;strong&gt;7 Conclusion&lt;/strong&gt;&lt;br&gt;
The gap between “using AI” and “building with AI” is smaller than you think. Google AI Studio bridges that gap by letting you prototype visually and export programmatically. Stop writing your prompt templates from scratch. Build them in the Studio, click “Get Code,” and ship it.&lt;/p&gt;

</description>
      <category>googleaichallenge</category>
      <category>googlecloud</category>
      <category>googleaistudio</category>
      <category>aifordevelopers</category>
    </item>
    <item>
      <title>Google Nano Banana: How Prompt Structure Changes AI Image Results</title>
      <dc:creator>Ashwin Mehta</dc:creator>
      <pubDate>Tue, 30 Dec 2025 14:09:42 +0000</pubDate>
      <link>https://dev.to/ashwin_mehta/google-nano-banana-how-prompt-structure-changes-ai-image-results-488l</link>
      <guid>https://dev.to/ashwin_mehta/google-nano-banana-how-prompt-structure-changes-ai-image-results-488l</guid>
      <description>&lt;p&gt;Introduction&lt;br&gt;
While experimenting with Google’s Nano model (popularly called Nano Banana 🍌), I realized something interesting:&lt;/p&gt;

&lt;p&gt;AI image quality doesn’t depend only on the model—it heavily depends on how you prompt it.&lt;/p&gt;

&lt;p&gt;In this post, I’ll share a simple prompting framework I learned that makes AI-generated images more controlled, expressive, and realistic, even for beginners.&lt;/p&gt;

&lt;p&gt;This blog is written from a learning-by-doing perspective, not a theoretical one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Is Google Nano Banana?&lt;/strong&gt;&lt;br&gt;
Google Nano Banana is a lightweight multimodal AI model that focuses on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Image understanding&lt;/li&gt;
&lt;li&gt;Reasoning-based generation&lt;/li&gt;
&lt;li&gt;Predicting what happens next instead of just static outputs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The real power comes from structured prompts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The 5-Step Prompt Formula (Core Learning)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Through experimentation, I found that breaking prompts into components dramatically improves results.&lt;/p&gt;

&lt;p&gt;The 5 Key Prompt Elements&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subject – Who or what is in the image&lt;/li&gt;
&lt;li&gt;Action – What the subject is doing&lt;/li&gt;
&lt;li&gt;Scene – Where it happens&lt;/li&gt;
&lt;li&gt;Style – Visual aesthetic or era&lt;/li&gt;
&lt;li&gt;Composition – Camera angle or framing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F435nvug7sl9gr3yyvaq0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F435nvug7sl9gr3yyvaq0.png" alt=" " width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example Prompt - Create an image of me (subject) laughing (action)&lt;br&gt;
in a 1960s café (scene). Make it a close-up shot in a vintage photography style (composition and style).&lt;/p&gt;
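The five-element formula is easy to template. Here is a tiny helper that assembles the elements into one prompt; the sentence template is my own illustration, not part of any official API:

```python
# Assemble the 5 prompt elements (subject, action, scene, style,
# composition) into a single image-generation prompt string.
def build_image_prompt(subject: str, action: str, scene: str,
                       style: str, composition: str) -> str:
    """Combine the five elements into one structured prompt."""
    return (
        f"Create an image of {subject} {action} in {scene}. "
        f"Make it a {composition} in a {style} style."
    )


print(build_image_prompt(
    subject="me",
    action="laughing",
    scene="a 1960s café",
    style="vintage photography",
    composition="close-up shot",
))
# → Create an image of me laughing in a 1960s café.
#   Make it a close-up shot in a vintage photography style.
```

Templating the prompt this way makes iteration systematic: change one element at a time and compare results.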

&lt;p&gt;&lt;strong&gt;Going Beyond Static Images: “What If” Reasoning&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the coolest things about Nano Banana is reasoning-based continuation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Set a clear stage&lt;/strong&gt;&lt;br&gt;
Generate an image of a person standing and holding a 3-tier cake.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49dgja4dwlnbr2ifs9ao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F49dgja4dwlnbr2ifs9ao.png" alt=" " width="599" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Trigger an action&lt;/strong&gt;&lt;br&gt;
Now generate an image showing what would happen if they tripped.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfq664g9w5zrz0ad2e7y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfq664g9w5zrz0ad2e7y.png" alt=" " width="599" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The model doesn’t just redraw—it predicts the next logical outcome, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Body posture&lt;/li&gt;
&lt;li&gt;Object movement&lt;/li&gt;
&lt;li&gt;Environmental reaction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This feels closer to storytelling, not image generation.&lt;/p&gt;

&lt;h3&gt;What I Learned from This Experiment&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI models perform better with structured context&lt;/li&gt;
&lt;li&gt;“What if” prompts unlock reasoning ability&lt;/li&gt;
&lt;li&gt;Prompting is becoming a skill, not just typing text&lt;/li&gt;
&lt;li&gt;Composition matters as much as description&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Common Mistakes Beginners Make&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing very long, unstructured prompts&lt;/li&gt;
&lt;li&gt;Mixing multiple scenes at once&lt;/li&gt;
&lt;li&gt;Ignoring camera composition&lt;/li&gt;
&lt;li&gt;Expecting AI to “guess” intent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best Practices for Prompting&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Think like a director, not a user&lt;/li&gt;
&lt;li&gt;Separate what, where, and how&lt;/li&gt;
&lt;li&gt;Add actions to make images dynamic&lt;/li&gt;
&lt;li&gt;Test small changes and iterate&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>googlecloud</category>
      <category>gemini</category>
      <category>nanobanana</category>
      <category>promptengineering</category>
    </item>
  </channel>
</rss>
