<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aman Bhargav</title>
    <description>The latest articles on DEV Community by Aman Bhargav (@aman_bhargav_1f85e63584bc).</description>
    <link>https://dev.to/aman_bhargav_1f85e63584bc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3917893%2F86ad58b2-5138-43bf-bc14-eab5158e12d8.png</url>
      <title>DEV Community: Aman Bhargav</title>
      <link>https://dev.to/aman_bhargav_1f85e63584bc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aman_bhargav_1f85e63584bc"/>
    <language>en</language>
    <item>
      <title>After Testing Gemma 4, I Finally Understand The Local AI Hype</title>
      <dc:creator>Aman Bhargav</dc:creator>
      <pubDate>Mon, 11 May 2026 06:35:57 +0000</pubDate>
      <link>https://dev.to/aman_bhargav_1f85e63584bc/after-testing-gemma-4-i-finally-understand-the-local-ai-hype-2ggo</link>
      <guid>https://dev.to/aman_bhargav_1f85e63584bc/after-testing-gemma-4-i-finally-understand-the-local-ai-hype-2ggo</guid>
      <description>&lt;p&gt;I used to think local AI was mostly useful for demos and small experiments.&lt;/p&gt;

&lt;p&gt;Then I spent a few days testing Gemma 4 on my laptop.&lt;/p&gt;

&lt;p&gt;The interesting part wasn’t benchmark numbers. It was how usable the model actually felt during normal development work.&lt;/p&gt;

&lt;p&gt;I tested it with:&lt;/p&gt;

&lt;p&gt;Rails services&lt;br&gt;
old migrations&lt;br&gt;
background jobs&lt;br&gt;
messy business logic&lt;/p&gt;

&lt;p&gt;and it handled repository context better than I expected from a local model.&lt;/p&gt;

&lt;p&gt;It still struggles with bigger architectural decisions and long autonomous tasks, but debugging and code understanding were surprisingly solid.&lt;/p&gt;

&lt;p&gt;The biggest takeaway for me:&lt;br&gt;
local models are finally becoming practical enough that I’d actually keep one running during daily work.&lt;/p&gt;

</description>
      <category>ai</category>
    </item>
    <item>
      <title>Gemma 4 Feels Different From Most Open Models I’ve Tested</title>
      <dc:creator>Aman Bhargav</dc:creator>
      <pubDate>Mon, 11 May 2026 06:35:33 +0000</pubDate>
      <link>https://dev.to/aman_bhargav_1f85e63584bc/gemma-4-feels-different-from-most-open-models-ive-tested-1pbg</link>
      <guid>https://dev.to/aman_bhargav_1f85e63584bc/gemma-4-feels-different-from-most-open-models-ive-tested-1pbg</guid>
      <description>&lt;p&gt;Most local models impress me for about 10 minutes.&lt;/p&gt;

&lt;p&gt;Then the context starts breaking, responses become repetitive, and debugging turns into prompt wrestling.&lt;/p&gt;

&lt;p&gt;Gemma 4 was the first open model where I didn’t hit that wall immediately.&lt;/p&gt;

&lt;p&gt;I tested it against a real Rails codebase instead of toy examples, and it was surprisingly good at:&lt;/p&gt;

&lt;p&gt;tracing Sidekiq flows&lt;br&gt;
finding duplicated logic&lt;br&gt;
explaining legacy code&lt;br&gt;
spotting missing indexes&lt;/p&gt;

&lt;p&gt;The reasoning mode in particular made the responses feel less like autocomplete and more like actual step-by-step analysis.&lt;/p&gt;

&lt;p&gt;Not perfect.&lt;br&gt;
Still weaker than larger cloud models.&lt;/p&gt;

&lt;p&gt;But honestly, much more practical than I expected from a local setup.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>developer</category>
      <category>gemmachallenge</category>
      <category>devchallenge</category>
    </item>
    <item>
      <title>I Didn’t Expect Gemma 4 To Be This Good Locally</title>
      <dc:creator>Aman Bhargav</dc:creator>
      <pubDate>Mon, 11 May 2026 06:34:48 +0000</pubDate>
      <link>https://dev.to/aman_bhargav_1f85e63584bc/i-didnt-expect-gemma-4-to-be-this-good-locally-42di</link>
      <guid>https://dev.to/aman_bhargav_1f85e63584bc/i-didnt-expect-gemma-4-to-be-this-good-locally-42di</guid>
      <description>&lt;p&gt;I’ve tested a lot of local models recently, and honestly most of them start struggling once you give them real coding tasks instead of benchmark-style prompts.&lt;/p&gt;

&lt;p&gt;So I tried Gemma 4 with one of my Rails projects expecting the same thing.&lt;/p&gt;

&lt;p&gt;What surprised me most wasn’t the raw output quality. It was the consistency.&lt;/p&gt;

&lt;p&gt;I tested:&lt;/p&gt;

&lt;p&gt;Sidekiq debugging&lt;br&gt;
ActiveRecord query optimization&lt;br&gt;
serializer cleanup&lt;br&gt;
migration reviews&lt;/p&gt;

&lt;p&gt;and the model stayed usable much longer than I expected.&lt;/p&gt;

&lt;p&gt;Using &lt;code&gt;&amp;lt;|think|&amp;gt;&lt;/code&gt; made a noticeable difference too. The responses became slower but more structured, especially during debugging.&lt;/p&gt;

&lt;p&gt;It’s still not replacing larger cloud models for complex architecture work, but for local development workflows this feels much closer to practical than previous open models I’ve tried.&lt;/p&gt;

&lt;p&gt;Honestly, this is the first time local AI stopped feeling like just an experiment.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>gemmachallenge</category>
      <category>gemini</category>
      <category>devchallenge</category>
    </item>
  </channel>
</rss>
