<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rakesh Mondal</title>
    <description>The latest articles on DEV Community by Rakesh Mondal (@rakesh_cse_2004).</description>
    <link>https://dev.to/rakesh_cse_2004</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3920369%2Fcca4620d-4c8e-4a09-8753-4e755ab54b3e.png</url>
      <title>DEV Community: Rakesh Mondal</title>
      <link>https://dev.to/rakesh_cse_2004</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rakesh_cse_2004"/>
    <language>en</language>
    <item>
      <title>I Let Gemma 4 Read My Codebase at 3AM — Here's What Happened</title>
      <dc:creator>Rakesh Mondal</dc:creator>
      <pubDate>Fri, 08 May 2026 17:39:14 +0000</pubDate>
      <link>https://dev.to/rakesh_cse_2004/i-let-gemma-4-read-my-codebase-at-3am-heres-what-happened-229i</link>
      <guid>https://dev.to/rakesh_cse_2004/i-let-gemma-4-read-my-codebase-at-3am-heres-what-happened-229i</guid>
      <description>&lt;h1&gt;
  
  
  I Let Gemma 4 Read My Codebase at 3AM — Here's What Happened
&lt;/h1&gt;

&lt;p&gt;There's a specific kind of frustration that hits at 3AM.&lt;/p&gt;

&lt;p&gt;You have 47 open tabs. A bug that shouldn't exist. And a cloud AI bill&lt;br&gt;
that's climbing faster than your caffeine intake.&lt;/p&gt;

&lt;p&gt;That night, I stopped sending requests to the cloud. I pulled&lt;br&gt;
&lt;strong&gt;Gemma 4&lt;/strong&gt; locally, pointed it at my codebase, and asked it a&lt;br&gt;
question I'd been afraid to ask any AI out loud:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"What's wrong with how I've structured this entire project?"&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;What came back wasn't a compliment. It was a diagnosis.&lt;/p&gt;

&lt;p&gt;That's when I knew Gemma 4 was different.&lt;/p&gt;


&lt;h2&gt;
  
  
  What Even Is Gemma 4? (The Part Nobody Explains Clearly)
&lt;/h2&gt;

&lt;p&gt;Most articles will throw a spec sheet at you. I won't.&lt;/p&gt;

&lt;p&gt;Here's the honest version:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gemma 4&lt;/strong&gt; is Google's open-weight model family — meaning the weights&lt;br&gt;
are yours. You can run it on your machine, fine-tune it on your data,&lt;br&gt;
ship it inside your product, and never send a single token to a&lt;br&gt;
third-party server.&lt;/p&gt;

&lt;p&gt;The 2026 release brought four variants into the real world:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Parameters&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;gemma-4-it-2b&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;2B&lt;/td&gt;
&lt;td&gt;Edge devices, fast inference&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;gemma-4-it-9b&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;9B&lt;/td&gt;
&lt;td&gt;Laptop/desktop, balanced power&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;gemma-4-it-27b&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;27B&lt;/td&gt;
&lt;td&gt;Workstation, near-frontier quality&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;gemma-4-pt-*&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;All sizes&lt;/td&gt;
&lt;td&gt;Fine-tuning your own domain&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code&gt;it&lt;/code&gt; means instruction-tuned. The &lt;code&gt;pt&lt;/code&gt; means pre-trained base.&lt;/p&gt;

&lt;p&gt;For most developers reading this — &lt;strong&gt;start with 9B&lt;/strong&gt;. It's the&lt;br&gt;
Goldilocks: smart enough to reason properly, small enough to run on a&lt;br&gt;
16GB MacBook without setting it on fire.&lt;/p&gt;
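&lt;p&gt;If you want to sanity-check that claim, the weight math is simple. A quick sketch (my own numbers: weights only, ignoring KV cache and runtime overhead):&lt;/p&gt;

```python
# Rough RAM needed just to hold the weights at a given precision.
# This ignores KV cache and runtime overhead, so treat it as a floor.

def weight_memory_gb(params_billions, bits_per_param):
    """Approximate weight memory in GB for a given quantization."""
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 1024**3

for size in (2, 9, 27):
    for bits in (16, 4):
        print(f"{size}B @ {bits}-bit: ~{weight_memory_gb(size, bits):.1f} GB")
```

&lt;p&gt;At 4-bit quantization the 9B weights come in around 4 GB, which is why it coexists peacefully with your browser tabs on a 16GB machine.&lt;/p&gt;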


&lt;h2&gt;
  
  
  The Setup Nobody Shows You (That Actually Works)
&lt;/h2&gt;

&lt;p&gt;I'm not going to give you a copy-paste Colab notebook.&lt;/p&gt;

&lt;p&gt;I'm going to tell you what I actually did on my development machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Requirements:&lt;/strong&gt; Python 3.10+, ~20GB disk space, 16GB RAM minimum&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Step 1: Install Ollama (the easiest local inference runtime)&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://ollama.com/install.sh | sh

&lt;span class="c"&gt;# Step 2: Pull Gemma 4 9B&lt;/span&gt;
ollama pull gemma4:9b

&lt;span class="c"&gt;# Step 3: Run it — that's literally it&lt;/span&gt;
ollama run gemma4:9b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within 4 minutes I had a running model on my laptop. No API key.&lt;br&gt;
No rate limits. No billing dashboard sending me anxiety emails.&lt;/p&gt;

&lt;p&gt;If you want to call it programmatically from Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ollama&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ollama&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;gemma4:9b&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Explain transformer attention in 3 lines for a junior dev.&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's it. That's the whole integration.&lt;/p&gt;
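&lt;p&gt;One step past that: for longer answers you'll want tokens as they arrive instead of one blob at the end. A sketch using the client's streaming mode; &lt;code&gt;collect_stream&lt;/code&gt; and &lt;code&gt;demo&lt;/code&gt; are my own helpers, only &lt;code&gt;stream=True&lt;/code&gt; comes from the library:&lt;/p&gt;

```python
# Print tokens as they stream in, then keep the full reply at the end.

def collect_stream(chunks):
    """Join streamed chat chunks into the final reply text."""
    parts = []
    for chunk in chunks:
        piece = chunk["message"]["content"]
        print(piece, end="", flush=True)
        parts.append(piece)
    return "".join(parts)

def demo():
    # Requires a running Ollama server with the model pulled.
    import ollama
    stream = ollama.chat(
        model="gemma4:9b",
        messages=[{"role": "user", "content": "Explain GIL contention briefly."}],
        stream=True,  # yields partial chunks instead of one response
    )
    return collect_stream(stream)
```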




&lt;h2&gt;
  
  
  What Gemma 4 Is Surprisingly Good At
&lt;/h2&gt;

&lt;p&gt;Here's what I actually tested — not benchmarks, real developer tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Code Review With Actual Opinions
&lt;/h3&gt;

&lt;p&gt;I fed it a 200-line Python module and asked: &lt;em&gt;"What would you refactor&lt;br&gt;
and why?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It didn't just flag syntax. It pointed out that I was violating the&lt;br&gt;
single-responsibility principle in two specific functions, suggested a&lt;br&gt;
strategy pattern for a switch-heavy block, and noted that my error&lt;br&gt;
handling was "optimistic to the point of being dangerous."&lt;/p&gt;

&lt;p&gt;That last phrase. An open model called my error handling &lt;em&gt;dangerous&lt;/em&gt;.&lt;br&gt;
I checked. It was right.&lt;/p&gt;
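&lt;p&gt;If you want to run that review on your own module, the plumbing is tiny. A sketch; &lt;code&gt;build_review_prompt&lt;/code&gt; and &lt;code&gt;review_file&lt;/code&gt; are my own inventions, only &lt;code&gt;ollama.chat&lt;/code&gt; is the library's API:&lt;/p&gt;

```python
# Wrap a source file in a pointed prompt and hand it to the local model.

def build_review_prompt(source_code, filename):
    """Ask for design-level opinions, not just lint output."""
    return (
        f"You are reviewing {filename}. What would you refactor and why? "
        "Flag design problems, not just syntax:\n\n" + source_code
    )

def review_file(path):
    # Requires a running Ollama server with gemma4:9b pulled.
    import ollama
    with open(path) as f:
        prompt = build_review_prompt(f.read(), path)
    response = ollama.chat(
        model="gemma4:9b",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]
```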
&lt;h3&gt;
  
  
  2. Explaining Concepts Without the Wikipedia Tone
&lt;/h3&gt;

&lt;p&gt;Ask it to explain backpropagation "like I'm a developer who never&lt;br&gt;
studied ML formally" and it actually adjusts. No textbook preamble.&lt;br&gt;
It starts with the thing you care about: &lt;em&gt;"Think of it as blame&lt;br&gt;
assignment — figuring out which weight caused the mistake."&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Generating Boilerplate That Doesn't Embarrass You
&lt;/h3&gt;

&lt;p&gt;I asked it for a FastAPI authentication module with JWT. It gave me&lt;br&gt;
working code, added comments explaining &lt;em&gt;why&lt;/em&gt; each security decision&lt;br&gt;
was made, and proactively told me what it deliberately left out and&lt;br&gt;
why.&lt;/p&gt;

&lt;p&gt;It has opinions. That's the difference.&lt;/p&gt;


&lt;h2&gt;
  
  
  Where It Struggles (Honest Review)
&lt;/h2&gt;

&lt;p&gt;I'd be doing you a disservice if I only sang praise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gemma 4 27B will challenge your hardware.&lt;/strong&gt; On a machine without a&lt;br&gt;
capable GPU, you're looking at inference slow enough to break the&lt;br&gt;
conversational rhythm. For heavy lifting, you need serious VRAM or an&lt;br&gt;
aggressively quantized build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Very long context tasks degrade.&lt;/strong&gt; Feed it a 10,000-line codebase&lt;br&gt;
and ask questions about module relationships — the coherence drops&lt;br&gt;
towards the end of the context window. It's improving, but this is&lt;br&gt;
real.&lt;/p&gt;
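&lt;p&gt;The workaround I've settled on: don't paste 10,000 lines at once. Chunk the codebase, ask the same question per chunk, then have the model merge its own notes. A sketch; the chunking helper and prompt wording are mine:&lt;/p&gt;

```python
# Split a big file into line-bounded chunks the model can hold coherently.

def chunk_lines(text, max_lines=400):
    """Break text into chunks of at most max_lines lines each."""
    lines = text.splitlines()
    return [
        "\n".join(lines[i:i + max_lines])
        for i in range(0, len(lines), max_lines)
    ]

def map_reduce_notes(source):
    # Requires a running Ollama server with gemma4:9b pulled.
    import ollama
    notes = []
    for part in chunk_lines(source):
        r = ollama.chat(
            model="gemma4:9b",
            messages=[{"role": "user",
                       "content": "Note the module relationships here:\n" + part}],
        )
        notes.append(r["message"]["content"])
    # Final pass: ask the model to synthesize its own per-chunk notes.
    merged = ollama.chat(
        model="gemma4:9b",
        messages=[{"role": "user",
                   "content": "Combine these notes into one summary:\n"
                              + "\n---\n".join(notes)}],
    )
    return merged["message"]["content"]
```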

&lt;p&gt;&lt;strong&gt;It's not GPT-4 class at reasoning chains.&lt;/strong&gt; Complex multi-step&lt;br&gt;
mathematical proofs or deeply layered logical puzzles — the 9B model&lt;br&gt;
makes confident mistakes. The 27B is significantly better, but there's&lt;br&gt;
still a gap versus frontier closed models.&lt;/p&gt;

&lt;p&gt;Know what you're using it for. Don't bring a scalpel to fell a tree.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Thing That Actually Matters: It's Yours
&lt;/h2&gt;

&lt;p&gt;I want to stop and say something that the spec sheets miss.&lt;/p&gt;

&lt;p&gt;When I ran Gemma 4 locally, I sent it my actual database schema.&lt;br&gt;
My actual API architecture. Conversations about real design decisions&lt;br&gt;
in a real product.&lt;/p&gt;

&lt;p&gt;With cloud AI, every one of those prompts travels somewhere.&lt;br&gt;
Gets logged somewhere. Possibly trains something somewhere.&lt;/p&gt;

&lt;p&gt;With Gemma 4, that conversation stayed on my machine.&lt;/p&gt;

&lt;p&gt;For indie developers, for students building real projects, for&lt;br&gt;
engineers at companies with data policies — &lt;strong&gt;ownership of inference&lt;br&gt;
is not a small thing&lt;/strong&gt;. It's the whole thing.&lt;/p&gt;


&lt;h2&gt;
  
  
  Fine-Tuning: When The Base Model Isn't Enough
&lt;/h2&gt;

&lt;p&gt;If the base Gemma 4 doesn't know your domain deeply enough — you can&lt;br&gt;
teach it.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;pt&lt;/code&gt; (pre-trained) variants are designed exactly for this. Using&lt;br&gt;
&lt;strong&gt;QLoRA&lt;/strong&gt; (Quantized Low-Rank Adaptation), you can fine-tune on a&lt;br&gt;
single consumer GPU:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;transformers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AutoModelForCausalLM&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;AutoTokenizer&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;peft&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;LoraConfig&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;get_peft_model&lt;/span&gt;

&lt;span class="n"&gt;model_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;google/gemma-4-9b-pt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;tokenizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;AutoTokenizer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_pretrained&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;AutoModelForCausalLM&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_pretrained&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;load_in_4bit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;# QLoRA quantization
&lt;/span&gt;    &lt;span class="n"&gt;device_map&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;auto&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;lora_config&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LoraConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;lora_alpha&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;32&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;target_modules&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;q_proj&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;v_proj&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;lora_dropout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.05&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;bias&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;none&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;task_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;CAUSAL_LM&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;get_peft_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;lora_config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print_trainable_parameters&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="c1"&gt;# trainable params: 41,943,040 — about 0.5% of total weights
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You're not retraining the whole model. You're teaching it a dialect.&lt;br&gt;
Your codebase's patterns. Your documentation's tone. Your domain's&lt;br&gt;
vocabulary.&lt;/p&gt;

&lt;p&gt;That's genuinely powerful.&lt;/p&gt;
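&lt;p&gt;One practical note: fine-tuning starts with data, not GPUs. Before any QLoRA run you need your examples in a consistent format. A sketch that writes prompt/response pairs as JSONL; the field names are an assumption, so match whatever your training script expects:&lt;/p&gt;

```python
import json

# One JSON object per line; most fine-tuning scripts ingest this shape.
# The field names ("prompt"/"response") are an assumption, not a standard.

def to_jsonl(pairs, path):
    """Write (prompt, response) pairs as one JSON object per line."""
    with open(path, "w") as f:
        for prompt, response in pairs:
            f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")

pairs = [
    ("How do we name service classes?", "Suffix with Service, e.g. BillingService."),
    ("Where do API errors get logged?", "Through the core logging module, never print()."),
]
to_jsonl(pairs, "train.jsonl")
```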


&lt;h2&gt;
  
  
  Which Variant Should You Use? (My Decision Tree)
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Are you building for edge / mobile?
└─ YES → gemma-4-it-2b
Do you have a consumer GPU (RTX 3060+)?
└─ YES → gemma-4-it-9b   ← start here for most projects
Do you have a workstation GPU (A100, H100, RTX 4090)?
└─ YES → gemma-4-it-27b
Do you need domain specialization?
└─ YES → gemma-4-pt-[size] + QLoRA fine-tuning
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Don't over-engineer the decision. Run 9B. If it surprises you,&lt;br&gt;
you're done. If it disappoints you, scale up.&lt;/p&gt;
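&lt;p&gt;If you like your decision trees executable, here's the same logic as a function (model names and thresholds simply mirror the recommendations above):&lt;/p&gt;

```python
# The decision tree above, as a function you can argue with.

def pick_gemma(edge=False, consumer_gpu=False, workstation_gpu=False,
               needs_domain_tuning=False):
    if needs_domain_tuning:
        return "gemma-4-pt + QLoRA fine-tuning"
    if edge:
        return "gemma-4-it-2b"
    if workstation_gpu:
        return "gemma-4-it-27b"
    if consumer_gpu:
        return "gemma-4-it-9b"
    return "gemma-4-it-9b"  # default: start at 9B, scale up if it disappoints

print(pick_gemma(consumer_gpu=True))  # gemma-4-it-9b
```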


&lt;h2&gt;
  
  
  What Open-Source Models at This Level Mean for Us
&lt;/h2&gt;

&lt;p&gt;I've been a developer for long enough to remember when "run AI&lt;br&gt;
locally" meant a bad chatbot with a 5-word vocabulary.&lt;/p&gt;

&lt;p&gt;Gemma 4 isn't that.&lt;/p&gt;

&lt;p&gt;It's a model that a solo developer — with no enterprise contract, no&lt;br&gt;
research budget, no special access — can run, fine-tune, deploy, and&lt;br&gt;
own completely. That is a structural shift in who gets to build with&lt;br&gt;
AI.&lt;/p&gt;

&lt;p&gt;The frontier is moving fast. But the open-source ecosystem is moving&lt;br&gt;
faster than most people realize.&lt;/p&gt;

&lt;p&gt;Gemma 4 isn't trying to beat GPT-5. It's trying to be the model that&lt;br&gt;
10 million developers actually use, modify, and ship. And honestly?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It might already be winning that race.&lt;/strong&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Try This Tonight
&lt;/h2&gt;

&lt;p&gt;Don't just read this. Do something.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ollama pull gemma4:9b
ollama run gemma4:9b &lt;span class="s2"&gt;"Review this code and be honest: [paste any function you wrote this week]"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then come back here and leave a comment telling me what it said.&lt;/p&gt;

&lt;p&gt;I want to know if your code got called dangerous too.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Written by a developer who was tired of API bills and started asking&lt;br&gt;
better questions locally.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;All code tested on: MacBook Pro M2 16GB, Ubuntu 22.04 with RTX 3080.&lt;/em&gt;&lt;/p&gt;


</description>
      <category>devchallenge</category>
      <category>gemmachallenge</category>
      <category>gemma</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
