<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: akartit</title>
    <description>The latest articles on DEV Community by akartit (@akartit).</description>
    <link>https://dev.to/akartit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3342534%2F199f4f91-fb5b-4d7a-bb3c-b21848bdf850.png</url>
      <title>DEV Community: akartit</title>
      <link>https://dev.to/akartit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/akartit"/>
    <language>en</language>
    <item>
      <title>I Tested Every Gemma 4 Model Locally on My MacBook - What Actually Works</title>
      <dc:creator>akartit</dc:creator>
      <pubDate>Sat, 04 Apr 2026 10:06:32 +0000</pubDate>
      <link>https://dev.to/akartit/i-tested-every-gemma-4-model-locally-on-my-macbook-what-actually-works-3g2o</link>
      <guid>https://dev.to/akartit/i-tested-every-gemma-4-model-locally-on-my-macbook-what-actually-works-3g2o</guid>
      <description>&lt;p&gt;&lt;em&gt;Audio ASR in 3 languages, image understanding, full-stack app generation, coding, and agentic behavior -- all running on a MacBook M4 Pro with 24GB RAM.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Interactive version with playable audio, live charts, and the working React app:&lt;/strong&gt; &lt;a href="https://gemma4-benchmark.pages.dev" rel="noopener noreferrer"&gt;gemma4-benchmark.pages.dev&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/1UBpg6efjBs"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;




&lt;p&gt;Google just released Gemma 4 -- its new family of open-weight multimodal models: four sizes, Apache-2.0 licensed, with text + image + audio support.&lt;/p&gt;

&lt;p&gt;I spent a day testing every variant. Real audio files. Real images. Code that has to compile and run. Here is my honest report.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Gemma 4 Family
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;E2B&lt;/strong&gt; -- Dense 2.3B, Text/Image/Audio, 4 GB at 4-bit. Phones and edge devices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;E4B&lt;/strong&gt; -- Dense 4.5B, Text/Image/Audio, 5.5 GB at 4-bit. Laptops.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;26B-A4B&lt;/strong&gt; -- MoE 4B active/26B total, Text/Image, 16-18 GB at 4-bit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;31B&lt;/strong&gt; -- Dense 31B, Text/Image, 17-20 GB at 4-bit. Maximum quality.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Speed Benchmarks
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4aqwtq5rh2rn0euwhsdh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4aqwtq5rh2rn0euwhsdh.png" alt="Speed benchmark chart" width="799" height="534"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ollama:&lt;/strong&gt; E2B &lt;strong&gt;95 tok/s&lt;/strong&gt; | E4B &lt;strong&gt;57 tok/s&lt;/strong&gt; | 26B ~2 tok/s (swap) | 31B won't fit&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unsloth MLX:&lt;/strong&gt; E2B 81 tok/s (3.6 GB) | E4B 49 tok/s (5.6 GB)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ollama is 15-20% faster. Unsloth MLX uses 40% less memory.&lt;/strong&gt;&lt;/p&gt;
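&lt;p&gt;Those percentages come straight from the per-model numbers; a quick sanity check:&lt;/p&gt;

```python
# Sanity-check the speed comparison from the per-model numbers above.
ollama = {"e2b": 95, "e4b": 57}  # tok/s via Ollama
mlx = {"e2b": 81, "e4b": 49}     # tok/s via Unsloth MLX

# prints: e2b is 17% faster, e4b is 16% faster -- inside the 15-20% range
for model in ollama:
    speedup = (ollama[model] / mlx[model] - 1) * 100
    print(f"{model}: Ollama is {speedup:.0f}% faster")
```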




&lt;h2&gt;
  
  
  Audio ASR: 3 Languages
&lt;/h2&gt;

&lt;p&gt;Tested via Ollama's OpenAI-compatible endpoint. Only E2B and E4B support audio.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lma66xvmiyldchg169w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8lma66xvmiyldchg169w.png" alt="Audio ASR quality comparison" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Listen to all test audio samples:&lt;/strong&gt; &lt;a href="https://gemma4-benchmark.pages.dev/audio_player.html" rel="noopener noreferrer"&gt;Audio Player&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
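&lt;p&gt;For reference, each clip went over as a base64 chat attachment. A minimal sketch of the request shape -- the &lt;code&gt;input_audio&lt;/code&gt; content part follows the OpenAI chat schema, and whether a given Ollama build accepts it this way is an assumption to verify against your version:&lt;/p&gt;

```python
import base64

# Build an OpenAI-style chat request carrying a base64-encoded audio clip.
# Field names ("input_audio", "format") follow the OpenAI chat schema and
# are an assumption about what your Ollama version accepts.
def build_audio_request(model, wav_bytes, prompt):
    audio_b64 = base64.b64encode(wav_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "input_audio",
                 "input_audio": {"data": audio_b64, "format": "wav"}},
            ],
        }],
    }

req = build_audio_request("gemma4:e4b", b"RIFF...", "Transcribe this audio.")
# POST the JSON body to http://localhost:11434/v1/chat/completions
```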

&lt;h3&gt;
  
  
  English ASR
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;E4B (1.0s):&lt;/strong&gt; Perfect transcription. Every word correct with punctuation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;E2B (2.8s):&lt;/strong&gt; Garbled -- missing words, no punctuation.&lt;/p&gt;

&lt;h3&gt;
  
  
  French ASR
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;E4B (1.6s):&lt;/strong&gt; Perfect transcription with all French accents correct.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;E2B (4.1s):&lt;/strong&gt; Fragmented, missing most of the sentence.&lt;/p&gt;

&lt;h3&gt;
  
  
  Arabic ASR
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;E4B (6.0s):&lt;/strong&gt; Perfect Arabic transcription -- every word correct.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;E2B (6.0s):&lt;/strong&gt; Garbled -- wrong words, disordered.&lt;/p&gt;

&lt;h3&gt;
  
  
  Speech Translation (E4B)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;French to English:&lt;/strong&gt; "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience..."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Arabic to English:&lt;/strong&gt; "Hello, I am an artificial intelligence model. Today we will test speech recognition in the Arabic language..."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;E4B is dramatically better than E2B for audio across all 3 languages.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Image Understanding
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Test 1: Thai Temple -- Landmark Identification
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fai419sdnjabqsvybuuua.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fai419sdnjabqsvybuuua.jpg" alt="Thai Temple in Bangkok" width="800" height="523"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;E4B (54 tok/s):&lt;/strong&gt; Thailand, Bangkok, &lt;strong&gt;Wat Phra Kaew&lt;/strong&gt; (Temple of the Emerald Buddha) within the Grand Palace.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;E2B (88 tok/s):&lt;/strong&gt; Thailand, Bangkok, Grand Palace (less specific).&lt;/p&gt;

&lt;h3&gt;
  
  
  Test 2: AI-Generated Tokyo + Japanese OCR
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2ha05gz4bfen8spy062.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz2ha05gz4bfen8spy062.jpg" alt="AI-generated Tokyo street at night" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AI-generated with nano-banana / Gemini&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Both models correctly read Japanese kanji: &lt;strong&gt;新宿ラーメン通り&lt;/strong&gt; (Shinjuku Ramen Street)&lt;/p&gt;

&lt;h3&gt;
  
  
  Test 3: Venice Seagull
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcu1wze6ah7j1w7axabck.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcu1wze6ah7j1w7axabck.png" alt="Seagull in Venice" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;E4B:&lt;/strong&gt; "A magnificent seagull perches watchfully atop a sculpted pedestal. The backdrop is a rich study in contrasting architectural styles..."&lt;/p&gt;




&lt;h2&gt;
  
  
  Full-Stack App Generation
&lt;/h2&gt;

&lt;p&gt;E4B generated a 155-line working React + Tailwind Task Manager:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yx0ooils57xg8d1rj2b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9yx0ooils57xg8d1rj2b.png" alt="E4B Task Manager running in browser" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Try it live:&lt;/strong&gt; &lt;a href="https://gemma4-benchmark.pages.dev/task_manager.html" rel="noopener noreferrer"&gt;gemma4-benchmark.pages.dev/task_manager.html&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;E2B failed&lt;/strong&gt; -- it produced scattered code fragments instead of a single runnable file.&lt;/p&gt;




&lt;h2&gt;
  
  
  Coding: Compile and Run
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Script&lt;/th&gt;
&lt;th&gt;E2B&lt;/th&gt;
&lt;th&gt;E4B&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Fibonacci&lt;/td&gt;
&lt;td&gt;PASS&lt;/td&gt;
&lt;td&gt;PASS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sieve of Eratosthenes&lt;/td&gt;
&lt;td&gt;PASS&lt;/td&gt;
&lt;td&gt;PASS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;JSON processor&lt;/td&gt;
&lt;td&gt;PASS&lt;/td&gt;
&lt;td&gt;PASS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HTTP request&lt;/td&gt;
&lt;td&gt;PASS&lt;/td&gt;
&lt;td&gt;PASS&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;React single file&lt;/td&gt;
&lt;td&gt;FAIL&lt;/td&gt;
&lt;td&gt;PASS&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
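&lt;p&gt;Every PASS above means the generated script actually executed. The harness for that can be tiny; this sketch assumes the model wraps its answer in a standard fenced code block (an assumption about output formatting, not part of the benchmark itself):&lt;/p&gt;

```python
import re
import subprocess
import sys
import tempfile

FENCE = chr(96) * 3  # three backticks, kept out of the listing literally

# Pull the first fenced code block out of a model response.
def extract_code(response):
    pattern = FENCE + r"(?:python)?\n(.*?)" + FENCE
    match = re.search(pattern, response, re.DOTALL)
    return match.group(1) if match else response

# Write the snippet to a temp file and run it in a fresh interpreter.
def run_snippet(code, timeout=30):
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True, timeout=timeout)
    return result.returncode == 0, result.stdout

reply = FENCE + "python\nprint(sum(range(10)))\n" + FENCE
ok, out = run_snippet(extract_code(reply))
print("PASS" if ok and out.strip() == "45" else "FAIL")  # PASS
```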




&lt;h2&gt;
  
  
  Agentic Multi-Step Reasoning
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19ayn5eme064mpvasv1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19ayn5eme064mpvasv1w.png" alt="Agentic radar chart" width="800" height="801"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A 6-step blog-platform design task. Both models completed all 6 steps; E4B's output was 57% longer and more detailed.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why 26B Fails on 24GB
&lt;/h2&gt;

&lt;p&gt;Community reports from &lt;a href="https://reddit.com/r/LocalLLaMA" rel="noopener noreferrer"&gt;r/LocalLLaMA&lt;/a&gt; suggest Gemma 4 has a KV cache memory issue (not verified on our hardware):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;31B at 262K context: ~22 GB for the KV cache alone, on top of the model weights&lt;/li&gt;
&lt;li&gt;Google did not adopt KV-reducing techniques from Qwen 3.5&lt;/li&gt;
&lt;li&gt;Workaround: &lt;code&gt;--ctx-size 8192 --cache-type-k q4_0 --parallel 1&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
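&lt;p&gt;The ~22 GB figure is consistent with back-of-envelope KV-cache arithmetic: 2 (K and V) x layers x KV heads x head dim x context length x bytes per element. The layer and head counts below are illustrative placeholders, not Gemma 4's published architecture:&lt;/p&gt;

```python
# Back-of-envelope KV-cache size for a dense transformer.
# layers/kv_heads/head_dim below are illustrative placeholders,
# NOT Gemma 4 31B's published architecture.
def kv_cache_gb(layers, kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    total = 2 * layers * kv_heads * head_dim * ctx_len * bytes_per_elem
    return total / (1024 ** 3)  # the leading 2 covers K and V

full = kv_cache_gb(layers=44, kv_heads=4, head_dim=128, ctx_len=262144)
small = kv_cache_gb(layers=44, kv_heads=4, head_dim=128, ctx_len=8192,
                    bytes_per_elem=0.5)  # q4_0-style cache at 8K ctx
print(f"262K ctx, fp16 cache: ~{full:.0f} GB")   # ~22 GB
print(f"8K ctx, 4-bit cache:  ~{small:.2f} GB")  # ~0.17 GB
```

&lt;p&gt;This is why the &lt;code&gt;--ctx-size 8192 --cache-type-k q4_0&lt;/code&gt; workaround is so effective: shrinking context and quantizing the cache attacks both big multipliers at once.&lt;/p&gt;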




&lt;h2&gt;
  
  
  Official Benchmarks
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uaf51v2nn8frsux78oi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uaf51v2nn8frsux78oi.png" alt="Official Google Benchmarks" width="800" height="668"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Final Verdict
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26fnu51wwcrea2tg3dhq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26fnu51wwcrea2tg3dhq.png" alt="Final verdict scorecard" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  E4B -- The Sweet Spot -- 8.5/10
&lt;/h3&gt;

&lt;p&gt;Perfect ASR in 3 languages. Working React app. Japanese OCR. 57 tok/s. 5.6 GB.&lt;/p&gt;

&lt;h3&gt;
  
  
  E2B -- Speed Demon -- 7/10
&lt;/h3&gt;

&lt;p&gt;95 tok/s. 3.6 GB. Python works. Audio garbled. Failed complex HTML generation.&lt;/p&gt;

&lt;h3&gt;
  
  
  26B-A4B -- Heartbreaker -- 2/10 on 24GB
&lt;/h3&gt;

&lt;p&gt;Amazing benchmarks (88.3% AIME). ~2 tok/s on 24GB. Needs 32GB+.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick Start
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;ollama
ollama pull gemma4:e4b
ollama run gemma4:e4b
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For 24GB MacBook: &lt;code&gt;ollama run gemma4:e4b&lt;/code&gt; is the answer.&lt;/strong&gt;&lt;/p&gt;
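&lt;p&gt;If you would rather script against it than use the REPL, Ollama also exposes a local REST API. A minimal Python sketch (&lt;code&gt;/api/chat&lt;/code&gt; is Ollama's native chat endpoint; the &lt;code&gt;gemma4:e4b&lt;/code&gt; tag mirrors the pull command above):&lt;/p&gt;

```python
import json
from urllib import request

# Build the request body for Ollama's native chat endpoint (/api/chat).
def build_chat_payload(prompt, model="gemma4:e4b"):
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one JSON object back instead of a stream
    }

# Send it to a locally running Ollama server and return the reply text.
def ollama_chat(prompt, host="http://localhost:11434"):
    body = json.dumps(build_chat_payload(prompt)).encode("utf-8")
    req = request.Request(host + "/api/chat", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Example (requires a running `ollama serve`):
# print(ollama_chat("Summarize MoE models in one sentence."))
```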




&lt;p&gt;&lt;em&gt;Tested April 3, 2026. MacBook Pro M4 Pro, 24GB, macOS Sequoia.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Interactive version:&lt;/strong&gt; &lt;a href="https://gemma4-benchmark.pages.dev" rel="noopener noreferrer"&gt;gemma4-benchmark.pages.dev&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sources:&lt;/strong&gt; &lt;a href="https://ai.google.dev/gemma/docs/core/model_card_4" rel="noopener noreferrer"&gt;Google Model Card&lt;/a&gt; | &lt;a href="https://huggingface.co/blog/gemma4" rel="noopener noreferrer"&gt;HuggingFace Blog&lt;/a&gt; | &lt;a href="https://ollama.com/library/gemma4" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt; | &lt;a href="https://unsloth.ai/docs/models/gemma-4" rel="noopener noreferrer"&gt;Unsloth Guide&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>gemma</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How to Add Gemma 4 Models to OpenClaw (Fix Missing Model Error)</title>
      <dc:creator>akartit</dc:creator>
      <pubDate>Fri, 03 Apr 2026 19:37:42 +0000</pubDate>
      <link>https://dev.to/akartit/how-to-add-gemma-4-models-to-openclaw-fix-missing-model-error-1b3l</link>
      <guid>https://dev.to/akartit/how-to-add-gemma-4-models-to-openclaw-fix-missing-model-error-1b3l</guid>
      <description>&lt;p&gt;Google's Gemma 4 family just dropped — and it's a big deal. Open weights, Apache 2.0 license, multimodal, reasoning-capable, and free to use via the Gemini API. But if you're running OpenClaw as your AI assistant gateway, you'll hit a wall: &lt;strong&gt;OpenClaw doesn't have Gemma 4 in its built-in model catalog yet.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's how I fixed it in under 5 minutes — for both the &lt;strong&gt;31B Dense&lt;/strong&gt; and the &lt;strong&gt;26B MoE (A4B)&lt;/strong&gt; variants.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;After setting &lt;code&gt;google/gemma-4-31b-it&lt;/code&gt; as my default model in OpenClaw, running &lt;code&gt;openclaw models list&lt;/code&gt; showed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Model                          Input   Ctx   Auth  Tags
google/gemma-4-31b-it          -       -     -     default,configured,missing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That &lt;code&gt;missing&lt;/code&gt; tag means OpenClaw has the model name in config but no metadata — no input types, no context window size, no API protocol. It doesn't know &lt;em&gt;how&lt;/em&gt; to talk to it.&lt;/p&gt;

&lt;p&gt;Meanwhile, the raw API works fine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nv"&gt;MODEL_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"gemma-4-31b-it"&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="s2"&gt;"https://generativelanguage.googleapis.com/v1beta/models/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;MODEL_ID&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:streamGenerateContent?key=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GEMINI_API_KEY&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "contents": [{
      "role": "user",
      "parts": [{"text": "Hello, what model are you?"}]
    }],
    "generationConfig": {
      "thinkingConfig": { "thinkingLevel": "HIGH" }
    },
    "tools": [{ "googleSearch": {} }]
  }'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Works perfectly. The problem is purely on OpenClaw's side.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Fix: Register Custom Model Metadata
&lt;/h2&gt;

&lt;p&gt;OpenClaw's config schema supports a top-level &lt;code&gt;models&lt;/code&gt; block where you can inject model definitions that the built-in catalog doesn't have. You need to tell OpenClaw the API protocol, capabilities, and context window for each model.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Edit your OpenClaw config
&lt;/h3&gt;

&lt;p&gt;Open &lt;code&gt;~/.openclaw/openclaw.json&lt;/code&gt; and add a &lt;code&gt;models&lt;/code&gt; block at the top level:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"models"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"mode"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"merge"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"providers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"google"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"baseUrl"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"https://generativelanguage.googleapis.com/v1beta"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"models"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gemma-4-31b-it"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Gemma 4 31B IT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"api"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"google-generative-ai"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"reasoning"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"input"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"contextWindow"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;262144&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"maxTokens"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;131072&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"gemma-4-26b-a4b-it"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Gemma 4 26B A4B IT (MoE)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"api"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"google-generative-ai"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"reasoning"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"input"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"contextWindow"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;262144&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"maxTokens"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;262144&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What each field does
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;mode&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"merge"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Adds to the existing catalog instead of replacing it&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;baseUrl&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Google's v1beta endpoint&lt;/td&gt;
&lt;td&gt;Required by schema, even for the built-in Google provider&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;api&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;"google-generative-ai"&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Tells OpenClaw to use Google's native API protocol (not OpenAI-compatible)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;reasoning&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;true&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Both models support configurable thinking modes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;input&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;["text", "image"]&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Multimodal: text + image (variable aspect ratio &amp;amp; resolution)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;contextWindow&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;262144&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;262,144 tokens -- i.e. a 256K context window&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;maxTokens&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;131072&lt;/code&gt; / &lt;code&gt;262144&lt;/code&gt;
&lt;/td&gt;
&lt;td&gt;Max output tokens (31B: 131K, 26B MoE: 262K)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
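&lt;p&gt;A typo in any of these fields quietly brings the &lt;code&gt;missing&lt;/code&gt; tag back, so it is worth sanity-checking entries before restarting the gateway. A small illustrative checker -- the required field names mirror the config above, not OpenClaw's actual schema validator:&lt;/p&gt;

```python
# Sanity-check custom model entries before restarting the gateway.
# Required field names mirror the config block above; this is an
# illustrative check, not OpenClaw's actual schema validation.
REQUIRED = ("id", "name", "api", "input", "contextWindow", "maxTokens")

def check_model(entry):
    problems = []
    for field in REQUIRED:
        if field not in entry:
            problems.append(f"missing field: {field}")
    if entry.get("api") not in ("google-generative-ai",):
        problems.append("unexpected api value: %r" % entry.get("api"))
    return problems

entry = {
    "id": "gemma-4-31b-it", "name": "Gemma 4 31B IT",
    "api": "google-generative-ai", "reasoning": True,
    "input": ["text", "image"],
    "contextWindow": 262144, "maxTokens": 131072,
}
print(check_model(entry))  # [] means the entry looks complete
```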

&lt;h3&gt;
  
  
  Step 2: Set up Google API auth
&lt;/h3&gt;

&lt;p&gt;Make sure you have a Google auth profile in OpenClaw. If you don't:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;openclaw models auth login
&lt;span class="c"&gt;# Select "google" provider, "api_key" mode&lt;/span&gt;
&lt;span class="c"&gt;# Paste your Gemini API key&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or set it via environment variable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;GEMINI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-key-here"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Validate, restart, verify
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Validate config&lt;/span&gt;
openclaw config validate

&lt;span class="c"&gt;# Restart the gateway&lt;/span&gt;
openclaw gateway restart

&lt;span class="c"&gt;# Check models&lt;/span&gt;
openclaw models list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should now see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Model                          Input      Ctx    Auth  Tags
google/gemma-4-31b-it          text+image 256k   yes   default,configured
google/gemma-4-26b-a4b-it      text+image 256k   yes   configured
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No more &lt;code&gt;missing&lt;/code&gt; tag!&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Set your default and test
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Set default model&lt;/span&gt;
openclaw models &lt;span class="nb"&gt;set &lt;/span&gt;google/gemma-4-31b-it

&lt;span class="c"&gt;# Test it&lt;/span&gt;
openclaw agent &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"Hello! What model are you?"&lt;/span&gt; &lt;span class="nt"&gt;--local&lt;/span&gt; &lt;span class="nt"&gt;--session-id&lt;/span&gt; &lt;span class="nb"&gt;test&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Response:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"I am Gemma 4, a large language model developed by Google DeepMind."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Auto-Patch Script: One Command to Fix Everything
&lt;/h2&gt;

&lt;p&gt;Don't want to edit JSON by hand? Save this script and run it — it patches your OpenClaw config automatically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="c"&gt;# openclaw-gemma4-patch.sh&lt;/span&gt;
&lt;span class="c"&gt;# Auto-patches OpenClaw config to add Gemma 4 models (31B Dense + 26B MoE)&lt;/span&gt;
&lt;span class="c"&gt;# Usage: chmod +x openclaw-gemma4-patch.sh &amp;amp;&amp;amp; ./openclaw-gemma4-patch.sh&lt;/span&gt;

&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-euo&lt;/span&gt; pipefail

&lt;span class="nv"&gt;CONFIG&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;OPENCLAW_CONFIG_PATH&lt;/span&gt;&lt;span class="k"&gt;:-&lt;/span&gt;&lt;span class="nv"&gt;$HOME&lt;/span&gt;&lt;span class="p"&gt;/.openclaw/openclaw.json&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;BACKUP&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;CONFIG&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.bak.&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt; +%s&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# --- Preflight checks ---&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; openclaw &amp;amp;&amp;gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Error: openclaw not found in PATH"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2
  &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi

if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; python3 &amp;amp;&amp;gt;/dev/null&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Error: python3 required for JSON patching"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2
  &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi

if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Error: OpenClaw config not found at &lt;/span&gt;&lt;span class="nv"&gt;$CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Run 'openclaw configure' first or set OPENCLAW_CONFIG_PATH"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2
  &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi

&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Backing up config to &lt;/span&gt;&lt;span class="nv"&gt;$BACKUP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BACKUP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# --- Patch the config ---&lt;/span&gt;
python3 &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;PYEOF&lt;/span&gt;&lt;span class="sh"&gt;'
import json, sys, os

config_path = os.environ.get("OPENCLAW_CONFIG_PATH",
    os.path.expanduser("~/.openclaw/openclaw.json"))

with open(config_path, "r") as f:
    config = json.load(f)

gemma4_models = [
    {
        "id": "gemma-4-31b-it",
        "name": "Gemma 4 31B IT",
        "api": "google-generative-ai",
        "reasoning": True,
        "input": ["text", "image"],
        "contextWindow": 262144,
        "maxTokens": 131072
    },
    {
        "id": "gemma-4-26b-a4b-it",
        "name": "Gemma 4 26B A4B IT (MoE)",
        "api": "google-generative-ai",
        "reasoning": True,
        "input": ["text", "image"],
        "contextWindow": 262144,
        "maxTokens": 262144
    }
]

if "models" not in config:
    config["models"] = {"mode": "merge", "providers": {}}
if "providers" not in config["models"]:
    config["models"]["providers"] = {}
if "google" not in config["models"]["providers"]:
    config["models"]["providers"]["google"] = {
        "baseUrl": "https://generativelanguage.googleapis.com/v1beta",
        "models": []
    }

google = config["models"]["providers"]["google"]
if "baseUrl" not in google:
    google["baseUrl"] = "https://generativelanguage.googleapis.com/v1beta"
if "models" not in google:
    google["models"] = []

existing_ids = {m["id"] for m in google["models"]}
added = []
for model in gemma4_models:
    if model["id"] not in existing_ids:
        google["models"].append(model)
        added.append(model["id"])

if "agents" in config and "defaults" in config["agents"]:
    defaults = config["agents"]["defaults"]
    if "models" not in defaults:
        defaults["models"] = {}
    for model in gemma4_models:
        key = f"google/{model['id']}"
        if key not in defaults["models"]:
            defaults["models"][key] = {}

ordered = {"models": config.pop("models")}
ordered.update(config)

with open(config_path, "w") as f:
    json.dump(ordered, f, indent=2)
    f.write("&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;")

if added:
    print(f"Added models: {', '.join(added)}")
else:
    print("Models already present, no changes needed")
&lt;/span&gt;&lt;span class="no"&gt;PYEOF

&lt;/span&gt;&lt;span class="c"&gt;# --- Validate ---&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Validating config..."&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt; openclaw config validate&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Validation failed! Restoring backup..."&lt;/span&gt;
  &lt;span class="nb"&gt;cp&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BACKUP&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$CONFIG&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Restored. Please check your config manually."&lt;/span&gt;
  &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi&lt;/span&gt;

&lt;span class="c"&gt;# --- Restart gateway ---&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Restarting gateway..."&lt;/span&gt;
openclaw gateway restart

&lt;span class="c"&gt;# --- Verify ---&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Verifying models..."&lt;/span&gt;
openclaw models list

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Done! Gemma 4 models are ready."&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Set your default with: openclaw models set google/gemma-4-31b-it"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Test with: openclaw agent -m 'Hello!' --local --session-id test-gemma"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Save it and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x openclaw-gemma4-patch.sh
./openclaw-gemma4-patch.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script is &lt;strong&gt;idempotent&lt;/strong&gt; — running it twice won't duplicate models. It backs up your config before patching and auto-rolls back if validation fails.&lt;/p&gt;
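&lt;p&gt;If validation ever fails after the fact, the timestamped backups make a manual rollback a one-liner. A small sketch, assuming the backup naming scheme from the script above (&lt;code&gt;restore_latest_backup&lt;/code&gt; is my helper name, not an OpenClaw command):&lt;/p&gt;

```shell
# Restore the most recent timestamped backup of the OpenClaw config.
# Assumes the <config>.bak.<unix-timestamp> naming used by the patch script.
restore_latest_backup() {
  local config="${1:-${OPENCLAW_CONFIG_PATH:-$HOME/.openclaw/openclaw.json}}"
  local latest
  # ls -t sorts newest-first by modification time.
  latest=$(ls -t "${config}".bak.* 2>/dev/null | head -n 1)
  if [ -n "$latest" ]; then
    cp "$latest" "$config"
    echo "Restored $config from $latest"
  else
    echo "No backups found for $config" >&2
    return 1
  fi
}

# restore_latest_backup                       # default config path
# restore_latest_backup /path/to/openclaw.json
```

&lt;p&gt;Follow it with &lt;code&gt;openclaw config validate&lt;/code&gt; to confirm the restored file is sane.&lt;/p&gt;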

&lt;h2&gt;
  
  
  Testing Both Models
&lt;/h2&gt;

&lt;p&gt;Once patched, verify both variants respond:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Test the 31B Dense model&lt;/span&gt;
openclaw models &lt;span class="nb"&gt;set &lt;/span&gt;google/gemma-4-31b-it
openclaw agent &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"What model are you? One sentence."&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--local&lt;/span&gt; &lt;span class="nt"&gt;--session-id&lt;/span&gt; test-31b &lt;span class="nt"&gt;--json&lt;/span&gt; | python3 &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"
import json,sys
r = json.load(sys.stdin)
print(f'Model: {r[&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;meta&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;][&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;agentMeta&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;][&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;model&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;]}')
print(f'Response: {r[&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;payloads&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;][0][&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;]}')
print(f'Time: {r[&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;meta&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;][&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;durationMs&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;]}ms')
"&lt;/span&gt;

&lt;span class="c"&gt;# Test the 26B MoE model&lt;/span&gt;
openclaw models &lt;span class="nb"&gt;set &lt;/span&gt;google/gemma-4-26b-a4b-it
openclaw agent &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"What model are you? One sentence."&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--local&lt;/span&gt; &lt;span class="nt"&gt;--session-id&lt;/span&gt; test-26b &lt;span class="nt"&gt;--json&lt;/span&gt; | python3 &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"
import json,sys
r = json.load(sys.stdin)
print(f'Model: {r[&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;meta&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;][&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;agentMeta&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;][&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;model&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;]}')
print(f'Response: {r[&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;payloads&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;][0][&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;text&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;]}')
print(f'Time: {r[&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;meta&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;][&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;durationMs&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;]}ms')
"&lt;/span&gt;

&lt;span class="c"&gt;# Switch back to your preferred default&lt;/span&gt;
openclaw models &lt;span class="nb"&gt;set &lt;/span&gt;google/gemma-4-31b-it
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
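&lt;p&gt;The inline parsers above will raise a &lt;code&gt;KeyError&lt;/code&gt; if the gateway returns an error payload instead of a normal response. A slightly more defensive version of the same extraction (a sketch; the key names match those used in the snippets above and are assumed to be what &lt;code&gt;openclaw agent --json&lt;/code&gt; emits):&lt;/p&gt;

```python
import json

def summarize(raw: str) -> str:
    """Pull model, response text, and duration out of `openclaw agent --json` output.

    Falls back to 'n/a' for any missing field instead of raising KeyError.
    """
    r = json.loads(raw)
    meta = r.get("meta", {})
    payloads = r.get("payloads", [])
    model = meta.get("agentMeta", {}).get("model", "n/a")
    text = payloads[0].get("text", "n/a") if payloads else "n/a"
    duration = meta.get("durationMs", "n/a")
    return f"Model: {model}\nResponse: {text}\nTime: {duration}ms"

# Usage (hypothetical pipe):
#   openclaw agent -m "Hello" --local --session-id t --json | python3 summarize.py
```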



&lt;h2&gt;
  
  
  Test via Raw curl (No OpenClaw)
&lt;/h2&gt;

&lt;p&gt;Verify your API key works before patching:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="c"&gt;# test-gemma4-api.sh — Quick API smoke test for both Gemma 4 models&lt;/span&gt;
&lt;span class="nb"&gt;set&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nt"&gt;-E&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt; &lt;span class="nt"&gt;-z&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$GEMINI_API_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;]&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;then
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Error: Set GEMINI_API_KEY first"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&amp;amp;2
  &lt;span class="nb"&gt;exit &lt;/span&gt;1
&lt;span class="k"&gt;fi

for &lt;/span&gt;MODEL &lt;span class="k"&gt;in &lt;/span&gt;gemma-4-31b-it gemma-4-26b-a4b-it&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"--- Testing &lt;/span&gt;&lt;span class="nv"&gt;$MODEL&lt;/span&gt;&lt;span class="s2"&gt; ---"&lt;/span&gt;
  curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="s2"&gt;"https://generativelanguage.googleapis.com/v1beta/models/&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;MODEL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;:generateContent?key=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GEMINI_API_KEY&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
      "contents": [{
        "role": "user",
        "parts": [{"text": "What model are you? Reply in one sentence."}]
      }],
      "generationConfig": {
        "thinkingConfig": { "thinkingLevel": "HIGH" }
      }
    }'&lt;/span&gt; | python3 &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"
import json,sys
r = json.load(sys.stdin)
for part in r.get('candidates',[{}])[0].get('content',{}).get('parts',[]):
    if 'text' in part:
        print(part['text'])
        break
"&lt;/span&gt; 2&amp;gt;/dev/null &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"FAILED"&lt;/span&gt;
  &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How I Diagnosed This
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;openclaw models list&lt;/code&gt;&lt;/strong&gt; — showed the model as &lt;code&gt;missing&lt;/code&gt; (no metadata)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;openclaw config schema&lt;/code&gt;&lt;/strong&gt; — extracted the full JSON schema to find the exact format for custom model definitions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Found the supported &lt;code&gt;api&lt;/code&gt; protocols&lt;/strong&gt;: &lt;code&gt;google-generative-ai&lt;/code&gt;, &lt;code&gt;openai-completions&lt;/code&gt;, &lt;code&gt;anthropic-messages&lt;/code&gt;, &lt;code&gt;ollama&lt;/code&gt;, and others&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Added the config block&lt;/strong&gt;, validated with &lt;code&gt;openclaw config validate&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Restarted gateway&lt;/strong&gt; and confirmed with a test message&lt;/li&gt;
&lt;/ol&gt;
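&lt;p&gt;Step 1's symptom is easy to check for yourself: an unknown model simply has no entry under the provider in the config. A quick inspection sketch, assuming the config layout used by the patch script above (&lt;code&gt;gemma4_status&lt;/code&gt; is my helper name):&lt;/p&gt;

```python
import json
import os

GEMMA4_IDS = ("gemma-4-31b-it", "gemma-4-26b-a4b-it")

def gemma4_status(config_path=None):
    """Report which Gemma 4 entries exist under models.providers.google.models."""
    config_path = config_path or os.environ.get(
        "OPENCLAW_CONFIG_PATH", os.path.expanduser("~/.openclaw/openclaw.json")
    )
    with open(config_path) as f:
        config = json.load(f)
    # Walk the nesting defensively: any level may be absent in an unpatched config.
    models = (
        config.get("models", {})
        .get("providers", {})
        .get("google", {})
        .get("models", [])
    )
    present = {m.get("id") for m in models}
    return {model_id: model_id in present for model_id in GEMMA4_IDS}

# for model_id, ok in gemma4_status().items():
#     print(f"{model_id}: {'present' if ok else 'MISSING'}")
```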

&lt;h2&gt;
  
  
  About Gemma 4
&lt;/h2&gt;

&lt;p&gt;Gemma 4 is Google's latest open model family: four models spanning three architectures (efficient, dense, and mixture-of-experts):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Parameters&lt;/th&gt;
&lt;th&gt;Active Params&lt;/th&gt;
&lt;th&gt;Context&lt;/th&gt;
&lt;th&gt;Memory (BF16)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Gemma 4 E2B&lt;/td&gt;
&lt;td&gt;Efficient&lt;/td&gt;
&lt;td&gt;2B effective&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;128K&lt;/td&gt;
&lt;td&gt;9.6 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemma 4 E4B&lt;/td&gt;
&lt;td&gt;Efficient&lt;/td&gt;
&lt;td&gt;4B effective&lt;/td&gt;
&lt;td&gt;—&lt;/td&gt;
&lt;td&gt;128K&lt;/td&gt;
&lt;td&gt;15 GB&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gemma 4 31B&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Dense&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;31B&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;31B&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;256K&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;58.3 GB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gemma 4 26B A4B&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;MoE&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;26B&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4B&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;256K&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;48 GB&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
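&lt;p&gt;The memory column tracks a simple rule of thumb: two bytes per parameter in BF16, measured in GiB. This is my back-of-envelope check, not an official figure; it works for the dense and MoE rows (an MoE model loads &lt;em&gt;all&lt;/em&gt; experts into memory even though only 4B are active per token), while the Efficient models deviate because their total parameter count exceeds the "effective" count:&lt;/p&gt;

```python
def bf16_gib(params_billion: float) -> float:
    """Approximate BF16 footprint: 2 bytes per parameter, in GiB."""
    return params_billion * 1e9 * 2 / 2**30

# Dense 31B: ~57.7 GiB, close to the table's 58.3 figure.
print(round(bf16_gib(31), 1))
# MoE 26B total parameters: ~48.4 GiB, matching the table's 48.
print(round(bf16_gib(26), 1))
```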

&lt;p&gt;Key capabilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reasoning&lt;/strong&gt; with configurable thinking modes (OFF/LOW/MEDIUM/HIGH)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal&lt;/strong&gt;: text, image (all models), video &amp;amp; audio (E2B/E4B)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Function calling&lt;/strong&gt; for agentic workflows&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Native system prompt support&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apache 2.0 license&lt;/strong&gt; — fully open for commercial use&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Free Tier Rate Limits (Gemini API)
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;RPM (Requests/min)&lt;/th&gt;
&lt;th&gt;TPM (Tokens/min)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Gemma 4 26B&lt;/td&gt;
&lt;td&gt;3 / 15&lt;/td&gt;
&lt;td&gt;77 / Unlimited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemma 4 31B&lt;/td&gt;
&lt;td&gt;3 / 15&lt;/td&gt;
&lt;td&gt;15.04K / Unlimited&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The 31B gets significantly more token throughput on the free tier. Both work for experimentation and personal assistant use cases.&lt;/p&gt;
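&lt;p&gt;At 3 requests per minute, any batch script needs client-side pacing. A minimal throttle sketch (the 3 RPM figure comes from the table above; &lt;code&gt;RpmThrottle&lt;/code&gt; and &lt;code&gt;call_gemma&lt;/code&gt; are illustrative names, adjust the cap to your tier):&lt;/p&gt;

```python
import time

class RpmThrottle:
    """Block until a request slot is free, given a requests-per-minute cap."""

    def __init__(self, rpm: int):
        self.min_interval = 60.0 / rpm
        self.last_request = 0.0

    def wait(self) -> float:
        """Sleep if needed to honor the cap; return the seconds actually slept."""
        now = time.monotonic()
        slept = max(0.0, self.last_request + self.min_interval - now)
        if slept:
            time.sleep(slept)
        self.last_request = time.monotonic()
        return slept

# throttle = RpmThrottle(rpm=3)
# for prompt in prompts:
#     throttle.wait()        # never more than 3 calls per minute
#     call_gemma(prompt)     # hypothetical API call
```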

&lt;h2&gt;
  
  
  The General Pattern: Adding Any Unsupported Model
&lt;/h2&gt;

&lt;p&gt;This same approach works for &lt;strong&gt;any model&lt;/strong&gt; from &lt;strong&gt;any provider&lt;/strong&gt; that OpenClaw doesn't ship in its catalog:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Verify the API key works&lt;/strong&gt; outside OpenClaw (use curl)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Find the right &lt;code&gt;api&lt;/code&gt; protocol&lt;/strong&gt; from &lt;code&gt;openclaw config schema&lt;/code&gt; — options include:

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;google-generative-ai&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;openai-completions&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;openai-responses&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;anthropic-messages&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ollama&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;bedrock-converse-stream&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;github-copilot&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;azure-openai-responses&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add the &lt;code&gt;models.providers.&amp;lt;name&amp;gt;&lt;/code&gt; block&lt;/strong&gt; with &lt;code&gt;id&lt;/code&gt;, &lt;code&gt;api&lt;/code&gt;, &lt;code&gt;contextWindow&lt;/code&gt;, &lt;code&gt;maxTokens&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validate and restart&lt;/strong&gt;: &lt;code&gt;openclaw config validate &amp;amp;&amp;amp; openclaw gateway restart&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
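&lt;p&gt;As a concrete instance of step 3, a provider block for a hypothetical OpenAI-compatible endpoint could look like this (every name and value here is illustrative, using the same fields as the Gemma 4 patch above):&lt;/p&gt;

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "my-provider": {
        "baseUrl": "https://api.example.com/v1",
        "models": [
          {
            "id": "my-custom-model",
            "name": "My Custom Model",
            "api": "openai-completions",
            "input": ["text"],
            "contextWindow": 131072,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
```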

&lt;p&gt;That's it. Sixty seconds from "unknown model" to a working agent.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Links:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kaggle.com/models?query=gemma-4&amp;amp;publisher=google" rel="noopener noreferrer"&gt;Gemma 4 on Kaggle&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/collections/google/gemma-4" rel="noopener noreferrer"&gt;Gemma 4 on Hugging Face&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ai.google.dev/gemma/docs/core/model_card_4" rel="noopener noreferrer"&gt;Gemma 4 Model Card&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.openclaw.ai/cli/models" rel="noopener noreferrer"&gt;OpenClaw Models CLI Docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ai.google.dev/gemma/docs/get_started" rel="noopener noreferrer"&gt;Get started with Gemma&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>gemma</category>
      <category>ai</category>
      <category>google</category>
    </item>
  </channel>
</rss>
