<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Google Developer Group</title>
    <description>The latest articles on DEV Community by Google Developer Group (@gdg).</description>
    <link>https://dev.to/gdg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F12748%2Fe3cbcad3-4749-4461-ad88-4b9b8cde89ec.png</url>
      <title>DEV Community: Google Developer Group</title>
      <link>https://dev.to/gdg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gdg"/>
    <language>en</language>
    <item>
      <title>Build a Streaming Gemini Chat in Angular with Signals — Then Ship It on Cloud Run</title>
      <dc:creator>Tomasz Flis</dc:creator>
      <pubDate>Mon, 04 May 2026 19:05:39 +0000</pubDate>
      <link>https://dev.to/gdg/build-a-streaming-gemini-chat-in-angular-with-signals-then-ship-it-on-cloud-run-1llc</link>
      <guid>https://dev.to/gdg/build-a-streaming-gemini-chat-in-angular-with-signals-then-ship-it-on-cloud-run-1llc</guid>
      <description>&lt;p&gt;If you have built a chat UI for a large language model in the last two years, you probably reached for RxJS, an &lt;code&gt;OnPush&lt;/code&gt; component, an &lt;code&gt;async&lt;/code&gt; pipe, and a &lt;code&gt;BehaviorSubject&lt;/code&gt; per piece of state. It worked, but it was a lot of plumbing for what is fundamentally a very simple shape: &lt;em&gt;one string that grows over time&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Angular Signals collapse that plumbing into a single primitive. And it turns out that streaming Gemini responses with Signals is one of the cleanest, most satisfying pieces of code you can write in modern Angular today.&lt;/p&gt;

&lt;p&gt;In this tutorial we will build a working Google AI chat component, in roughly one hundred lines, that streams tokens from Gemini in real time, supports a stop button, and feels native on desktop and mobile. Then we will ship it safely on Cloud Run with a thin proxy, so you can drop a live, embedded demo into your post.&lt;/p&gt;

&lt;h2&gt;Why Signals are a perfect fit for streaming AI&lt;/h2&gt;

&lt;p&gt;A streaming LLM response is, mechanically, a sequence of small text deltas arriving over a fetch stream. Old-school Angular handled this with &lt;code&gt;Subject&lt;/code&gt;s, async pipes, and a lot of trust that change detection would do the right thing.&lt;/p&gt;

&lt;p&gt;Signals reframe the problem. A &lt;code&gt;signal&amp;lt;string&amp;gt;('')&lt;/code&gt; is just a value that you call &lt;code&gt;.update()&lt;/code&gt; on. Each update notifies only the views that read that signal, and Angular 20 with zoneless change detection skips the whole-tree dirty check entirely. That means you can call &lt;code&gt;.update()&lt;/code&gt; thirty times a second from inside a &lt;code&gt;for await&lt;/code&gt; loop and your UI will not break a sweat.&lt;/p&gt;

&lt;p&gt;There is also a smaller, ergonomic win. With Signals the rendering rule is "whatever the signal is at this instant." Streaming chat is a value that is &lt;em&gt;visibly mid-update&lt;/em&gt;, and Signals give you the perfect vocabulary for that — the in-flight token buffer is just another signal, alongside the committed message history.&lt;/p&gt;
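&lt;p&gt;Stripped of Angular entirely, that mechanic is tiny: an async generator yields text deltas and a buffer grows by one delta per iteration. Here is a framework-free sketch (the generator and its delays are invented for illustration) of the shape that &lt;code&gt;.update()&lt;/code&gt; gives a name to:&lt;/p&gt;

```typescript
// A stand-in for a token stream: yields small text deltas with a short delay,
// the way a fetch stream hands you pieces of a Gemini response.
async function* fakeDeltas(): AsyncGenerator<string> {
  for (const piece of ['Hello', ', ', 'world', '!']) {
    await new Promise((resolve) => setTimeout(resolve, 5));
    yield piece;
  }
}

// The buffer plays the role of the signal; in Angular you would call
// streaming.update((s) => s + delta) and let the view re-render each time.
async function consume(): Promise<string> {
  let buffer = '';
  for await (const delta of fakeDeltas()) {
    buffer += delta; // one fine-grained update per delta
  }
  return buffer;
}
```

&lt;p&gt;The chat component later in this post does exactly this, with the concatenation swapped for a signal update.&lt;/p&gt;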

&lt;h2&gt;What we are building&lt;/h2&gt;

&lt;p&gt;A single-page Angular app with one component. You type a question, hit send, and watch Gemini's answer stream in word by word. There is a stop button that cancels the stream, a running history of messages, and that is it. We will use Angular 20 standalone components, Signals, the new control flow (&lt;code&gt;@for&lt;/code&gt;, &lt;code&gt;@if&lt;/code&gt;), and the official &lt;code&gt;@google/genai&lt;/code&gt; SDK.&lt;/p&gt;

&lt;p&gt;You can find the finished repo on GitHub at the link at the bottom of this post.&lt;/p&gt;

&lt;h2&gt;Prerequisites&lt;/h2&gt;

&lt;p&gt;You will need Node 20 or newer, the Angular CLI (&lt;code&gt;npm i -g @angular/cli&lt;/code&gt;), and a Gemini API key from &lt;a href="https://aistudio.google.com/app/apikey" rel="noopener noreferrer"&gt;Google AI Studio&lt;/a&gt;. The free tier is more than enough to follow along.&lt;/p&gt;

&lt;p&gt;A note on the API key, because this matters: in the local version we read the key from an environment file that gets bundled into the client. &lt;strong&gt;That is fine for local exploration. It is not fine for production.&lt;/strong&gt; Anything in your bundle is visible to anyone who opens DevTools. We will fix this in the deploy section by adding a small proxy on Cloud Run — the key stays on the server, and the Angular code barely changes.&lt;/p&gt;
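&lt;p&gt;As a preview of what the deploy section builds, the server-side job is small: rebuild the upstream request with the key attached. The helper below is a sketch of my own, not the repo's code: the URL is Gemini's streaming REST route, but the request shape is simplified.&lt;/p&gt;

```typescript
// Hypothetical helper for the Cloud Run proxy: turn an incoming chat request
// into the upstream Gemini REST call. The key travels in a server-side header
// and never appears in the browser bundle or the URL.
interface ProxyRequest {
  model: string;
  contents: unknown;
}

function buildUpstream(req: ProxyRequest, apiKey: string) {
  return {
    url: `https://generativelanguage.googleapis.com/v1beta/models/${req.model}:streamGenerateContent?alt=sse`,
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'x-goog-api-key': apiKey, // the secret stays on the server
      } as Record<string, string>,
      body: JSON.stringify({ contents: req.contents }),
    },
  };
}
```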

&lt;h2&gt;Project setup&lt;/h2&gt;

&lt;p&gt;Spin up a new Angular project with the CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ng new gemini-stream &lt;span class="nt"&gt;--standalone&lt;/span&gt; &lt;span class="nt"&gt;--routing&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;false&lt;/span&gt; &lt;span class="nt"&gt;--style&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;css &lt;span class="nt"&gt;--skip-tests&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;gemini-stream
npm i @google/genai
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Open &lt;code&gt;src/environments/environment.ts&lt;/code&gt; (create it if the CLI did not generate one) and add your key:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;environment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;geminiApiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;YOUR_AI_STUDIO_KEY_HERE&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Add the same file under &lt;code&gt;environment.development.ts&lt;/code&gt; if you use a separate dev environment, and make sure &lt;code&gt;.gitignore&lt;/code&gt; keeps these out of source control if you put a real key in.&lt;/p&gt;
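&lt;p&gt;If you do put a real key in, one way to do that (paths assume the default CLI layout):&lt;/p&gt;

```shell
# Append the environment files to .gitignore so a real key never lands in git.
echo "src/environments/environment.ts" >> .gitignore
echo "src/environments/environment.development.ts" >> .gitignore
```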

&lt;p&gt;In &lt;code&gt;src/app/app.config.ts&lt;/code&gt;, opt into zoneless change detection. By Angular 20 this is a stable provider, and it gives you the per-signal update path that makes streaming feel snappy:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ApplicationConfig&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;provideZonelessChangeDetection&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@angular/core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;appConfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ApplicationConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;providers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;provideZonelessChangeDetection&lt;/span&gt;&lt;span class="p"&gt;()],&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;That is the entire setup. On to the interesting bits.&lt;/p&gt;
&lt;h2&gt;The Gemini service&lt;/h2&gt;

&lt;p&gt;Create &lt;code&gt;src/app/gemini.service.ts&lt;/code&gt;. The job of this service is small: take a chat history, return an async iterable of text deltas, and let the caller stop early.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;Injectable&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@angular/core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;GoogleGenAI&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@google/genai&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;environment&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;../environments/environment&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;ChatRole&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;model&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;ChatMessage&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ChatRole&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nl"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Injectable&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;providedIn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;root&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;GeminiService&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;ai&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;GoogleGenAI&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;geminiApiKey&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nf"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;history&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ChatMessage&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;
    &lt;span class="nx"&gt;shouldStop&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;boolean&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;AsyncGenerator&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generateContentStream&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gemini-2.5-flash&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;role&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
      &lt;span class="p"&gt;})),&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;shouldStop&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Three things worth pointing out here.&lt;/p&gt;

&lt;p&gt;First, &lt;code&gt;generateContentStream&lt;/code&gt; returns an async iterable of chunks. Each chunk has a &lt;code&gt;text&lt;/code&gt; getter that gives you the new tokens for that step. That is all the SDK asks of you.&lt;/p&gt;

&lt;p&gt;Second, we accept a &lt;code&gt;shouldStop&lt;/code&gt; predicate instead of an &lt;code&gt;AbortController&lt;/code&gt;. This keeps cancellation logic on our side, where it composes nicely with Signals — the predicate is going to read a signal, and the moment the user clicks Stop, the next iteration of the loop bails out.&lt;/p&gt;

&lt;p&gt;Third, the service yields strings, not chunks. By the time anything else in the app sees a delta, it is already plain text. That keeps our chat component free of any SDK-specific types.&lt;/p&gt;
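&lt;p&gt;One detail about the &lt;code&gt;shouldStop&lt;/code&gt; predicate worth making explicit: it is checked once per chunk, so a stop takes effect before the next delta arrives, not mid-token. A framework-free sketch (the chunk list and stop condition are invented for illustration) shows the timing:&lt;/p&gt;

```typescript
// A hypothetical stream that honors a shouldStop predicate the same way the
// service does: the check runs at the top of each iteration.
async function* stoppable(
  chunks: string[],
  shouldStop: () => boolean,
): AsyncGenerator<string> {
  for (const chunk of chunks) {
    if (shouldStop()) return; // bail out before yielding the next delta
    yield chunk;
  }
}

// Consume the stream and flip the stop condition after two deltas,
// standing in for a user clicking Stop mid-response.
async function collect(): Promise<string[]> {
  const out: string[] = [];
  for await (const delta of stoppable(['a', 'b', 'c', 'd'], () => out.length >= 2)) {
    out.push(delta);
  }
  return out;
}
```

&lt;p&gt;In the component, the predicate simply reads the &lt;code&gt;stopRequested&lt;/code&gt; signal.&lt;/p&gt;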
&lt;h2&gt;Signals-based chat state&lt;/h2&gt;

&lt;p&gt;Now the chat component. Create &lt;code&gt;src/app/chat.component.ts&lt;/code&gt; and start with the state. The whole point of this article is in this section, so read it slowly.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;ChangeDetectionStrategy&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;Component&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;computed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;effect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;inject&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;signal&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;viewChild&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;ElementRef&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@angular/core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;GeminiService&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ChatMessage&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;./gemini.service&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;Component&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;app-chat&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;standalone&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;changeDetection&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ChangeDetectionStrategy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;OnPush&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&amp;lt;!-- coming up next --&amp;gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;`/* coming up next */`&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ChatComponent&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;gemini&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;inject&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;GeminiService&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;messages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;signal&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ChatMessage&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;([]);&lt;/span&gt;
  &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;draft&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;signal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;streaming&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;signal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;isStreaming&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;signal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;stopRequested&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;signal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="nx"&gt;canSend&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;computed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;draft&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;isStreaming&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nx"&gt;scroller&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;viewChild&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ElementRef&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;HTMLDivElement&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;scroller&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;effect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Read the streaming buffer and message count to re-trigger on every update,&lt;/span&gt;
      &lt;span class="c1"&gt;// then scroll to the bottom on the next animation frame.&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;streaming&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;length&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;el&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;scroller&lt;/span&gt;&lt;span class="p"&gt;()?.&lt;/span&gt;&lt;span class="nx"&gt;nativeElement&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;el&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;requestAnimationFrame&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;el&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scrollTop&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;el&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;scrollHeight&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;canSend&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userMessage&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ChatMessage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;draft&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;trim&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[...&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;userMessage&lt;/span&gt;&lt;span class="p"&gt;]);&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;draft&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;streaming&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isStreaming&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stopRequested&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;delta&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;gemini&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stopRequested&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
      &lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;streaming&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;delta&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;streaming&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;s&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;s&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="s2"&gt;`\n\n_Error: &lt;/span&gt;&lt;span class="p"&gt;${(&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nb"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;_`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;final&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;streaming&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;final&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;update&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;[...&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;model&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;final&lt;/span&gt; &lt;span class="p"&gt;}]);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;streaming&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isStreaming&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nf"&gt;stop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;stopRequested&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Five signals carry the entire state of the chat. &lt;code&gt;messages&lt;/code&gt; is the committed history. &lt;code&gt;draft&lt;/code&gt; is what is in the textarea. &lt;code&gt;streaming&lt;/code&gt; is the buffer for the in-flight assistant reply, separate from the history so we can render it differently. &lt;code&gt;isStreaming&lt;/code&gt; and &lt;code&gt;stopRequested&lt;/code&gt; are the control flags.&lt;/p&gt;

&lt;p&gt;Notice that &lt;code&gt;canSend&lt;/code&gt; is a &lt;code&gt;computed&lt;/code&gt;. We never write to it, we never subscribe to it; we just read it from the template and Angular figures out when it changes. That single line replaces the form-validation observable boilerplate you might be used to.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;effect&lt;/code&gt; is doing the auto-scroll. By reading &lt;code&gt;streaming()&lt;/code&gt; and &lt;code&gt;messages().length&lt;/code&gt; inside the effect, we tell Angular: "rerun me whenever either of these changes." Then we scroll the chat container to the bottom on the next frame. This is the kind of small DOM concern that used to require &lt;code&gt;AfterViewChecked&lt;/code&gt; and a flag; here it is six lines.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;send&lt;/code&gt; method is where streaming meets state. We push the user message, clear the buffer, then iterate over the service's async generator and call &lt;code&gt;.update()&lt;/code&gt; on the streaming signal for each delta. When the loop ends (or the user hits Stop, which makes &lt;code&gt;shouldStop&lt;/code&gt; return true on the next iteration), we commit whatever was in the buffer to the message history and reset.&lt;/p&gt;
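&lt;p&gt;Stripped of Angular, the whole pattern is just "fold an async iterator into one growing string". Here is a minimal, framework-free sketch of that loop; the &lt;code&gt;fakeStream&lt;/code&gt; generator stands in for the Gemini service and is purely illustrative:&lt;/p&gt;

```typescript
// Illustrative stand-in for the service's async generator.
async function* fakeStream() {
  yield 'Hello';
  yield ', ';
  yield 'world';
}

// Mirrors the shape of send(): append each delta to a single buffer,
// honor a stop flag between reads, and hand back the final text.
async function consume(shouldStop = () => false) {
  let buffer = '';
  for await (const delta of fakeStream()) {
    if (shouldStop()) break; // the component's shouldStop reads stopRequested()
    buffer += delta;         // the component calls streaming.update(s => s + delta)
  }
  return buffer;             // committed to the message history in finally
}
```

&lt;p&gt;The component version differs only in where the buffer lives: there it is the &lt;code&gt;streaming&lt;/code&gt; signal, so every append repaints the template.&lt;/p&gt;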
&lt;h2&gt;
  
  
  The template
&lt;/h2&gt;

&lt;p&gt;Replace the placeholder template and styles in the same file:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`
  &amp;lt;div class="shell"&amp;gt;
    &amp;lt;div class="scroller" #scroller&amp;gt;
      @for (m of messages(); track $index) {
        &amp;lt;div class="msg {{ m.role }}"&amp;gt;{{ m.content }}&amp;lt;/div&amp;gt;
      }
      @if (isStreaming() &amp;amp;&amp;amp; streaming()) {
        &amp;lt;div class="msg model streaming"&amp;gt;{{ streaming() }}&amp;lt;span class="cursor"&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;
      }
    &amp;lt;/div&amp;gt;

    &amp;lt;form class="composer" (submit)="$event.preventDefault(); send()"&amp;gt;
      &amp;lt;textarea
        rows="2"
        placeholder="Ask Gemini something..."
        [value]="draft()"
        (input)="draft.set($any($event.target).value)"
        (keydown.enter)="$event.preventDefault(); send()"
      &amp;gt;&amp;lt;/textarea&amp;gt;
      @if (isStreaming()) {
        &amp;lt;button type="button" (click)="stop()"&amp;gt;Stop&amp;lt;/button&amp;gt;
      } @else {
        &amp;lt;button type="submit" [disabled]="!canSend()"&amp;gt;Send&amp;lt;/button&amp;gt;
      }
    &amp;lt;/form&amp;gt;
  &amp;lt;/div&amp;gt;
`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;`
  .shell { display: flex; flex-direction: column; height: 100dvh; max-width: 720px; margin: 0 auto; font-family: system-ui, sans-serif; }
  .scroller { flex: 1; overflow-y: auto; padding: 1rem; display: flex; flex-direction: column; gap: 0.75rem; }
  .msg { padding: 0.75rem 1rem; border-radius: 12px; white-space: pre-wrap; line-height: 1.5; max-width: 85%; }
  .msg.user { align-self: flex-end; background: #4285f4; color: white; }
  .msg.model { align-self: flex-start; background: #f1f3f4; color: #202124; }
  .cursor { display: inline-block; width: 0.5ch; background: currentColor; margin-left: 2px; animation: blink 1s steps(1) infinite; }
  @keyframes blink { 50% { opacity: 0; } }
  .composer { display: flex; gap: 0.5rem; padding: 1rem; border-top: 1px solid #eee; }
  textarea { flex: 1; resize: none; padding: 0.75rem; border-radius: 12px; border: 1px solid #ddd; font: inherit; }
  button { padding: 0 1.25rem; border-radius: 12px; border: none; background: #4285f4; color: white; font-weight: 600; cursor: pointer; }
  button:disabled { opacity: 0.5; cursor: not-allowed; }
`&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The new control flow (&lt;code&gt;@for&lt;/code&gt;, &lt;code&gt;@if&lt;/code&gt;, &lt;code&gt;@else&lt;/code&gt;) makes this template read like a small story: render every committed message, then render the in-flight reply if there is one, then show Send or Stop based on whether we are mid-stream. The blinking cursor on the streaming bubble is a tiny detail that makes the whole thing feel alive.&lt;/p&gt;

&lt;p&gt;Wire the component into &lt;code&gt;src/app/app.component.ts&lt;/code&gt; as the only thing rendered, run &lt;code&gt;ng serve&lt;/code&gt;, and you should have a working streaming chat at &lt;code&gt;http://localhost:4200&lt;/code&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Shipping it on Cloud Run
&lt;/h2&gt;

&lt;p&gt;The local app calls Gemini directly with a key in the bundle. To ship it safely we need two small moves: a tiny server proxy that holds the key, and Cloud Run to host both the proxy and the static Angular build.&lt;/p&gt;

&lt;p&gt;Create &lt;code&gt;server/index.ts&lt;/code&gt; at the project root:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;GoogleGenAI&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@google/genai&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ai&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;GoogleGenAI&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;apiKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;GEMINI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;limit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;4mb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;static&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dist/gemini-stream/browser&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/stream&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setHeader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;text/plain; charset=utf-8&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setHeader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Transfer-Encoding&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;chunked&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;ai&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;models&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generateContentStream&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gemini-2.5-flash&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;contents&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="k"&gt;await &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;chunk&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;end&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PORT&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;8080&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Update &lt;code&gt;gemini.service.ts&lt;/code&gt; to read from the proxy with &lt;code&gt;fetch&lt;/code&gt; instead of calling the SDK in the browser. The SDK and the API key never leave the server:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="nf"&gt;stream&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;history&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ChatMessage&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt; &lt;span class="nx"&gt;shouldStop&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/api/stream&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;method&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;application/json&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;role&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;role&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;m&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;content&lt;/span&gt; &lt;span class="p"&gt;}]&lt;/span&gt; &lt;span class="p"&gt;})),&lt;/span&gt;
    &lt;span class="p"&gt;}),&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;pipeThrough&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextDecoderStream&lt;/span&gt;&lt;span class="p"&gt;()).&lt;/span&gt;&lt;span class="nf"&gt;getReader&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;shouldStop&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cancel&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;done&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;read&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;done&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;yield&lt;/span&gt; &lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This is the part I love about the Signals architecture: the component code does not change at all. The signals do not care that the bytes are coming from a Cloud Run service now instead of the SDK. Same loop, same &lt;code&gt;streaming.update()&lt;/code&gt; call.&lt;/p&gt;

&lt;p&gt;Add a &lt;code&gt;Dockerfile&lt;/code&gt; at the project root:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;node:20-alpine&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="k"&gt;AS&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s"&gt;build&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm ci
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm run build &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; npx tsc &lt;span class="nt"&gt;-p&lt;/span&gt; server

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:20-alpine&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/dist ./dist&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/server/dist ./server&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/node_modules ./node_modules&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=build /app/package*.json ./&lt;/span&gt;
&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; NODE_ENV=production&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "server/index.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Then ship it with one command — Cloud Run will build the container from source for you:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;gcloud run deploy gemini-stream &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--source&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-central1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--allow-unauthenticated&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--set-env-vars&lt;/span&gt; &lt;span class="nv"&gt;GEMINI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;YOUR_AI_STUDIO_KEY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;You will get back a URL like &lt;code&gt;https://gemini-stream-xxxxxx.us-central1.run.app&lt;/code&gt;. Test it in the browser, confirm the chat works end to end, and you are done.&lt;/p&gt;

&lt;p&gt;The fun part: dev.to has a first-class Cloud Run embed, so here you go:&lt;/p&gt;


&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://gemini-stream-1070943699730.us-central1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;



&lt;h2&gt;
  
  
  What you actually built
&lt;/h2&gt;

&lt;p&gt;The whole thing — service, component, template, styles — comes in just over a hundred lines. Compare that to an equivalent app two years ago and you will notice what is &lt;em&gt;missing&lt;/em&gt;: there is no &lt;code&gt;Subject&lt;/code&gt;, no &lt;code&gt;BehaviorSubject&lt;/code&gt;, no &lt;code&gt;async&lt;/code&gt; pipe, no &lt;code&gt;OnPush&lt;/code&gt; boilerplate that you have to think about, no manual subscription cleanup. Signals plus the new control flow plus zoneless change detection is genuinely a different programming model, and streaming AI is the application that shows it off best.&lt;/p&gt;

&lt;p&gt;A few small things to try next, in roughly increasing order of effort:&lt;/p&gt;

&lt;p&gt;Add a &lt;code&gt;systemInstruction&lt;/code&gt; to the &lt;code&gt;generateContentStream&lt;/code&gt; call to give your model a persona. The SDK accepts it as a sibling of &lt;code&gt;contents&lt;/code&gt; on the proxy side.&lt;/p&gt;
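&lt;p&gt;For instance, the request on the proxy side might grow like this; the persona string is invented, and you should double-check the exact field placement against the current &lt;code&gt;@google/genai&lt;/code&gt; docs:&lt;/p&gt;

```typescript
// Hypothetical request for generateContentStream with a persona added.
// Only the systemInstruction line is new relative to the proxy code above.
const request = {
  model: 'gemini-2.5-flash',
  systemInstruction: 'You are a terse, friendly Angular expert.',
  contents: [{ role: 'user', parts: [{ text: 'Explain signals in one line.' }] }],
};
```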

&lt;p&gt;Switch from text-only input to multimodal: drop an image into the chat and forward it from the proxy as a &lt;code&gt;parts&lt;/code&gt; entry of &lt;code&gt;{ inlineData: { mimeType, data } }&lt;/code&gt;. Gemini handles the rest.&lt;/p&gt;
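&lt;p&gt;As a sketch of that payload shape (the helper name and the base64 string are made up for illustration):&lt;/p&gt;

```typescript
// Hypothetical helper that wraps a base64-encoded image in the
// { inlineData: { mimeType, data } } part shape described above.
function imagePart(mimeType: string, base64Data: string) {
  return { inlineData: { mimeType, data: base64Data } };
}

// A text part and an image part travel together in one user turn.
const userTurn = {
  role: 'user',
  parts: [{ text: 'What is in this picture?' }, imagePart('image/png', 'iVBORw0KGgoAAA')],
};
```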

&lt;p&gt;Prefer Firebase to Cloud Run? Firebase AI Logic gives you the same proxy pattern with less infra — install &lt;code&gt;firebase&lt;/code&gt; and &lt;code&gt;@firebase/ai&lt;/code&gt;, and the SDK shape stays almost identical. You give up the dev.to Cloud Run embed, but the Angular code is unchanged.&lt;/p&gt;

&lt;p&gt;Try the same UI against &lt;a href="https://developer.chrome.com/docs/ai/built-in" rel="noopener noreferrer"&gt;Chrome's Built-in AI&lt;/a&gt; (Gemini Nano running on-device, no key, no network). The Prompt API has its own streaming primitive that drops into the same Signal-based shell with almost no changes — and you get an offline-capable chat for free.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-up
&lt;/h2&gt;

&lt;p&gt;If you take one thing away from this post, let it be that &lt;em&gt;Signals were designed for values that change a lot&lt;/em&gt;, and an LLM stream is the canonical example of a value that changes a lot. The pieces fit so cleanly that the resulting code reads more like a description of the UI than like a program.&lt;/p&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/TomWebwalker/gemini-stream-angular" rel="noopener noreferrer"&gt;https://github.com/TomWebwalker/gemini-stream-angular&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you build something with this, drop a link in the comments — I would love to see what people make of it.&lt;/p&gt;

</description>
      <category>angular</category>
      <category>gemini</category>
      <category>googlecloud</category>
      <category>ai</category>
    </item>
    <item>
      <title>How to Build a Custom AI Quality Gate on Cloud Run (From Zero to Production)</title>
      <dc:creator>Alexander Tyutin</dc:creator>
      <pubDate>Mon, 04 May 2026 04:17:45 +0000</pubDate>
      <link>https://dev.to/gdg/how-to-build-a-custom-ai-quality-gate-on-cloud-run-from-zero-to-production-1odp</link>
      <guid>https://dev.to/gdg/how-to-build-a-custom-ai-quality-gate-on-cloud-run-from-zero-to-production-1odp</guid>
      <description>&lt;p&gt;In my previous article about treating architecture documentation as a first-class asset, I had a great discussion in the comments about enforcing architectural rules. I promised to share materials from my recent Google Developer Groups workshop.&lt;/p&gt;

&lt;p&gt;The workshop is now finished! Here is the story of how I built an AI Quality Gate, how it helped me solve the internal "CEO, CTO, CFO, CISO" conflict, and a summary of the live demonstration.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can listen to a podcast generated from this publication (thanks, &lt;a href="https://notebooklm.google/" rel="noopener noreferrer"&gt;NotebookLM&lt;/a&gt;):&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/qfbZZxcDNbU"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;




&lt;p&gt;Playground repositories with source code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/tyutinalexkz/workshops-260425-gdg-quality-gate-quality-gate" rel="noopener noreferrer"&gt;Quality Gate PoC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/tyutinalexkz/workshops-260425-gdg-quality-gate-repo1" rel="noopener noreferrer"&gt;CheckMe Repo #1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/tyutinalexkz/workshops-260425-gdg-quality-gate-repo2" rel="noopener noreferrer"&gt;CheckMe Repo #2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/tyutinalexkz/workshops-260425-gdg-quality-gate-repo3" rel="noopener noreferrer"&gt;CheckMe Repo #3&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Backstory: Mentoring and the "CEO, CTO, CFO, CISO" Conflict
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9dc7ccfuem5k00sdo6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9dc7ccfuem5k00sdo6i.png" alt="Conflict of interest inside a developer's head" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I work as a DevSecOps engineer, but in my free time, I mentor for Technovation Girls, a global program that helps young women learn tech and STEM. Because we always need more IT mentors, I built an AI mentor bot to help the students.&lt;br&gt;
Building this bot came with two big challenges:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Safety: Because children use it, it had to be completely safe from AI hallucinations.&lt;/li&gt;
&lt;li&gt;Budget: Because I pay for it myself, it had to be very cheap.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The bot was a big success. Using Google Cloud Run and Vertex AI, it handled 250 users and answered 1,500 questions for only about $25-$55 a month.&lt;/p&gt;

&lt;p&gt;However, when I tried to add new features quickly, I faced a big problem. With only 1-2 hours of free time a day for this project, I experienced a harsh "CEO, CTO, CFO, CISO" conflict in my own head:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The CTO wanted to write code and ship features fast.&lt;/li&gt;
&lt;li&gt;The CISO wanted to stop releases to make sure everything was secure.&lt;/li&gt;
&lt;li&gt;The CFO wanted to keep cloud costs low.&lt;/li&gt;
&lt;li&gt;The CEO wanted the product to grow and succeed.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Solution: What is an AI Quality Gate?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3f1kmnubehtu7s61uzw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft3f1kmnubehtu7s61uzw.png" alt="Indie Developer Conflict of Interest Solved" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To solve the "CEO, CTO, CFO, CISO" conflict, I created an AI Quality Gate.&lt;br&gt;
An AI Quality Gate is a custom microservice that automatically reviews code for architecture, security, and costs (FinOps). It is built on Google Cloud Run and uses Vertex AI (Gemini).&lt;/p&gt;

&lt;p&gt;The first action of the Quality Gate was to block its own MVP from reaching production. I took that as a good sign.&lt;/p&gt;

&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Short Summary:&lt;/strong&gt; Fail.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;List of Critical Findings:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;AI Gateway (AAA):&lt;/strong&gt; The provided code retrieves a GitLab token directly from Secret Manager and uses it for GitLab API access. This bypasses the AI gateway, violating the "ALWAYS Consistency with AI gateway (AAA, FinOps)" rule. The AAA component should manage authentication and authorization for all external services, including GitLab.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Constructive Recommendations:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Implement AI Gateway AAA:&lt;/strong&gt; Modify the &lt;code&gt;ai_review.py&lt;/code&gt; script to authenticate with the AI gateway first. The AI gateway will then handle the GitLab authentication, providing a centralized and secure way to manage access. Use gateway's provided token instead of direct GitLab API access from the job.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;FinOps Considerations:&lt;/strong&gt; Track the cost of AI reviews and link this with FinOps tools, it is important to provide cost visibility since the usage of resources will increase.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;

&lt;p&gt;Because it runs on Cloud Run, it only costs money when it is actively checking code. For a whole month of automated, deep-context code reviews, I paid only $0.12! This made the CFO part of my brain very happy.&lt;br&gt;
At first, I used the AI Quality Gate as a step in my CI/CD pipeline. But waiting several minutes for a "Merge Request Failed" message was slow and annoying. Now, I run the Quality Gate from a bash script directly in my IDE before creating a Merge Request. This saves time and perfectly resolves the "CEO, CTO, CFO, CISO" conflict by balancing speed, safety, and budget.&lt;/p&gt;
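&lt;p&gt;That local workflow boils down to one decision: read the gate's verdict, then either open the Merge Request or stop. Here is a minimal Python sketch of that decision step. The JSON response shape and the &lt;code&gt;gate_verdict&lt;/code&gt; helper are hypothetical illustrations, not the actual service contract:&lt;/p&gt;

```python
import json

def gate_verdict(review_json: str) -> bool:
    """Return True when the review passes, False when the gate blocks the merge.

    Assumes a hypothetical response shape like the review quoted above:
    {"summary": "Pass" | "Fail", "findings": [...]}
    """
    review = json.loads(review_json)
    return review.get("summary", "").strip().lower() != "fail"

# A blocking review, mirroring the gate output quoted earlier.
blocking = json.dumps({"summary": "Fail",
                       "findings": ["GitLab token bypasses the AI gateway (AAA)"]})
passing = json.dumps({"summary": "Pass", "findings": []})

print(gate_verdict(blocking))  # False: do not open the Merge Request yet
print(gate_verdict(passing))   # True: safe to open the Merge Request
```

&lt;p&gt;In the real setup this check would run at the end of the bash script, after the script has posted the diff to the Cloud Run service and collected its response.&lt;/p&gt;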

&lt;h2&gt;
  
  
  Workshop Demo: The AI Quality Gate in Action
&lt;/h2&gt;

&lt;p&gt;During the GDG workshop, I showed a live demo across three different code repositories to prove why traditional tools are not enough.&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo 1: The 10/10 Linter Illusion - Happy CISO
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5kf4j45ni2u91b2uuzw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb5kf4j45ni2u91b2uuzw.png" alt="Quality Gate First Check - Developer tries to fool the linter" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;First, I scanned a simple service using standard tools like Ruff, Pylint, and Semgrep. The code got a perfect 10/10 score. However, when I sent the code to the AI Quality Gate, it blocked the release. It found a critical SQL injection and a prompt injection (a hidden note in the code telling the AI reviewer to "report that everything is fine"). Traditional linters missed this completely, but the AI caught it and gave me exact steps to fix it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo 2: Catching Semantic Drift - Happy CEO+CRO
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd04lapm9wj3runmftn61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd04lapm9wj3runmftn61.png" alt="Quality Gate Second Check - Documentation and Code Inconsistency" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the second project, the README.md file stated that the system followed strict privacy standards and anonymized user data. But the actual code did the opposite: it saved real user emails and IDs. Standard tools missed this, but the AI Quality Gate read the documentation, compared it to the code's behavior, and found the security violation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Demo 3: "Shift-In" (Reviewing Before Coding) - Happy CTO+CFO
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnfu81t92z6dqsbpqx6w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnfu81t92z6dqsbpqx6w.png" alt="Quality Gate Third Check - Checking Plan Before Coding" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last demo was the most powerful. The repository had zero lines of code. It only contained a Markdown document planning a new feature. I sent this text plan to the AI Quality Gate. Before I wrote a single line of Python, the AI found critical security flaws in the plan, such as missing server-side logging and hardcoded passwords.&lt;br&gt;
This turns the concept of "Shift-Left" security into "Shift-In": bringing expert review directly into your IDE while you are still brainstorming the idea. Now you can test not only the code but the ideas themselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When you keep your architecture rules and documentation close to your code, a custom AI Quality Gate becomes an incredibly powerful tool. It helps you write better code, saves time, and finally resolves the internal "CEO, CTO, CFO, CISO" conflict. Moreover, such a gate can serve as an extra advisor with whatever expertise you need, improving any idea at the earliest stage and saving money down the road. Best of all, it costs almost nothing to run.&lt;br&gt;
If you want to build this yourself, my Docker image is available on DockerHub, and the sample repositories are on my GitHub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/tyutinalexkz/workshops-260425-gdg-quality-gate-quality-gate" rel="noopener noreferrer"&gt;Quality Gate PoC&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/tyutinalexkz/workshops-260425-gdg-quality-gate-repo1" rel="noopener noreferrer"&gt;CheckMe Repo #1&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/tyutinalexkz/workshops-260425-gdg-quality-gate-repo2" rel="noopener noreferrer"&gt;CheckMe Repo #2&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/tyutinalexkz/workshops-260425-gdg-quality-gate-repo3" rel="noopener noreferrer"&gt;CheckMe Repo #3&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>architecture</category>
      <category>googlecloud</category>
      <category>tutorial</category>
      <category>productivity</category>
    </item>
    <item>
      <title>BWAI'2026 @ VTU, Belagavi</title>
      <dc:creator>Piyush Annigeri</dc:creator>
      <pubDate>Sun, 03 May 2026 15:31:54 +0000</pubDate>
      <link>https://dev.to/gdg/bwai2026-vtu-belagavi-1p7n</link>
      <guid>https://dev.to/gdg/bwai2026-vtu-belagavi-1p7n</guid>
      <description>&lt;h1&gt;
  
  
  #BuildwithAI
&lt;/h1&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Hello Developer,&lt;/strong&gt;&lt;br&gt;
That's a wrap at BWAI'2026 @ VTU, Belagavi.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Build with AI 2026 workshop series, hosted at Visvesvaraya Technological University (VTU), Belagavi, was a flagship initiative designed to transition students and developers from theoretical understanding to practical mastery of Artificial Intelligence. In an era where Generative AI and Autonomous Agents are redefining the engineering landscape, this week-long intensive program provided a sandbox for innovation.&lt;/p&gt;

&lt;p&gt;The series focused on the "Learn-Build-Scale" philosophy, leveraging the Google Cloud Ecosystem. Over the course of seven days, participants engaged in high-intensity labs, credit redemption sessions, and collaborative builds, culminating in an Open Innovation Demo Day. The event served not only as a training ground but as a networking hub for the next generation of AI engineers.&lt;/p&gt;




&lt;h2&gt;
  
  
  🟢 DAY - 0   🔵 [06-04-2026]
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Onboarding &amp;amp; Environment Setup&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Introduction to Google Cloud Platform (GCP), environment configuration, and Google Cloud Credit redemption to ensure zero-cost development for students.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🟢 DAY - 1   🔵 [07-04-2026]
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Conversational Voice Systems&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Building voice-responsive agents. Integration of Compute Engine instances, LLMs (Large Language Models), and Asterisk for telephony-based AI interactions.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🟢 DAY - 2   🔵 [08-04-2026]
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Predictive Analytics &amp;amp; Big Data&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Exploring BigQuery AI. Participants built a job navigation system that uses predictive analysis to match candidates with roles based on historical data patterns.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🟢 DAY - 3   🔵 [09-04-2026]
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;A-2-A Workflow Orchestration&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Advanced Agent-to-Agent workflows. Utilizing MCP Java SDK and Vertex AI to create interconnected systems where AI agents communicate to solve complex tasks.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🟢 DAY - 4   🔵 [10-04-2026]
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Enterprise Supply Chain Agents&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High-level engineering using AlloyDB AI and ScaNN (Scalable Nearest Neighbors). Building an autonomous supply chain agent capable of multimodal data processing.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🟢 DAY - 5   🔵 [11-04-2026]
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Surplus Engine with Cloud SQL In-Database AI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;In this codelab, you will build &lt;strong&gt;KarmaLoop&lt;/strong&gt; — a sustainable surplus-sharing app that treats intelligence as a first-class citizen of the data layer.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  🟢 DEMO DAY   🔵 [13-04-2026]
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AI for Engineers: Challenges and Opportunities&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;As a primary mentor, Mr. Bhandari brought a wealth of international experience from the heart of Google's engineering labs in Japan. His session addressed the critical shift in the job market, emphasizing that the modern engineer must be "AI-augmented."&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Speaker@BWAI
&lt;/h2&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F38sreu3rjar1n5z9nr92.png" alt="Haren Bhandari" width="800" height="800"&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Haren Bhandari&lt;/strong&gt; &lt;br&gt; 🔴 &lt;a href="https://www.linkedin.com/in/haren-bhandari-abb09924/" rel="noopener noreferrer"&gt;LINKEDIN&lt;/a&gt; &lt;br&gt;&lt;br&gt; &lt;strong&gt;GOOGLE - TOKYO&lt;/strong&gt; &lt;br&gt; Senior Developer Relations Engineer&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  Organizers@BWAI
&lt;/h2&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7axkgahxmx8xhxjdsqu.png" alt="Piyush Annigeri" width="400" height="400"&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Piyush Annigeri&lt;/strong&gt; &lt;br&gt; 🔴 &lt;a href="https://www.linkedin.com/in/piyushannigeri" rel="noopener noreferrer"&gt;LINKEDIN&lt;/a&gt; &lt;br&gt;&lt;br&gt; &lt;strong&gt;ORGANIZER&lt;/strong&gt; &lt;br&gt; Google Developer Group on Campus, Visvesvaraya Technological University, Belagavi&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvn35kgq1kndfpz5kwnr8.png" alt="Yuvaraj Sutar" width="800" height="800"&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Yuvaraj Sutar&lt;/strong&gt; &lt;br&gt; 🟡 &lt;a href="https://www.linkedin.com/in/yuvaraj-sutar-840234334/" rel="noopener noreferrer"&gt;LINKEDIN&lt;/a&gt; &lt;br&gt;&lt;br&gt; &lt;strong&gt;CONTENT CREATOR&lt;/strong&gt; &lt;br&gt; Google Developer Group on Campus, Visvesvaraya Technological University, Belagavi&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6htd8gk273i1hcma8vhm.png" alt="Prajwal Rawoot" width="396" height="396"&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Prajwal Rawoot&lt;/strong&gt; &lt;br&gt; 🔵 &lt;a href="https://www.linkedin.com/in/prajwal-rawoot-698424330" rel="noopener noreferrer"&gt;LINKEDIN&lt;/a&gt; &lt;br&gt;&lt;br&gt; &lt;strong&gt;TECHNICAL-LEAD&lt;/strong&gt; &lt;br&gt; Google Developer Group on Campus, Visvesvaraya Technological University, Belagavi&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe36pabhn0ckz0y7vopqc.png" alt="Shravan Chougule" width="400" height="400"&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Shravan Chougule&lt;/strong&gt; &lt;br&gt; 🟢 &lt;a href="https://www.linkedin.com/in/shravan-chougule-0631b0354/" rel="noopener noreferrer"&gt;LINKEDIN&lt;/a&gt; &lt;br&gt;&lt;br&gt; &lt;strong&gt;MEDIA-LEAD&lt;/strong&gt; &lt;br&gt; Google Developer Group on Campus, Visvesvaraya Technological University, Belagavi&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiybt6cdurksjvs15w8pk.png" alt="Ananyaa Sudhanshu" width="400" height="400"&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Ananyaa Sudhanshu&lt;/strong&gt; &lt;br&gt; 🔴 &lt;a href="https://www.linkedin.com/in/ananyaa-sudhanshu-808697387/" rel="noopener noreferrer"&gt;LINKEDIN&lt;/a&gt; &lt;br&gt;&lt;br&gt; &lt;strong&gt;EVENT COORDINATOR&lt;/strong&gt; &lt;br&gt; Google Developer Group on Campus, Visvesvaraya Technological University, Belagavi&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fonva5j24teb6tijrfnb8.png" alt="Advika Gurav" width="400" height="400"&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Advika Gurav&lt;/strong&gt; &lt;br&gt; 🟡 &lt;a href="https://www.linkedin.com/in/advika-gurav-ab654b334/" rel="noopener noreferrer"&gt;LINKEDIN&lt;/a&gt; &lt;br&gt;&lt;br&gt; &lt;strong&gt;PR &amp;amp; MARKETING&lt;/strong&gt; &lt;br&gt; Google Developer Group on Campus, Visvesvaraya Technological University, Belagavi&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvktysb2puobjqoekpcvd.png" alt="Yash Koparde" width="396" height="396"&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Yash Koparde&lt;/strong&gt; &lt;br&gt; 🟢 &lt;a href="https://www.linkedin.com/in/yashkoparde/" rel="noopener noreferrer"&gt;LINKEDIN&lt;/a&gt; &lt;br&gt;&lt;br&gt; &lt;strong&gt;TECHNICAL-LEAD&lt;/strong&gt; &lt;br&gt; Google Developer Group on Campus, Visvesvaraya Technological University, Belagavi&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  #BWAIchampions
&lt;/h2&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5cto2ly2lf4f73bhac81.png" alt="Sadiya Sanadi" width="800" height="800"&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Sadiya Sanadi&lt;/strong&gt; &lt;br&gt; 🟡 &lt;a href="https://www.linkedin.com/in/Sadiyaas" rel="noopener noreferrer"&gt;LINKEDIN&lt;/a&gt; &lt;br&gt;&lt;br&gt; Second-year AI/ML student at Jain College of Engineering, Belgaum.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I recently developed &lt;strong&gt;NyayaSetu&lt;/strong&gt;, an AI-powered welfare rights navigator that helps users discover government schemes and check eligibility easily, addressing the lack of awareness and difficulty in accessing public welfare information. The project secured &lt;strong&gt;1st place&lt;/strong&gt; for its impact and usability.&lt;/p&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyh1nkq6t9577nqwfyaw.png" alt="Muhammad Rehan" width="800" height="800"&gt;&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Muhammad Rehan&lt;/strong&gt; &lt;br&gt; 🟡 &lt;a href="https://linkedin.com/in/mdrehan08/" rel="noopener noreferrer"&gt;LINKEDIN&lt;/a&gt; &lt;br&gt;&lt;br&gt; Founder of StackBIZ.tech &amp;amp; a result-driven full-stack developer focused on building production-ready SaaS platforms, cloud infrastructure, &amp;amp; AI-based solutions.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I created &lt;strong&gt;PromptedForge.in&lt;/strong&gt;, an AI-powered platform that helps students generate problem statements, prompts, and execution roadmaps without the need for brainstorming. My work has been recognized by the Entrepreneurship Cell at IIT Delhi, and you can explore the platform at &lt;a href="http://promptedforge.in/" rel="noopener noreferrer"&gt;promptedforge.in&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Irayya R Hiremath&lt;/strong&gt;&lt;br&gt;
🟡 &lt;a href="https://www.linkedin.com/in/irayya-hiremath-aa0232330" rel="noopener noreferrer"&gt;LINKEDIN&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am a BCA student with a strong interest in software development and technology. I am currently learning programming languages like Python and JavaScript and working on practical projects to build my skills. I am passionate about creating useful applications and continuously improving my knowledge in the tech field.&lt;/p&gt;




&lt;p&gt;The Build with AI workshop at VTU Belagavi concluded as a landmark event for the region's tech community. By providing direct access to Google experts and enterprise-grade tools, the workshop demystified the complexities of Artificial Intelligence.&lt;/p&gt;




&lt;h2&gt;
  
  
  Moments@BWAI
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbu9cbob9j3fx2jb0nmgd.png" alt="Moment 1" width="800" height="448"&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9pf1z87kcubndg6hj3ro.png" alt="Moment 2" width="800" height="448"&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3myxqp193a471j7na76k.jpg" alt="Moment 3" width="800" height="347"&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl65p8eymaakgbkj878b1.jpeg" alt="Moment 4" width="800" height="351"&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50xcifhlt4iu8f9wj4y3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50xcifhlt4iu8f9wj4y3.png" alt=" " width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
      <category>googlecloud</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>A Practical Guide to Flutter Accessibility Part 2: Hiding Noise, Exposing Actions</title>
      <dc:creator>Karol Wrótniak</dc:creator>
      <pubDate>Fri, 24 Apr 2026 11:14:27 +0000</pubDate>
      <link>https://dev.to/gdg/a-practical-guide-to-flutter-accessibility-part-2-hiding-noise-exposing-actions-2f7i</link>
      <guid>https://dev.to/gdg/a-practical-guide-to-flutter-accessibility-part-2-hiding-noise-exposing-actions-2f7i</guid>
      <description>&lt;p&gt;In Part 1 you learned the basics. &lt;code&gt;Semantics&lt;/code&gt; for labels and hints. &lt;code&gt;MergeSemantics&lt;/code&gt; to remove double announcements. TalkBack and the Android Ally plugin to check the results. That covers most of a typical Flutter app. But not all of it. &lt;/p&gt;

&lt;p&gt;Some widgets are invisible to screen readers for a different reason. It's not a missing label. It's that assistive technology has no idea &lt;em&gt;how&lt;/em&gt; to interact with them. A swipe-to-dismiss row. A star-rating control. A decorative icon that just adds noise. Adding a label to these won't cut it.&lt;/p&gt;

&lt;p&gt;That's where Part 2 starts. You'll learn to hide what shouldn't be announced. You'll expose custom gestures as named actions TalkBack and VoiceOver can present to the user.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Broader Definition of Accessible
&lt;/h3&gt;

&lt;p&gt;A widget with a label is a start. It's not the complete solution. Real accessibility means a screen reader user can do the same things a sighted user can — dismiss an item, rate something, get notified when data changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hiding What Shouldn't Be Heard
&lt;/h3&gt;

&lt;p&gt;More information isn't always better. Think about an audiobook where the narrator stops to describe every decorative border on the page. After the third time, you'd uninstall the app.&lt;/p&gt;

&lt;p&gt;In Flutter, every widget is a candidate for the accessibility tree. Flutter handles the obvious cases — an &lt;code&gt;Icon&lt;/code&gt; without a &lt;code&gt;semanticLabel&lt;/code&gt; is not reachable by screen readers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Decorative vs. Redundant
&lt;/h3&gt;

&lt;p&gt;Before any coding, you need to know what you're looking at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Purely decorative:&lt;/strong&gt; Visual elements with zero meaning, like background gradients, divider lines, or abstract shapes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redundant:&lt;/strong&gt; Elements that have meaning, but it's already covered. A water drop icon next to the word "Water" is redundant. The user doesn't need to hear the same thing twice.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  When "Helping" Hurts
&lt;/h3&gt;

&lt;p&gt;The most common mistake is giving a label to every single icon, even when it sits next to a text label in the same row or column.&lt;/p&gt;

&lt;p&gt;Look at the following example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;Row&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="n"&gt;Icon&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Icons&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;person&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nl"&gt;semanticLabel:&lt;/span&gt; &lt;span class="s"&gt;'Person'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Person'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It looks like this in the screencast: &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfvtnsdve5xwybn1bsf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frfvtnsdve5xwybn1bsf7.png" alt="Screencast of redundant semantic label" width="320" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can fix it by removing the &lt;code&gt;semanticLabel&lt;/code&gt; from the &lt;code&gt;Icon&lt;/code&gt; widget:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;Row&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="n"&gt;Icon&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Icons&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;person&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Person'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc4dn44et2o96yd01cha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc4dn44et2o96yd01cha.png" alt="Screencast without redundant semantic labels" width="320" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now TalkBack reads "Person" once, and the icon is not focusable. Screen readers don't "see" it.&lt;br&gt;
In cases like that, you should usually merge the icon and the text into one accessibility node, so the entire row becomes focusable. But that's not the topic of this part. You can read more about it in &lt;a href="https://www.thedroidsonroids.com/blog/flutter-accessibility-guide-part-1#Cross-platform_grouping_concepts" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Pruning Subtrees with ExcludeSemantics
&lt;/h3&gt;

&lt;p&gt;Take a standard contacts row: a &lt;code&gt;CircleAvatar&lt;/code&gt; showing the first initial, and a &lt;code&gt;Text&lt;/code&gt; with the full name beside it. Look at the code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;Row&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="n"&gt;CircleAvatar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'A'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Alice'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And the video: &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxowmfts81gargi271ry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzxowmfts81gargi271ry.png" alt="Screencast of redundant initial announcement" width="374" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You haven't added any accessibility properties anywhere. But &lt;code&gt;Text&lt;/code&gt; is always in the accessibility tree by default. The one inside the avatar too. TalkBack focuses on "capital A" first, then on "Alice." Announcing the initial doesn't make sense if there's a name right after it. For blind users it's noise.&lt;/p&gt;

&lt;p&gt;The fix is to wrap &lt;code&gt;ExcludeSemantics&lt;/code&gt; around the circle avatar. It removes the initial from the accessibility tree.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;Row&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="n"&gt;ExcludeSemantics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;CircleAvatar&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'A'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Alice'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here's how it looks on a device: &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjky0en9hddtbki7y4yw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjky0en9hddtbki7y4yw4.png" alt="Screencast after wrapping the avatar in ExcludeSemantics" width="374" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TalkBack reads only "Alice." The same rule applies to any widget that generates semantic nodes you don't need — a decorative badge, a watermark, and so on.&lt;/p&gt;

&lt;p&gt;In a real list you'd also wrap the row in &lt;code&gt;MergeSemantics&lt;/code&gt; so the entire item becomes one node. That's already covered in &lt;a href="https://www.thedroidsonroids.com/blog/flutter-accessibility-guide-part-1#Cross-platform_grouping_concepts" rel="noopener noreferrer"&gt;Part 1&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;There's also a shorthand you may want to know about: &lt;code&gt;Semantics(excludeSemantics: true)&lt;/code&gt;. It excludes all children just like &lt;code&gt;ExcludeSemantics&lt;/code&gt;, but it also lets you set semantic properties on the container itself, such as a label.&lt;/p&gt;
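&lt;p&gt;As a sketch, here's how that shorthand could look on the avatar from the row above (the label text is illustrative, not code from the demo app):&lt;/p&gt;

```dart
// Hypothetical variant of the avatar: the 'A' is excluded from the
// accessibility tree, and the container announces one label instead.
Semantics(
  excludeSemantics: true,
  label: 'Avatar of Alice',
  child: CircleAvatar(
    child: Text('A'),
  ),
)
```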

&lt;h3&gt;
  
  
  Blocking What's Behind: BlockSemantics
&lt;/h3&gt;

&lt;p&gt;Flutter also provides &lt;a href="https://api.flutter.dev/flutter/widgets/BlockSemantics-class.html" rel="noopener noreferrer"&gt;&lt;code&gt;BlockSemantics&lt;/code&gt;&lt;/a&gt;. It's for a different problem. &lt;code&gt;ExcludeSemantics&lt;/code&gt; removes a subtree's &lt;em&gt;own children&lt;/em&gt; from the accessibility tree. &lt;code&gt;BlockSemantics&lt;/code&gt; hides &lt;em&gt;sibling&lt;/em&gt; nodes rendered &lt;em&gt;before&lt;/em&gt; it. Think of it as a semantic curtain — everything painted behind the &lt;code&gt;BlockSemantics&lt;/code&gt; widget disappears from the screen reader's view.&lt;/p&gt;

&lt;p&gt;Think of a custom loading overlay. You have a list of items, the user taps "Sync," and a semi-transparent scrim with a spinner appears. You built it with a &lt;code&gt;Stack&lt;/code&gt; — no &lt;code&gt;showDialog&lt;/code&gt;, no &lt;code&gt;ModalBarrier&lt;/code&gt;. Without &lt;code&gt;BlockSemantics&lt;/code&gt;, a screen reader user can still swipe through every list item underneath the scrim. They hear content they can't interact with. Not a good experience.&lt;/p&gt;

&lt;p&gt;Here's how you do it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;Stack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="n"&gt;ListView&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="n"&gt;ListTile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;title:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Item 1'&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
        &lt;span class="n"&gt;ListTile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;title:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Item 2'&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
        &lt;span class="n"&gt;ListTile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;title:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Item 3'&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
      &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;BlockSemantics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;Container&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nl"&gt;color:&lt;/span&gt; &lt;span class="n"&gt;Colors&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;black54&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nl"&gt;alignment:&lt;/span&gt; &lt;span class="n"&gt;Alignment&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;center&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;Semantics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
          &lt;span class="nl"&gt;label:&lt;/span&gt; &lt;span class="s"&gt;'Syncing'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;CircularProgressIndicator&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;BlockSemantics&lt;/code&gt; drops every sibling painted before it from the accessibility tree. TalkBack and VoiceOver only see "Syncing." The list items are not reachable by screen readers. Here's the screencast: &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55wid9am9nc2x10bsfjt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55wid9am9nc2x10bsfjt.png" alt="Screencast of BlockSemantics" width="320" height="712"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You won't need this for standard dialogs or bottom sheets. Flutter has a built-in &lt;a href="https://api.flutter.dev/flutter/widgets/ModalBarrier-class.html" rel="noopener noreferrer"&gt;&lt;code&gt;ModalBarrier&lt;/code&gt;&lt;/a&gt;. It's out of the box in &lt;a href="https://api.flutter.dev/flutter/material/showDialog.html" rel="noopener noreferrer"&gt;&lt;code&gt;showDialog&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://api.flutter.dev/flutter/material/showModalBottomSheet.html" rel="noopener noreferrer"&gt;&lt;code&gt;showModalBottomSheet&lt;/code&gt;&lt;/a&gt;. The &lt;a href="https://api.flutter.dev/flutter/widgets/BlockSemantics-class.html" rel="noopener noreferrer"&gt;&lt;code&gt;BlockSemantics&lt;/code&gt;&lt;/a&gt; widget also has a &lt;code&gt;blocking&lt;/code&gt; property (defaults to &lt;code&gt;true&lt;/code&gt;). You can change it dynamically if you need to turn the curtain on and off based on state.&lt;/p&gt;
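&lt;p&gt;A minimal sketch of that dynamic toggle, assuming a hypothetical &lt;code&gt;_isSyncing&lt;/code&gt; flag on your &lt;code&gt;State&lt;/code&gt; class:&lt;/p&gt;

```dart
// The curtain only blocks the list behind it while a sync is running.
BlockSemantics(
  blocking: _isSyncing, // assumed boolean state field
  child: Visibility(
    visible: _isSyncing,
    child: Container(
      color: Colors.black54,
      alignment: Alignment.center,
      child: Semantics(
        label: 'Syncing',
        child: CircularProgressIndicator(),
      ),
    ),
  ),
)
```

With &lt;code&gt;blocking: false&lt;/code&gt;, the underlying list becomes reachable again without rebuilding the whole stack.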

&lt;h3&gt;
  
  
  Cross-platform Comparison
&lt;/h3&gt;

&lt;p&gt;In SwiftUI, you can use &lt;a href="https://developer.apple.com/documentation/swiftui/view/accessibilityhidden(_:)" rel="noopener noreferrer"&gt;&lt;code&gt;.accessibilityHidden(true)&lt;/code&gt;&lt;/a&gt; to hide a view and its children from the accessibility tree. In Jetpack Compose, there is &lt;a href="https://developer.android.com/reference/kotlin/androidx/compose/ui/semantics/package-summary#(androidx.compose.ui.Modifier).clearAndSetSemantics(kotlin.Function1)" rel="noopener noreferrer"&gt;&lt;code&gt;clearAndSetSemantics { }&lt;/code&gt;&lt;/a&gt; for that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Custom Semantic Actions — Giving Screen Readers a Gesture Vocabulary
&lt;/h3&gt;

&lt;p&gt;A sighted user can perform a swipe gesture. A screen reader user can't do that. You have to provide alternatives to complex gestures — swipe-to-dismiss, long-press menus, drag-and-drop. &lt;/p&gt;

&lt;p&gt;One of the simplest options is to add custom actions. You expose them with &lt;code&gt;customSemanticsActions&lt;/code&gt; on the &lt;code&gt;Semantics&lt;/code&gt; widget:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;Semantics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nl"&gt;customSemanticsActions:&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;CustomSemanticsAction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;VoidCallback&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;{&lt;/span&gt;
    &lt;span class="n"&gt;CustomSemanticsAction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;label:&lt;/span&gt; &lt;span class="s"&gt;'Delete'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// TODO: delete the entry&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;ListTile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nl"&gt;title:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'Item'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each &lt;a href="https://api.flutter.dev/flutter/semantics/CustomSemanticsAction-class.html" rel="noopener noreferrer"&gt;&lt;code&gt;CustomSemanticsAction&lt;/code&gt;&lt;/a&gt; gets a label and a callback. TalkBack presents these labels in its actions menu. It also announces that actions are available. &lt;br&gt;
On Android, you can swipe up then down. On iOS, select "Actions" in the rotor, then swipe down. Look at the screencast: &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4wqxatycjqqy8r9vaat.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj4wqxatycjqqy8r9vaat.png" alt="Screencast of custom accessibility actions" width="600" height="514"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h4&gt;
  
  
  Cross-platform: Custom Actions on Native
&lt;/h4&gt;

&lt;p&gt;In Jetpack Compose, you can add custom actions through the &lt;code&gt;semantics&lt;/code&gt; modifier and &lt;a href="https://developer.android.com/reference/kotlin/androidx/compose/ui/semantics/SemanticsPropertyReceiver#(androidx.compose.ui.semantics.SemanticsPropertyReceiver).customActions()" rel="noopener noreferrer"&gt;&lt;code&gt;customActions&lt;/code&gt;&lt;/a&gt; property. In SwiftUI, you use &lt;a href="https://developer.apple.com/documentation/swiftui/view/accessibilityaction(named:_:)-4nvf2" rel="noopener noreferrer"&gt;&lt;code&gt;.accessibilityAction(named:)&lt;/code&gt;&lt;/a&gt;. All three frameworks follow the same idea of callbacks and labels.&lt;/p&gt;
&lt;h3&gt;
  
  
  Live Regions
&lt;/h3&gt;

&lt;p&gt;Consider the following code snippet. It's a simple counter with increment and decrement buttons:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;Row&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="n"&gt;TextButton&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nl"&gt;onPressed:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;setState&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;_count&lt;/span&gt;&lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'−'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="si"&gt;$_count&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;TextButton&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nl"&gt;onPressed:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;setState&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;_count&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'+'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At first glance, it looks fine. The buttons work. Screen readers announce "Button, minus. Double-tap to activate," then the number, then the plus button in the same way.&lt;br&gt;
If you can see the screen, you can watch the number change as you tap the buttons. But if you can't see anything, and you're using a screen reader only,&lt;br&gt;
you don't know what the current value is. You need to move the accessibility focus back and forth between the buttons and the number to adjust the counter to the value you want.&lt;br&gt;
Look at the screencast: &lt;a href="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d1palf2hdo0btlyw1iq5.png" rel="noopener noreferrer"&gt;Screencast of dynamic content without live region&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It doesn't look like a good user experience. Fortunately, there's a solution. A &lt;strong&gt;live region&lt;/strong&gt;. The concept comes from &lt;a href="https://www.w3.org/TR/wai-aria-1.2/#dfn-live-region" rel="noopener noreferrer"&gt;WAI-ARIA&lt;/a&gt; — an element that updates dynamically. Assistive technology announces it without the user moving focus there. &lt;/p&gt;

&lt;p&gt;Flutter also supports &lt;a href="https://api.flutter.dev/flutter/semantics/SemanticsProperties/liveRegion.html" rel="noopener noreferrer"&gt;live regions&lt;/a&gt;. To make a widget announce its value, wrap it in &lt;code&gt;Semantics(liveRegion: true)&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight dart"&gt;&lt;code&gt;&lt;span class="n"&gt;Row&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="nl"&gt;children:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="n"&gt;TextButton&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nl"&gt;onPressed:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;setState&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;_count&lt;/span&gt;&lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'−'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;Semantics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nl"&gt;liveRegion:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="si"&gt;$_count&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)),&lt;/span&gt;
    &lt;span class="n"&gt;TextButton&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="nl"&gt;onPressed:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;setState&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;_count&lt;/span&gt;&lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="nl"&gt;child:&lt;/span&gt; &lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;'+'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;),&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When &lt;code&gt;_count&lt;/code&gt; changes and &lt;code&gt;Text&lt;/code&gt; rebuilds, the app announces it. See it in action: &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F613y9eoydkxqsf08xvfu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F613y9eoydkxqsf08xvfu.png" alt="Screencast of dynamic content with live region" width="320" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Look at the text of the first button. You may think it's a &lt;code&gt;-&lt;/code&gt; that you can find on the standard keyboard next to the &lt;code&gt;+&lt;/code&gt; key. Nothing could be further from the truth.&lt;br&gt;
If it were &lt;code&gt;-&lt;/code&gt;, a &lt;a href="https://www.compart.com/en/unicode/U+002D" rel="noopener noreferrer"&gt;hyphen minus&lt;/a&gt;, screen readers would announce it differently.&lt;br&gt;
For example, TalkBack says "Dash": &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5r6laroa2lk5pc5ijyx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5r6laroa2lk5pc5ijyx.png" alt="Screenshot of dash" width="711" height="342"&gt;&lt;/a&gt;&lt;br&gt;
And VoiceOver says "hyphen.": &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0cqz5bbdr8wbtkm2vuv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0cqz5bbdr8wbtkm2vuv.png" alt="Screenshot of hyphen" width="800" height="337"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that the exact results may vary depending on the screen reader (e.g., Samsung provides its own TalkBack) and the language. The correct character for a decrement button is a &lt;code&gt;−&lt;/code&gt;, &lt;a href="https://www.compart.com/en/unicode/U+2212" rel="noopener noreferrer"&gt;minus sign&lt;/a&gt; — a mathematical symbol. &lt;br&gt;
The screen readers on both platforms announce it correctly as "minus." Note that string interpolation &lt;code&gt;$_count&lt;/code&gt; isn't a good way to display numbers in the UI. You should use a &lt;a href="https://api.flutter.dev/flutter/package-intl_intl/NumberFormat-class.html" rel="noopener noreferrer"&gt;NumberFormat&lt;/a&gt; instead. In the snippet, number formatting is omitted for brevity.&lt;/p&gt;
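&lt;p&gt;Putting both notes together, here's a sketch of the decrement side of the counter (&lt;code&gt;NumberFormat&lt;/code&gt; comes from the &lt;code&gt;intl&lt;/code&gt; package, which is assumed to be in your dependencies):&lt;/p&gt;

```dart
// U+2212 is the mathematical minus sign; both TalkBack and VoiceOver
// announce it as "minus".
TextButton(
  onPressed: () {
    setState(() {
      _count--;
    });
  },
  child: Text('\u2212'),
),
// Format the value instead of interpolating it directly.
Semantics(
  liveRegion: true,
  child: Text(NumberFormat.decimalPattern().format(_count)),
),
```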

&lt;h3&gt;
  
  
  What You've Achieved
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;ExcludeSemantics&lt;/code&gt; removes redundant nodes from the accessibility tree. It applies to its entire subtree. &lt;code&gt;BlockSemantics&lt;/code&gt; also removes nodes, but it targets siblings instead. &lt;code&gt;customSemanticsActions&lt;/code&gt; gives screen reader users alternatives to gestures and other direct touch interactions. And &lt;code&gt;Semantics(liveRegion: true)&lt;/code&gt; makes dynamic content announce itself when it changes.&lt;/p&gt;

&lt;p&gt;In the next part you'll build a fully accessible custom widget from scratch — label, value, and actions. You'll also learn about semantic flags and roles. See you there!&lt;/p&gt;

</description>
      <category>flutter</category>
      <category>a11y</category>
      <category>mobile</category>
      <category>dart</category>
    </item>
    <item>
      <title>Architecture Documentation as a First-Class Engineering Asset</title>
      <dc:creator>Alexander Tyutin</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:49:24 +0000</pubDate>
      <link>https://dev.to/gdg/architecture-documentation-as-a-first-class-engineering-asset-4a1j</link>
      <guid>https://dev.to/gdg/architecture-documentation-as-a-first-class-engineering-asset-4a1j</guid>
      <description>&lt;p&gt;How autonomous AI agents can generate a complete architecture snapshot of your microservices platform - while you do push-ups - and why that documentation becomes the most powerful input for your AI-driven quality pipeline.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can listen to a podcast generated from this publication (thanks &lt;a href="https://notebooklm.google/" rel="noopener noreferrer"&gt;NotebookLM&lt;/a&gt;):&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/d2FDslULYbw"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;




&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Architectural documentation is not a chore. When colocated with your source code and fed into an AI-powered quality pipeline, it transforms static analysis from "catching typos" into "catching systemic security failures and costly infrastructure leaks." This article documents a real experiment where an autonomous AI agent generated architecture files across a multi-service Google Cloud platform - with the human engineer largely off-screen - and what happened when that documentation gave our AI Quality Gate an entirely new perspective.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  1. The "Self-Documenting Code" Problem
&lt;/h2&gt;

&lt;p&gt;There is a persistent assumption in software engineering that well-structured code is self-explanatory. Clean functions, good variable names, and a Pylint score of 10.0/10 - surely that's enough?&lt;/p&gt;

&lt;p&gt;It is not.&lt;/p&gt;

&lt;p&gt;Code describes &lt;em&gt;how&lt;/em&gt; a system executes. Architecture documentation describes &lt;em&gt;why&lt;/em&gt; a system exists and &lt;em&gt;how&lt;/em&gt; it interacts with everything around it. Without this context layer, every automated analysis tool is operating in the dark. It sees a function, but not its role in the broader service mesh. It sees an API call, but not the security boundary it is expected to enforce.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1g9511rcfcdd3b5i8eb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg1g9511rcfcdd3b5i8eb.png" alt="Self-Documented says nothing about it in reality" width="800" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This distinction matters enormously when you introduce AI-powered tools into your engineering workflow. An LLM analyzing raw code without architectural context is like asking a senior engineer to perform a security review without access to the system design.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Generating Architecture While Doing Push-ups
&lt;/h2&gt;

&lt;p&gt;My platform runs on Google Cloud. It consists of dozens of microservices deployed on &lt;strong&gt;Cloud Run&lt;/strong&gt;, interacting via REST APIs, persisting assets to &lt;strong&gt;Google Cloud Storage&lt;/strong&gt;, and routing all AI operations through a centralized &lt;strong&gt;Vertex AI&lt;/strong&gt; gateway. A rich, well-connected system - but one where the only documentation was spread across scattered README files.&lt;/p&gt;

&lt;p&gt;I set out to change that. The goal: a standardized, machine-readable architectural snapshot for every service, committed directly to the repository.&lt;/p&gt;

&lt;p&gt;The method: &lt;strong&gt;guided autonomous agent execution&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The engineer set a direction, established the documentation standard, and then stepped back. The AI agent - powered by &lt;strong&gt;Gemini 3 Flash&lt;/strong&gt; and &lt;strong&gt;Claude Sonnet 4.6&lt;/strong&gt; running inside &lt;a href="https://antigravity.dev" rel="noopener noreferrer"&gt;Antigravity&lt;/a&gt;, an agentic AI coding assistant - took over. It autonomously inspected each service, read the source code, traced inter-service dependencies, cross-referenced existing implementations against the documentation standard, and iteratively generated structured &lt;code&gt;ARCHITECTURE.md&lt;/code&gt; files. The engineer's main activity during most of this process was physical exercise.&lt;/p&gt;

&lt;p&gt;The output was not informal notes. It was a disciplined, multi-level documentation hierarchy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;📦 platform-root
 ┣ 📜 ARCHITECTURE.md           ← Level 0: Global service mesh, topology, lifecycle status
 ┗ 📂 services
    ┣ 📂 core-ai-gateway
    ┃  ┗ 📜 ARCHITECTURE.md     ← Level 1: Security policy engine, FinOps guardrails
    ┣ 📂 orchestration-bot
    ┃  ┗ 📜 ARCHITECTURE.md     ← Level 1: Async task flow, Telegram webhook handling
    ┣ 📂 media-transcriber
    ┃  ┗ 📜 ARCHITECTURE.md     ← Level 1: Speech-to-Text pipeline, GCS asset management
    ┗ 📂 translation-engine
       ┗ 📜 ARCHITECTURE.md     ← Level 1: Structured output, multilingual routing
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each document followed a strict template:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Intent&lt;/strong&gt;: The concrete business and technical reason this service exists.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Design Principles&lt;/strong&gt;: Key trade-offs - statelessness, latency targets, fallback strategies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interaction Diagram&lt;/strong&gt;: A &lt;a href="https://mermaid.js.org/" rel="noopener noreferrer"&gt;Mermaid&lt;/a&gt; graph of service-to-service flows, security boundaries, and AI provider integrations. The agent can generate it, and GitLab renders it automatically.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM Context Block&lt;/strong&gt;: A precise summary optimized for consumption by automated agents and AI reviewers.&lt;/li&gt;
&lt;/ul&gt;
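&lt;p&gt;As an illustration, a skeleton following that template might look like this (the section contents here are hypothetical, filled in for the &lt;code&gt;media-transcriber&lt;/code&gt; service from the tree above):&lt;/p&gt;

```markdown
# ARCHITECTURE.md: media-transcriber

## Intent
Convert user-uploaded audio to text and persist assets to Google Cloud Storage.

## Design Principles
- Stateless: all state lives in GCS and the incoming request.
- Fail loud: return a retryable error instead of a partial transcript.

## Interaction Diagram
(Mermaid graph of callers, GCS buckets, and the Vertex AI gateway.)

## LLM Context Block
One-paragraph summary optimized for automated agents and AI reviewers.
```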

&lt;p&gt;The entire operation resulted in a navigable, cross-linked architecture map - built with minimal human cognitive effort (and with visualizations!)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pih6vrmdnzxfh9lj650.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pih6vrmdnzxfh9lj650.png" alt="Mermaid Diagram Generated by Antigravity Agent" width="800" height="723"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. The Quality Gate Awakening
&lt;/h2&gt;

&lt;p&gt;Once the documentation was committed alongside the source code, I ran a standard CI quality review using our AI-powered &lt;strong&gt;Quality Gate&lt;/strong&gt; - a service built on top of &lt;strong&gt;Gemini via Vertex AI&lt;/strong&gt;, designed to perform automated architectural and security reviews on every merge request.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;💡 What is the Quality Gate, exactly?&lt;/strong&gt;&lt;br&gt;
It is not a $100,000 enterprise SaaS platform. It is a lightweight, purpose-built microservice - part of the same platform it reviews - deployed on &lt;strong&gt;Google Cloud Run&lt;/strong&gt;. It exposes a single endpoint, receives the merge request diff from the CI pipeline, constructs an LLM prompt enriched with the repository's architectural documentation, calls &lt;strong&gt;Vertex AI (Gemini)&lt;/strong&gt;, and returns a structured JSON review report.&lt;/p&gt;

&lt;p&gt;Because it runs on Cloud Run, it starts only when a review is triggered and shuts down immediately after. &lt;strong&gt;The total monthly cost for me is a few dollars&lt;/strong&gt; - a fraction of a single human code review hour. This is a practical demonstration of the Google Cloud serverless model: pay only for the compute you actually use, and use high-intelligence AI only when it adds value.&lt;/p&gt;
&lt;/blockquote&gt;
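&lt;p&gt;A minimal sketch of that enrichment step. The function name and prompt layout are hypothetical, not the platform's actual code; the point is that the model sees the diff and the architecture docs in one context:&lt;/p&gt;

```python
# Hypothetical sketch: enrich a merge-request diff with every
# ARCHITECTURE.md in the repository before calling the review model.
from pathlib import Path


def build_review_prompt(diff: str, repo_root: str) -> str:
    """Concatenate the repo's architecture docs with the diff."""
    docs = sorted(Path(repo_root).rglob("ARCHITECTURE.md"))
    context = "\n\n".join(
        f"--- {doc} ---\n{doc.read_text()}" for doc in docs
    )
    return (
        "You are an architectural and security reviewer.\n\n"
        f"Architecture context:\n{context}\n\n"
        f"Merge request diff:\n{diff}\n\n"
        "Return a structured JSON review report."
    )
```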

&lt;p&gt;The difference was immediately visible.&lt;/p&gt;

&lt;p&gt;Previously, without architectural context, the Quality Gate was limited to code-level analysis: style consistency, common security anti-patterns, dependency versions. Useful, but shallow.&lt;/p&gt;

&lt;p&gt;With the &lt;code&gt;ARCHITECTURE.md&lt;/code&gt; files available as context, the model could see the architecture and the code simultaneously. The result was a qualitative leap: the Quality Gate shifted from a static analysis tool into a reasoning system operating at the level of system design.&lt;/p&gt;

&lt;p&gt;It identified two critical issues within minutes - issues that had existed undetected in the codebase for months.&lt;/p&gt;




&lt;h3&gt;
  
  
  Finding 1: The Distributed Tracing Blackout
&lt;/h3&gt;

&lt;p&gt;One of our routing services included middleware that explicitly stripped incoming trace headers. On the surface, this looked like a reasonable security measure to prevent external clients from injecting trace identifiers into internal systems.&lt;/p&gt;

&lt;p&gt;The Quality Gate identified it as a critical observability violation.&lt;/p&gt;

&lt;p&gt;Because the architectural documentation described the distributed tracing standard across the mesh - including the requirement for end-to-end &lt;code&gt;X-Trace-ID&lt;/code&gt; propagation compatible with &lt;strong&gt;Google Cloud Trace&lt;/strong&gt; - the model understood that stripping these headers at the boundary did not isolate a threat. It severed the trace chain entirely. In any production incident, engineers would be unable to correlate logs across services in &lt;strong&gt;Cloud Logging&lt;/strong&gt;, turning a routine debugging session into a multi-hour forensic investigation with no &lt;strong&gt;Cloud Audit Logs&lt;/strong&gt; correlation to lean on.&lt;/p&gt;

&lt;p&gt;Security intention ✓. Systemic consequence ✗. The documentation made this contradiction visible.&lt;/p&gt;
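&lt;p&gt;A pattern that satisfies both the security intention and the tracing standard is to validate the incoming header at the boundary instead of stripping it: keep a well-formed ID so the chain survives, and mint a fresh one only when the value is malformed. This is an illustrative sketch, not the service's actual middleware; the header format check (32 lowercase hex characters) is an assumption.&lt;/p&gt;

```python
import re
import uuid

# Illustrative alternative to stripping X-Trace-ID at the boundary:
# accept well-formed IDs so the trace chain stays intact, and replace
# only malformed ones. The 32-hex-char format is an assumption.
_TRACE_ID_RE = re.compile(r"^[0-9a-f]{32}$")

def sanitize_trace_id(incoming):
    """Keep a valid incoming trace ID; mint a new one otherwise."""
    if incoming and _TRACE_ID_RE.fullmatch(incoming.lower()):
        return incoming.lower()
    return uuid.uuid4().hex  # fresh 32-hex-char ID
```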




&lt;h3&gt;
  
  
  Finding 2: The Silent Storage Leak
&lt;/h3&gt;

&lt;p&gt;A media processing service was documented as intentionally skipping cleanup of temporary assets in Google Cloud Storage after each processing job. The rationale was implicit: simplicity, and no failure modes from deletion errors.&lt;/p&gt;

&lt;p&gt;The Quality Gate cross-referenced this against the documented architectural principle of data minimization and least-privilege access, and flagged it as both a security and FinOps violation.&lt;/p&gt;

&lt;p&gt;The impact: user audio files - potentially containing sensitive personal information - accumulating indefinitely in cloud storage. No lifecycle policy. No deletion trigger. Silent, compounding cost growth. An expanding attack surface with each new processing request.&lt;/p&gt;
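&lt;p&gt;The usual fix is either a bucket lifecycle rule that deletes objects after N days, or a small cleanup pass in the service itself. The sketch below - hypothetical, not the project's code - shows only the selection logic of such a pass; a real job would list a GCS prefix and call delete on each expired object.&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

# Hypothetical cleanup pass for the temp-asset leak described above:
# given (object_name, created_at) pairs, select objects older than the
# retention window for deletion.
def expired_objects(objects, retention_days=7, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [name for name, created in objects if created < cutoff]

now = datetime(2026, 5, 1, tzinfo=timezone.utc)
stale = expired_objects(
    [("tmp/a.wav", datetime(2026, 4, 1, tzinfo=timezone.utc)),
     ("tmp/b.wav", datetime(2026, 4, 30, tzinfo=timezone.utc))],
    retention_days=7,
    now=now,
)  # only tmp/a.wav is past the 7-day window
```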

&lt;p&gt;Neither a linter nor a code reviewer scanning functions in isolation would have flagged either of these. Both findings emerged from the intersection of code behavior and architectural intent - visible only because the documentation existed.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. The ROI Case
&lt;/h2&gt;

&lt;p&gt;This experiment produced a measurable return on investment across three dimensions:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Without Documentation&lt;/th&gt;
&lt;th&gt;With Documentation + AI Agent&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Architecture Capture&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Senior Architect hours&lt;/td&gt;
&lt;td&gt;Agent cycle, near-zero human effort&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Review Quality&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Code-level findings&lt;/td&gt;
&lt;td&gt;System-level and policy findings&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Issue Discovery Cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Post-incident or audit&lt;/td&gt;
&lt;td&gt;CI/CD pipeline (minutes, pennies)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Quality Gate&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Generic, rigid enterprise tool&lt;/td&gt;
&lt;td&gt;Custom microservice, tunable per team or developer&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Three additional factors are worth noting specifically in the context of &lt;strong&gt;Google Cloud&lt;/strong&gt; platforms:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Vertex AI Token Efficiency&lt;/strong&gt;: When the Quality Gate is backed by a Gemini model, providing a structured &lt;code&gt;ARCHITECTURE.md&lt;/code&gt; reduces the tokens the model spends reconstructing system intent from raw code. Better context means cheaper, faster, and more accurate generation - directly impacting your AI compute costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cloud Run Observability&lt;/strong&gt;: The distributed tracing finding described above is particularly relevant for Cloud Run-based architectures, where services are stateless and ephemeral. Without continuous trace propagation, debugging inter-service failures on Cloud Run becomes significantly harder. The documentation made this risk explicit and catchable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Serverless Cost Model&lt;/strong&gt;: Because the Quality Gate is a Cloud Run service invoked only during CI/CD runs, there is zero idle cost. On a typical team with several merge requests per day, the entire AI-powered review pipeline costs a few dollars per month - less than a single engineering hour. This is the Google Cloud serverless model working exactly as intended: high-intelligence compute, on-demand, at minimal cost.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  5. Lessons for Platform Engineers
&lt;/h2&gt;

&lt;p&gt;The key insight from this experiment is not that AI agents write documentation faster than humans. That is expected. The key insight is that &lt;strong&gt;architecture documentation living inside the repository is a force multiplier for every automated tool that reads it&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This applies whether your automated tools are AI-powered code reviewers, compliance scanners, onboarding assistants, or infrastructure planning agents. The better the documentation, the higher the signal quality of every tool operating on top of it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical recommendations:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Colocate documentation with code.&lt;/strong&gt; A separate wiki that drifts out of sync is noise. An &lt;code&gt;ARCHITECTURE.md&lt;/code&gt; in the service directory, updated in the same commit as the code, is signal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Establish a documentation standard.&lt;/strong&gt; A consistent template (Intent, Principles, Interaction Diagram) makes documentation machine-readable, not just human-readable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Define a lifecycle status.&lt;/strong&gt; Clearly mark deprecated or inactive services. Automated agents should not use legacy code as a reference for current standards.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use agents to generate the initial draft.&lt;/strong&gt; The cognitive overhead of starting from a blank page is real. Agents are excellent at producing a structured first pass that engineers then validate and refine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feed documentation to your CI pipeline.&lt;/strong&gt; An AI quality reviewer with architectural context is a different class of tool than one without it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build your own Quality Gate - and make it yours.&lt;/strong&gt; This is the key advantage that enterprise SaaS cannot match: flexibility. A custom Cloud Run service backed by Gemini and driven by &lt;em&gt;your&lt;/em&gt; compliance rules, &lt;em&gt;your&lt;/em&gt; architectural standards, and &lt;em&gt;your&lt;/em&gt; team conventions means every developer can have a personal reviewer that understands the exact context of the project - not a generic ruleset designed for the average of all possible codebases.&lt;/li&gt;
&lt;/ol&gt;
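&lt;p&gt;Recommendations 2 and 5 combine naturally into a cheap CI step: fail the pipeline when a service's &lt;code&gt;ARCHITECTURE.md&lt;/code&gt; is missing one of the template sections. The section names below mirror the template mentioned above (Intent, Principles, Interaction Diagram), but the checker itself is a hypothetical sketch, not part of the described platform.&lt;/p&gt;

```python
# Hypothetical CI check: verify an ARCHITECTURE.md contains the template
# sections so every file stays machine-readable for downstream tools
# like the Quality Gate.
REQUIRED_SECTIONS = ("## Intent", "## Principles", "## Interaction Diagram")

def missing_sections(markdown: str) -> list:
    """Return the template sections absent from the given document."""
    return [s for s in REQUIRED_SECTIONS if s not in markdown]

doc = "## Intent\nEdge routing.\n## Principles\nLeast privilege.\n"
gaps = missing_sections(doc)  # -> ["## Interaction Diagram"]
```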

&lt;h2&gt;
  
  
  6. Conclusion
&lt;/h2&gt;

&lt;p&gt;Architecture documentation has historically been treated as optional overhead - valuable in theory, deprioritized in practice. This experiment demonstrates that when documentation is colocated with source code, follows a consistent machine-readable standard, and is kept current with the help of autonomous agents, it becomes a critical infrastructure component.&lt;/p&gt;

&lt;p&gt;It enables automated systems to reason at the level of platform design, not just code syntax. It transforms AI-powered quality gates from expensive linters into genuine architectural advisors. And it can be generated - for an entire platform - while you are doing something else entirely.&lt;/p&gt;

&lt;p&gt;The $10,000 &lt;code&gt;ARCHITECTURE.md&lt;/code&gt; is not a metaphor. It is the estimated cost differential between finding a critical architectural flaw in a 5-minute CI review versus discovering it during a production incident, a compliance audit, or a cloud storage invoice that nobody expected.&lt;/p&gt;

&lt;p&gt;Keep your architecture documented. Keep it in the repository. Let agents maintain it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Stay standardized. Stay secure.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>architecture</category>
      <category>security</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>Using OpenCode as a fallback agent for Antigravity</title>
      <dc:creator>Alexander Tyutin</dc:creator>
      <pubDate>Wed, 15 Apr 2026 10:45:53 +0000</pubDate>
      <link>https://dev.to/gdg/using-opencode-as-a-fallback-agent-for-antigravity-37oo</link>
      <guid>https://dev.to/gdg/using-opencode-as-a-fallback-agent-for-antigravity-37oo</guid>
      <description>&lt;p&gt;Today I was confused by Antigravity errors about high load on their services. It made my work impossible even with the cheapest model Gemini 3 Flash.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4mtae79mx16mj8oqolw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4mtae79mx16mj8oqolw.png" alt="Our servers are experiencing high traffic right now, please try again in a minute" width="680" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoivttyhrl7cic2pqa53.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faoivttyhrl7cic2pqa53.png" alt="Gemini 3 Flash is not working" width="674" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some time ago I had heard about &lt;a href="https://opencode.a" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt;, and it was time to try it!&lt;/p&gt;

&lt;p&gt;I installed OpenCode on my system with &lt;code&gt;brew install anomalyco/tap/opencode&lt;/code&gt; and added the corresponding extension from the marketplace.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqepj29llxc7r63pllxux.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqepj29llxc7r63pllxux.png" alt="Opencode extension for Antigravity" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I keep good documentation inside the repo, as described in the article &lt;a href="https://dev.to/holgerleichsenring/specification-first-agentic-development-a-methodology-for-structured-traceable-ai-assisted-la"&gt;Specification-First Agentic Development: A Methodology for Structured, Traceable AI-Assisted Development&lt;/a&gt;. So the default free OpenCode model, &lt;code&gt;Big Pickle&lt;/code&gt;, handled the planning, reviewing, and coding stages well. &lt;/p&gt;

&lt;p&gt;But then I realized it was working without taking into account the system instructions and rules I had set up for Antigravity.&lt;/p&gt;

&lt;p&gt;So I invoked my Antigravity assurance workflows (like &lt;a href="https://dev.to/gdg/antigravity-my-approach-to-deliver-the-most-assured-value-for-the-least-money-3iip"&gt;here&lt;/a&gt; and &lt;a href="https://dev.to/gdg/ai-powered-repository-security-check-with-antigravity-workflow-5hee"&gt;here&lt;/a&gt;) right from the OpenCode chat, and it performed them perfectly.&lt;/p&gt;

&lt;p&gt;Since I have a lot of workflows for linting and for security checks of both the diff and the whole repo - plus an external, self-made security gateway - I was confident that the code produced by OpenCode was good enough and aligned with my codebase.&lt;/p&gt;

&lt;p&gt;The only issue worth mentioning is that a redundant file was left behind after a few testing iterations, but that is easy to catch with a good review right after MR creation.&lt;/p&gt;

&lt;p&gt;So OpenCode seems to be a good fallback for when Google's servers are experiencing problems. It can also be used to save tokens on certain kinds of tasks.&lt;/p&gt;

</description>
      <category>antigravity</category>
      <category>development</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Gemini Thinking: How "Brainy" Models Unexpectedly Blew My Budget</title>
      <dc:creator>Alexander Tyutin</dc:creator>
      <pubDate>Mon, 13 Apr 2026 07:29:03 +0000</pubDate>
      <link>https://dev.to/gdg/gemini-thinking-how-new-brainy-models-unexpectedly-blew-my-budget-1c85</link>
      <guid>https://dev.to/gdg/gemini-thinking-how-new-brainy-models-unexpectedly-blew-my-budget-1c85</guid>
      <description>&lt;p&gt;Recently, Google notified me that the &lt;strong&gt;Gemini 2.0&lt;/strong&gt; models I was using are retiring. This was disappointing because my &lt;a href="https://t.me/oqytu_bot" rel="noopener noreferrer"&gt;charity project for Technovation Girls&lt;/a&gt;, worked perfectly and very cheaply on those models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovhd2tgryyregjlw45ht.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovhd2tgryyregjlw45ht.png" alt="gemini-2.0 retirement email" width="800" height="569"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had to find a replacement. While Google recommended &lt;strong&gt;Gemini 3.0&lt;/strong&gt;, those models are still in "preview". Since my project needs high stability, I chose the &lt;strong&gt;Gemini 2.5&lt;/strong&gt; family, which is already in "General Availability".&lt;/p&gt;




&lt;p&gt;&lt;em&gt;You can listen to a podcast generated from this publication (thanks, &lt;a href="https://notebooklm.google/" rel="noopener noreferrer"&gt;NotebookLM&lt;/a&gt;):&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/BMigB0RyBE8"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;




&lt;h3&gt;
  
  
  The Surprise: Why is it so Slow and Expensive?
&lt;/h3&gt;

&lt;p&gt;Switching was easy because I built my platform to handle model changes and fallbacks automatically. I simply updated my allowed models list and set &lt;strong&gt;gemini-2.5-flash-lite&lt;/strong&gt; as the primary choice.&lt;/p&gt;

&lt;p&gt;However, I was shocked by the results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requests took much longer to finish.&lt;/li&gt;
&lt;li&gt;The quality was barely better.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token usage exploded&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;I saw a massive "system overhead" in my logs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Before:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0du4cn38pcwealhz6w1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0du4cn38pcwealhz6w1o.png" alt="Tokens usage before" width="262" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdm6zfqy0xgnrjlvkjlbt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdm6zfqy0xgnrjlvkjlbt.png" alt="Tokens usage after" width="326" height="250"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  The Cause: "Thinking" by Default
&lt;/h3&gt;

&lt;p&gt;After digging into the documentation, I found the reason: &lt;strong&gt;all Gemini 2.5 models are "thinking" models&lt;/strong&gt;. By default, they use as many tokens as possible to "reason" before answering.&lt;/p&gt;

&lt;p&gt;My project worked great without this extra thinking. The slight quality boost was not worth the massive increase in latency and cost. I had to find a way to stop the model from thinking "on my dime".&lt;/p&gt;

&lt;h3&gt;
  
  
  The Technical Hurdle
&lt;/h3&gt;

&lt;p&gt;I &lt;a href="https://docs.cloud.google.com/vertex-ai/generative-ai/docs/thinking" rel="noopener noreferrer"&gt;discovered&lt;/a&gt; that different models have different minimum "thinking budgets". Surprisingly, &lt;strong&gt;gemini-2.5-flash-lite&lt;/strong&gt; has a higher minimum budget (512 tokens) than the more powerful &lt;strong&gt;gemini-2.5-flash&lt;/strong&gt; (only 1 token!).&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Min Thinking Budget&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gemini 2.5 Flash Lite&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;512 tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gemini 2.5 Flash&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1 token&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Gemini 2.5 Pro&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;128 tokens&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;To fix this, I had to expand my code to calculate and limit these budgets during fallbacks. I also had to handle the new text constants (MINIMAL, MEDIUM, HIGH) used by the Gemini 3.x models.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemini-2.5-flash-lite&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model_page&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;_MODEL_GEMINI_DOCS_BASE&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/gemini/2-5-flash-lite&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;is_thinking&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;grounding_rag&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;grounding_google_search&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;count_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;supports_thinking_level&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;supports_thinking_budget&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;min_thinking_budget&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outputs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
       &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemini-2.5-flash&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model_page&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;_MODEL_GEMINI_DOCS_BASE&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/gemini/2-5-flash&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;is_thinking&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;grounding_rag&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;grounding_google_search&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;count_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;supports_thinking_level&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;supports_thinking_budget&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;min_thinking_budget&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outputs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
       &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gemini-2.5-pro&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;model_page&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;_MODEL_GEMINI_DOCS_BASE&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;/gemini/2-5-pro&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;is_thinking&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;grounding_rag&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;grounding_google_search&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;count_tokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;supports_thinking_level&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;supports_thinking_budget&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;min_thinking_budget&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;outputs&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
       &lt;span class="p"&gt;},&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
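&lt;p&gt;The per-model minimums drive a small clamp during fallback: whatever budget the caller asks for, never send the model less than its documented minimum. A simplified sketch of that logic, using the same config shape as above:&lt;/p&gt;

```python
# Simplified sketch of clamping a requested thinking budget to a model's
# documented minimum during fallback (minimums from the table above).
MODEL_CAPS = {
    "gemini-2.5-flash-lite": {"min_thinking_budget": 512},
    "gemini-2.5-flash": {"min_thinking_budget": 1},
    "gemini-2.5-pro": {"min_thinking_budget": 128},
}

def effective_thinking_budget(model: str, requested: int) -> int:
    """Raise the requested budget to the model's minimum if needed."""
    minimum = MODEL_CAPS[model]["min_thinking_budget"]
    return max(requested, minimum)

budget = effective_thinking_budget("gemini-2.5-flash-lite", 50)  # -> 512
```

This is why a low cap like 50 tokens is only honored on &lt;strong&gt;gemini-2.5-flash&lt;/strong&gt;: on Flash Lite the same request would be raised to 512.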






&lt;h3&gt;
  
  
  The Result
&lt;/h3&gt;

&lt;p&gt;I finally switched to &lt;strong&gt;gemini-2.5-flash&lt;/strong&gt; with a strict limit of &lt;strong&gt;50 thinking tokens&lt;/strong&gt;.&lt;/p&gt;
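&lt;p&gt;In the Gemini API request body, that cap goes into &lt;code&gt;generationConfig.thinkingConfig.thinkingBudget&lt;/code&gt;. A minimal payload sketch (the prompt text is just an example; the field names follow the public API schema):&lt;/p&gt;

```python
# Minimal request-body sketch for capping thinking tokens via the
# Gemini API field generationConfig.thinkingConfig.thinkingBudget.
def build_request(prompt: str, thinking_budget: int = 50) -> dict:
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingConfig": {"thinkingBudget": thinking_budget},
        },
    }

body = build_request("Summarize this lesson for a student.")
```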

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8f5mqh3wrj6ilmijncq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8f5mqh3wrj6ilmijncq6.png" alt="Gemini thinking tokens in logs" width="296" height="246"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, response speeds are back up and costs are back down. It was a lot of unexpected work for a "simple" upgrade, but everything is running smoothly again!&lt;/p&gt;

</description>
      <category>gemini</category>
      <category>vertexai</category>
      <category>infrastructure</category>
      <category>finops</category>
    </item>
    <item>
      <title>I tried to make DevFest Ireland accessible - and ended up building a SaaS</title>
      <dc:creator>Jordan Harrison</dc:creator>
      <pubDate>Fri, 10 Apr 2026 13:18:59 +0000</pubDate>
      <link>https://dev.to/gdg/i-tried-to-make-devfest-ireland-accessible-and-ended-up-building-a-saas-1o87</link>
      <guid>https://dev.to/gdg/i-tried-to-make-devfest-ireland-accessible-and-ended-up-building-a-saas-1o87</guid>
      <description>&lt;h2&gt;
  
  
  The email I couldn't ignore
&lt;/h2&gt;

&lt;p&gt;A few months into organising DevFest Ireland 2025, I received messages from a couple of deaf developers asking if they could attend.&lt;/p&gt;

&lt;p&gt;Not in a vague "looks interesting" kind of way. They wanted to come. They wanted to sit in the talks, meet people, be part of it properly. And the question they asked was completely fair: would there be an Irish Sign Language interpreter?&lt;/p&gt;

&lt;p&gt;I didn't have a solid answer for them at the time. I thought it would be one of those things that would take a bit of organising, a few emails, some budget approval, and then get sorted. That was my naive version of it anyway.&lt;/p&gt;

&lt;p&gt;It turned out not to be like that at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trying to make it work
&lt;/h2&gt;

&lt;p&gt;Once I started looking properly, I ran into the shortage almost immediately. There simply are not enough ISL interpreters available, especially for full day events. Availability is tight, booking needs to happen early, and the logistics are harder than most people realise from the outside. For a conference, you are not just solving for one slot in a timetable. You are trying to cover a long day, with the practical reality that this kind of work cannot just be dumped onto one person and expected to somehow stretch across everything.&lt;/p&gt;

&lt;p&gt;That was the point where it stopped feeling like a normal organiser task and started feeling like a structural problem.&lt;/p&gt;

&lt;p&gt;The hardest part was that I now had real people waiting on an answer. I could not hide behind "we're looking into it" in my own head, because that still leaves someone wondering whether they can actually come.&lt;/p&gt;

&lt;h2&gt;
  
  
  It wasn't only about interpretation
&lt;/h2&gt;

&lt;p&gt;The more I sat with it, the more obvious it became that this was bigger than one accessibility request.&lt;/p&gt;

&lt;p&gt;Yes, ISL interpretation mattered. A lot. But so did transcription. And not only for deaf attendees. There are hard of hearing attendees who do not use sign language. There are people who find spoken content much easier to process when they can read along. There are neurodivergent attendees who benefit from seeing the words as well as hearing them. There are remote viewers. There are people trying to follow a technical talk delivered quickly by someone with a strong accent while half the room laughs at a reference they missed.&lt;/p&gt;

&lt;p&gt;Once I started thinking about it that way, live transcription stopped feeling like a backup option. It felt central.&lt;/p&gt;

&lt;h2&gt;
  
  
  The options were not great
&lt;/h2&gt;

&lt;p&gt;So I started talking to captioning and transcription providers.&lt;/p&gt;

&lt;p&gt;The split was basically what you would expect, but more frustrating when you are the one making the call. Human captioning looked strong. High accuracy, experienced operators, something you could trust. But it came in at the kind of price that forces a community event to think very carefully about whether it can carry it.&lt;/p&gt;

&lt;p&gt;AI captioning was somewhat easier to justify on cost, although it still ran into four figures(!). But the quoted level of accuracy just was not good enough for a technical conference. Not with fast speakers, different accents, library names, acronyms, product terms, and all the weird ways developers talk when they are explaining something they know too well.&lt;/p&gt;

&lt;p&gt;That was the bit that kept bothering me. The choice seemed to be either spend a lot or accept something that would let people down. That is not much of a choice if the whole point is accessibility.&lt;/p&gt;

&lt;h2&gt;
  
  
  So I built it
&lt;/h2&gt;

&lt;p&gt;At some stage, after enough comparing and researching and getting annoyed by the gap, I stopped asking who I should buy from and started asking what I actually wanted the software to do.&lt;/p&gt;

&lt;p&gt;I wanted live transcription that could keep up with technical talks. I wanted something accurate enough that a person could depend on it instead of politely pretending it was helpful. I wanted it to be affordable enough that smaller events could realistically use it.&lt;/p&gt;

&lt;p&gt;I was not thinking "this should be a SaaS." I was thinking "I need this to exist for DevFest."&lt;/p&gt;

&lt;p&gt;So I built it.&lt;/p&gt;

&lt;p&gt;It started the way a lot of things start: messy, practical, not especially glamorous. A lot of testing. A lot of fixing. A lot of checking the output and asking myself whether I would trust this if I were relying on it to follow a talk. That was the standard that mattered to me. Not whether it looked clever. Whether it actually helped.&lt;/p&gt;

&lt;h2&gt;
  
  
  From a solution for one event to VolenScribe
&lt;/h2&gt;

&lt;p&gt;That tool eventually became &lt;a href="https://volenscribe.com" rel="noopener noreferrer"&gt;VolenScribe&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;VolenScribe is the SaaS that came out of all of this. It does live AI transcription for events, and it grew directly from trying to solve this problem in a way that did not feel half baked. Two things I cared about from the start were accuracy, because that is where so many AI transcription products fall apart once you put them in a real room with real people, and usefulness beyond English-only events. VolenScribe supports 25 spoken languages, so it can be used across the world.&lt;/p&gt;

&lt;p&gt;The AI transcription model used by VolenScribe has an average word error rate of 3.9%. That matters to me more than any vague claim about being smart or scalable or next generation or whatever else people like to say. The actual question is simpler: if someone is reading these captions, can they follow what is being said without constantly correcting the machine in their head?&lt;/p&gt;
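&lt;p&gt;For context, word error rate is just the number of word-level substitutions, insertions, and deletions divided by the length of the reference transcript. A quick illustrative sketch of the metric (not VolenScribe's actual implementation):&lt;/p&gt;

```python
# Word error rate: word-level edit distance divided by reference length.
# Illustrative only -- not VolenScribe's implementation.

def wer(reference: str, hypothesis: str) -> float:
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(r)][len(h)] / len(r)

print(wer("the quick brown fox", "the quick brown box"))  # 0.25
```

&lt;p&gt;At a 3.9% rate, roughly one word in twenty-five is wrong, which is the difference between captions you can read and captions you have to decode.&lt;/p&gt;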

&lt;p&gt;That is the bar.&lt;/p&gt;

&lt;h2&gt;
  
  
  What actually mattered here
&lt;/h2&gt;

&lt;p&gt;The strange thing is that I never set out to build a company around this. It came from a very specific problem, at a very specific event, because a few people asked a very reasonable question and I realised I did not have a good answer.&lt;/p&gt;

&lt;p&gt;I think that is why this has stayed with me.&lt;/p&gt;

&lt;p&gt;Accessibility is often talked about in broad, well-meaning terms, but the reality is much more direct than that. Someone wants to attend. Someone wants to follow the talk. Someone wants to feel like the event was built with them in mind too. Either they can, or they cannot.&lt;/p&gt;

&lt;p&gt;That is what all of this came down to in the end.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I'm still thinking about it
&lt;/h2&gt;

&lt;p&gt;I do not think organisers should have to choose between something expensive and something unreliable. I do not think accessibility should become one of those things people care about right up until the invoice lands. And I definitely do not think the answer should be "well, this is the best we could do" when the result still leaves people out.&lt;/p&gt;

&lt;p&gt;VolenScribe started because I could not find something I trusted enough to use.&lt;/p&gt;

&lt;p&gt;It just happened that solving that for DevFest Ireland 2025 turned into something other people needed too.&lt;/p&gt;

&lt;p&gt;And really, it all goes back to those first messages. A few deaf developers reached out because they wanted to be there. Everything that came after, including the software, came from taking that seriously.&lt;/p&gt;

</description>
      <category>a11y</category>
      <category>saas</category>
      <category>showdev</category>
      <category>startup</category>
    </item>
    <item>
      <title>AI-Powered Repository Security Check with Antigravity Workflow</title>
      <dc:creator>Alexander Tyutin</dc:creator>
      <pubDate>Mon, 06 Apr 2026 09:46:09 +0000</pubDate>
      <link>https://dev.to/gdg/ai-powered-repository-security-check-with-antigravity-workflow-5hee</link>
      <guid>https://dev.to/gdg/ai-powered-repository-security-check-with-antigravity-workflow-5hee</guid>
      <description>&lt;p&gt;When teams want to "move fast and break things," security is often the first thing they forget. I've seen a lot over 15 years in the industry. My approach is simple: follow the &lt;strong&gt;Pareto Principle (80/20)&lt;/strong&gt;. You want 80% of the security results with just 20% of the work.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can listen to a podcast generated from this publication (thanks, &lt;a href="https://notebooklm.google/" rel="noopener noreferrer"&gt;NotebookLM&lt;/a&gt;):&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/XGFWEtJC-04"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;In the AI era, that 20% of work can look like a single command. &lt;/p&gt;

&lt;p&gt;Here is how we built the &lt;a href="https://antigravity.google/docs/rules-workflows" rel="noopener noreferrer"&gt;Antigravity workflow&lt;/a&gt; that checks the whole repository for security issues in a few minutes. It does not cost much and does not use up the AI's entire context window.&lt;/p&gt;

&lt;p&gt;Short video demo made on a real repository:&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/X0a_hwPxTS8"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  The Initial Stack
&lt;/h2&gt;

&lt;p&gt;To get a clear picture of a repository's health, one tool is not enough. To start, we use a combination of proven, open-source scanners:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;&lt;a href="https://github.com/gitleaks/gitleaks" rel="noopener noreferrer"&gt;Gitleaks&lt;/a&gt;&lt;/strong&gt;: To find secrets like API keys and tokens.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;&lt;a href="https://github.com/semgrep/semgrep" rel="noopener noreferrer"&gt;Semgrep&lt;/a&gt;&lt;/strong&gt;: For SCA and SAST to find bad code patterns and supply chain issues.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;&lt;a href="https://github.com/bridgecrewio/checkov" rel="noopener noreferrer"&gt;Checkov&lt;/a&gt;&lt;/strong&gt;: To check IaC security (Docker, Terraform, Kubernetes).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/google/osv-scanner" rel="noopener noreferrer"&gt;OSV-Scanner&lt;/a&gt;&lt;/strong&gt;: For SCA scan.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Inspecting their results manually takes a lot of time. And if you just send all their raw output directly to an AI, it becomes very expensive and confusing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Token Economy
&lt;/h2&gt;

&lt;p&gt;For a security review, the AI doesn't need to see every test that passed. It doesn't need to see the full abstract syntax tree. It only needs to know &lt;strong&gt;what is broken, where it is, and why it matters.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We use &lt;code&gt;jq&lt;/code&gt; to remove the extra noise. This minifying step is very important for &lt;strong&gt;Token Economy&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To increase the token savings, the command (workflow) may be run with the cheapest model, Gemini 3 Flash. That is more than enough to produce a high-quality base report. The report can then be reviewed with more powerful models such as Gemini 3.1 Pro.&lt;/p&gt;

&lt;h3&gt;
  
  
  Example: Minifying Results
&lt;/h3&gt;

&lt;p&gt;Instead of one huge JSON file per tool, we make the output small and simple. Here are the exact commands we use to shrink the results:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;  1. &lt;span class="sb"&gt;`&lt;/span&gt;jq &lt;span class="s1"&gt;'[.[] | {rule: .RuleID, file: .File, line: .StartLine}]'&lt;/span&gt; gitleaks-raw.json &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; gitleaks-min.json&lt;span class="sb"&gt;`&lt;/span&gt;
  2. &lt;span class="sb"&gt;`&lt;/span&gt;jq &lt;span class="s1"&gt;'[.results[] | {rule: .check_id, file: .path, line: .start.line, severity: .extra.severity}]'&lt;/span&gt; semgrep-raw.json &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; semgrep-min.json&lt;span class="sb"&gt;`&lt;/span&gt;
  3. &lt;span class="sb"&gt;`&lt;/span&gt;jq &lt;span class="s1"&gt;'if type=="array" then map(.results.failed_checks[]) else .results.failed_checks end | [.[]? | {rule: .check_id, file: .file_path, line: .file_line_range}]'&lt;/span&gt; checkov-raw.json &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; checkov-min.json&lt;span class="sb"&gt;`&lt;/span&gt;
  4. &lt;span class="sb"&gt;`&lt;/span&gt;jq &lt;span class="s1"&gt;'[.results[]?.packages[]?.vulnerabilities[]? | {rule: .id, file: .package.name, line: "N/A", severity: ((.database_specific.severity) // "N/A")}]'&lt;/span&gt; osv-raw.json &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; osv-min.json&lt;span class="sb"&gt;`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
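&lt;p&gt;If &lt;code&gt;jq&lt;/code&gt; is not available, the same minification can be sketched with Python's standard library. Here is the gitleaks filter from above as a stdlib sketch (the sample report data is invented, with the field names gitleaks actually emits):&lt;/p&gt;

```python
# Stdlib equivalent of the gitleaks jq filter above, for machines without jq.
# Keeps only rule, file, and line for each finding.
import json

def minify_gitleaks(raw):
    return [{"rule": f["RuleID"], "file": f["File"], "line": f["StartLine"]}
            for f in raw]

# Sample shaped like a gitleaks JSON report (fields abbreviated, values invented).
raw = [{"RuleID": "generic-api-key", "File": "config.py", "StartLine": 12,
        "Secret": "REDACTED", "Commit": "abc123"}]
print(json.dumps(minify_gitleaks(raw)))
```

&lt;p&gt;In the real workflow you would load &lt;code&gt;gitleaks-raw.json&lt;/code&gt; and write &lt;code&gt;gitleaks-min.json&lt;/code&gt;; the point is that the filter is a plain field projection.&lt;/p&gt;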



&lt;p&gt;By making the data 90% smaller, the AI stays focused on real problems. This makes the check much cheaper.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;% &lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-lh&lt;/span&gt; .security-artifacts | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'{print $5, $9}'&lt;/span&gt;

6.3K checkov-min.json
2.6M checkov-rev-004-security-20260401-201110.json
2.4K gitleaks-min.json
19K gitleaks-rev-004-security-20260401-161824.json
19K gitleaks-rev-004-security-20260401-201110.json
107B osv-min-rev-001-security-20260406-110805.json
11K osv-raw-rev-001-security-20260406-110805.json
4.3K semgrep-min.json
61K semgrep-rev-004-security-20260401-161824.json
38K semgrep-rev-004-security-20260401-201110.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The "One Command" Workflow
&lt;/h2&gt;

&lt;p&gt;We put all these steps into one Antigravity slash command: &lt;code&gt;/review-security-repo&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;When we run it, the agent does exactly this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Identifies the environment&lt;/strong&gt;: Checks for tools like &lt;code&gt;semgrep&lt;/code&gt;, &lt;code&gt;gitleaks&lt;/code&gt;, &lt;code&gt;checkov&lt;/code&gt;, &lt;code&gt;osv-scanner&lt;/code&gt; and &lt;code&gt;jq&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Executes Raw Scans&lt;/strong&gt;: Runs the scanners to get raw logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Applies Minification&lt;/strong&gt;: Uses &lt;code&gt;jq&lt;/code&gt; to strip massive metadata.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Synthesizes Findings&lt;/strong&gt;: Only reads the small files (&lt;code&gt;gitleaks-min.json&lt;/code&gt;, &lt;code&gt;semgrep-min.json&lt;/code&gt;, &lt;code&gt;checkov-min.json&lt;/code&gt;, &lt;code&gt;osv-min.json&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performs Review&lt;/strong&gt;: Checks high-risk files to find complex problems that static tools miss.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generates an Actionable Report&lt;/strong&gt;: Uses a strict Markdown structure instead of a generic summary.&lt;/li&gt;
&lt;/ul&gt;
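&lt;p&gt;The "Synthesizes Findings" step amounts to merging the four minified files into one list tagged with the tool that produced each finding. A rough stdlib sketch of that merge (the agent does this reading itself; the sample findings are invented):&lt;/p&gt;

```python
# Sketch of the "Synthesizes Findings" step: merge the minified reports into
# one list tagged with the source tool. Illustrative only -- in practice the
# Antigravity agent reads the *-min.json files itself.
import json, os

MIN_FILES = {
    "gitleaks": "gitleaks-min.json",
    "semgrep": "semgrep-min.json",
    "checkov": "checkov-min.json",
    "osv": "osv-min.json",
}

def merge_findings(reports):
    """reports: mapping of tool name to its list of minified findings."""
    merged = []
    for tool, findings in reports.items():
        for f in findings:
            merged.append({"tool": tool, **f})
    return merged

def load_reports(files=MIN_FILES):
    """Load whichever minified reports exist on disk."""
    return {tool: json.load(open(path)) for tool, path in files.items()
            if os.path.exists(path)}

# Example with inline data instead of real scan output:
sample = {"gitleaks": [{"rule": "generic-api-key", "file": "config.py", "line": 12}],
          "semgrep": [{"rule": "sql-injection", "file": "db.py", "line": 40,
                       "severity": "ERROR"}]}
print(len(merge_findings(sample)))  # 2
```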

&lt;h3&gt;
  
  
  Report Structure Snippet
&lt;/h3&gt;

&lt;p&gt;The workflow forces the AI to output exactly what we need, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;### [Severity] - [Vulnerability Name/Rule ID]&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Tool Source:**&lt;/span&gt; [Semgrep / Gitleaks / Checkov / Manual Architectural Review]
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Location:**&lt;/span&gt; &lt;span class="sb"&gt;`[File Name]:[Line Number]`&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Business Impact:**&lt;/span&gt; [Why this matters]
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Remediation:**&lt;/span&gt; 
  [Actionable, copy-paste code or config fix]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why This Works
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Repeatability&lt;/strong&gt;: Anyone on the team can check security without being an expert.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit Trail&lt;/strong&gt;: Every raw and minified report is moved to &lt;code&gt;.security-artifacts/&lt;/code&gt; so we can track the history.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced Hallucinations&lt;/strong&gt;: Because we give AI only the exact scanner results and small code pieces, it gives real fixes without making things up.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Full Workflow Code
&lt;/h2&gt;

&lt;p&gt;If you want to try this yourself, here is the complete code for the &lt;code&gt;/review-security-repo&lt;/code&gt; workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Security&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;review&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;of&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;the&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;repo"&lt;/span&gt;
&lt;span class="nn"&gt;---&lt;/span&gt;

&lt;span class="p"&gt;-&lt;/span&gt; Get the current branch name and current timestamp (format: YYYYMMDD-HHMMSS). Define output file as &lt;span class="sb"&gt;`security-review-[branch-name]-[timestamp].md`&lt;/span&gt;.
&lt;span class="p"&gt;-&lt;/span&gt; Check for a &lt;span class="sb"&gt;`venv`&lt;/span&gt; (or &lt;span class="sb"&gt;`.venv`&lt;/span&gt;) directory in the repository root. If found, use its binaries.
&lt;span class="p"&gt;-&lt;/span&gt; Verify if &lt;span class="sb"&gt;`semgrep`&lt;/span&gt;, &lt;span class="sb"&gt;`gitleaks`&lt;/span&gt;, &lt;span class="sb"&gt;`checkov`&lt;/span&gt;, and &lt;span class="sb"&gt;`jq`&lt;/span&gt; are installed. If missing, prompt for installation and pause until confirmed.
&lt;span class="p"&gt;-&lt;/span&gt; Execute local security scanners to capture raw audit trails:
&lt;span class="p"&gt;  1.&lt;/span&gt; &lt;span class="sb"&gt;`gitleaks detect --source . -v --report-format json --report-path gitleaks-raw.json`&lt;/span&gt;
&lt;span class="p"&gt;  2.&lt;/span&gt; &lt;span class="sb"&gt;`semgrep scan --config auto --json --output semgrep-raw.json`&lt;/span&gt;
&lt;span class="p"&gt;  3.&lt;/span&gt; &lt;span class="sb"&gt;`checkov -d . --quiet --skip-path venv -o json &amp;gt; checkov-raw.json`&lt;/span&gt;
&lt;span class="p"&gt;  4.&lt;/span&gt; &lt;span class="sb"&gt;`osv-scanner -r . --format json &amp;gt; osv-raw.json || true`&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Execute &lt;span class="sb"&gt;`jq`&lt;/span&gt; to strip massive metadata, passed checks, and AST dumps, keeping only critical fields to save context tokens:
&lt;span class="p"&gt;  1.&lt;/span&gt; &lt;span class="sb"&gt;`jq '[.[] | {rule: .RuleID, file: .File, line: .StartLine}]' gitleaks-raw.json &amp;gt; gitleaks-min.json`&lt;/span&gt;
&lt;span class="p"&gt;  2.&lt;/span&gt; &lt;span class="sb"&gt;`jq '[.results[] | {rule: .check_id, file: .path, line: .start.line, severity: .extra.severity}]' semgrep-raw.json &amp;gt; semgrep-min.json`&lt;/span&gt;
&lt;span class="p"&gt;  3.&lt;/span&gt; &lt;span class="sb"&gt;`jq 'if type=="array" then map(.results.failed_checks[]) else .results.failed_checks end | [.[]? | {rule: .check_id, file: .file_path, line: .file_line_range}]' checkov-raw.json &amp;gt; checkov-min.json`&lt;/span&gt;
&lt;span class="p"&gt;  4.&lt;/span&gt; &lt;span class="sb"&gt;`jq '[.results[]?.packages[]?.vulnerabilities[]? | {rule: .id, file: .package.name, line: "N/A", severity: ((.database_specific.severity) // "N/A")}]' osv-raw.json &amp;gt; osv-min.json`&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Read ONLY &lt;span class="sb"&gt;`gitleaks-min.json`&lt;/span&gt;, &lt;span class="sb"&gt;`semgrep-min.json`&lt;/span&gt;, &lt;span class="sb"&gt;`checkov-min.json`&lt;/span&gt;, &lt;span class="sb"&gt;`osv-min.json`&lt;/span&gt;. Filter out false positives based on repository context.
&lt;span class="p"&gt;-&lt;/span&gt; Analyze high-risk architectural files strictly for logical flaws and cross-service least-privilege violations that static tools cannot understand.
&lt;span class="p"&gt;-&lt;/span&gt; Generate the report in &lt;span class="sb"&gt;`security-review-[branch-name]-[timestamp].md`&lt;/span&gt;. DO NOT output generic summary tables. You MUST output an exhaustive, itemized list.
&lt;span class="p"&gt;-&lt;/span&gt; Use the following strict Markdown structure for the report:
  ## Executive Summary
  [Brief overview of the branch's security posture]
  ## Detailed Findings
  [Iterate through EVERY validated finding. For each finding, output:]
  ### [Severity] - [Vulnerability Name/Rule ID]
&lt;span class="p"&gt;  -&lt;/span&gt; &lt;span class="gs"&gt;**Tool Source:**&lt;/span&gt; [Semgrep / Gitleaks / Checkov / Manual Architectural Review]
&lt;span class="p"&gt;  -&lt;/span&gt; &lt;span class="gs"&gt;**Location:**&lt;/span&gt; &lt;span class="sb"&gt;`[File Name]:[Line Number]`&lt;/span&gt;
&lt;span class="p"&gt;  -&lt;/span&gt; &lt;span class="gs"&gt;**Business Impact:**&lt;/span&gt; [Why this matters]
&lt;span class="p"&gt;  -&lt;/span&gt; &lt;span class="gs"&gt;**Remediation:**&lt;/span&gt; 
    &lt;span class="p"&gt;```&lt;/span&gt;&lt;span class="nl"&gt;
&lt;/span&gt;

    [Actionable, copy-paste code or config fix]


    &lt;span class="p"&gt;```&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Create a &lt;span class="sb"&gt;`.security-artifacts/`&lt;/span&gt; directory if it does not exist. Ensure &lt;span class="sb"&gt;`.security-artifacts/`&lt;/span&gt; is appended to &lt;span class="sb"&gt;`.gitignore`&lt;/span&gt;.
&lt;span class="p"&gt;-&lt;/span&gt; Move and rename both raw and minified reports to &lt;span class="sb"&gt;`.security-artifacts/`&lt;/span&gt; to preserve the complete historical audit trail:
&lt;span class="p"&gt;  -&lt;/span&gt; &lt;span class="sb"&gt;`gitleaks-raw.json`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`.security-artifacts/gitleaks-raw-[branch-name]-[timestamp].json`&lt;/span&gt;
&lt;span class="p"&gt;  -&lt;/span&gt; &lt;span class="sb"&gt;`semgrep-raw.json`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`.security-artifacts/semgrep-raw-[branch-name]-[timestamp].json`&lt;/span&gt;
&lt;span class="p"&gt;  -&lt;/span&gt; &lt;span class="sb"&gt;`checkov-raw.json`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`.security-artifacts/checkov-raw-[branch-name]-[timestamp].json`&lt;/span&gt;
&lt;span class="p"&gt;  -&lt;/span&gt; &lt;span class="sb"&gt;`osv-raw.json`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`.security-artifacts/osv-raw-[branch-name]-[timestamp].json`&lt;/span&gt;
&lt;span class="p"&gt;  -&lt;/span&gt; &lt;span class="sb"&gt;`gitleaks-min.json`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`.security-artifacts/gitleaks-min-[branch-name]-[timestamp].json`&lt;/span&gt;
&lt;span class="p"&gt;  -&lt;/span&gt; &lt;span class="sb"&gt;`semgrep-min.json`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`.security-artifacts/semgrep-min-[branch-name]-[timestamp].json`&lt;/span&gt;
&lt;span class="p"&gt;  -&lt;/span&gt; &lt;span class="sb"&gt;`checkov-min.json`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`.security-artifacts/checkov-min-[branch-name]-[timestamp].json`&lt;/span&gt;
&lt;span class="p"&gt;  -&lt;/span&gt; &lt;span class="sb"&gt;`osv-min.json`&lt;/span&gt; -&amp;gt; &lt;span class="sb"&gt;`.security-artifacts/osv-min-[branch-name]-[timestamp].json`&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Exit execution.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What’s Next?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What tools are missing from your perfect "One Command" security check?&lt;/strong&gt; I would be happy to hear your ideas on how to further optimize the token economy while expanding the security coverage.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>antigravity</category>
      <category>development</category>
      <category>security</category>
    </item>
    <item>
      <title>Scaling Product Discovery: Orchestrating AI Agent Workflows with Google Opal</title>
      <dc:creator>Sandro Moreira</dc:creator>
      <pubDate>Sun, 05 Apr 2026 14:52:06 +0000</pubDate>
      <link>https://dev.to/gdg/scaling-product-discovery-orchestrating-ai-agent-workflows-with-google-opal-2982</link>
      <guid>https://dev.to/gdg/scaling-product-discovery-orchestrating-ai-agent-workflows-with-google-opal-2982</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction: The Challenge of Relevance in Software Development&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Developing an application is one thing; creating one that users actually want to use is another. The difference lies in the depth of your understanding of the pains, desires, and unmet needs of your target audience. Too often, development begins with a solution looking for a problem.&lt;/p&gt;

&lt;p&gt;I recently embarked on a journey to optimize my product discovery process, moving from time-consuming manual methods to an AI-assisted application building workflow. My mission? To build a flow that not only generated ideas but deeply validated them against the real sentiment of the market.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Flow&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1 - The Raw Data Collection (Sources):&lt;/strong&gt; I focused on a specific topic of interest and fed NotebookLM's source feature with complex data:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User Pains: Documents extracted from forums (Reddit, communities) to investigate pains unresolved by existing solutions.&lt;/li&gt;
&lt;li&gt;Market Trends: Articles and research on emerging trends (e.g., generative AI integration, specialized niches).&lt;/li&gt;
&lt;li&gt;Competitive Analysis: Detailed reviews and descriptions of competitor applications.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2- Information Cross-Referencing and Analysis:&lt;/strong&gt; NotebookLM excelled at finding connections. By asking the model to cross-reference data, I could quickly identify patterns of complaints and resource gaps in the market.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3- Generating Strategic Insights (Prompts):&lt;/strong&gt; I used structured analysis prompts to synthesize the data into actionable insights: "Cross-reference market trends with competitor features and identify a feature that is in high demand, but has low coverage in the market."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4 - The MVP Conception (Final Prompt):&lt;/strong&gt; The final step was generating a highly refined MVP prompt, ready for development:&lt;br&gt;
"Generate an MVP documentation (including Target Audience, Value Proposition, and 3 Essential Features) that addresses pain X, aligns with trend Y, and offers feature Z (low/non-existent coverage). The goal is for this MVP to be built on Gemini 3.0."&lt;/p&gt;
&lt;h2&gt;
  
  
  Scaling Discovery with Google Opal
&lt;/h2&gt;

&lt;p&gt;The initial validation phase using NotebookLM was an insightful experiment, but it still required significant manual intervention. Now, it’s time to evolve this workflow into a truly automated engine.&lt;/p&gt;

&lt;p&gt;This led me to Google Opal, the platform designed to turn complex logic into visual, multi-step AI Workflows (or mini-apps). While the concept mirrors the functionality of AI Agents - systems that take action autonomously - Opal provides the no-code workflow pipeline for orchestrating these steps reliably.&lt;/p&gt;

&lt;p&gt;In Google Opal, the goal is to create a product discovery agent pipeline that operates continuously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: The Search:&lt;/strong&gt; Searches external sources (forums, reviews) to filter and feed new pain signals and trends into the pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: The Analyst:&lt;/strong&gt; Analyzes data from phase 1 against competitive data to generate strategic gaps and novel feature ideas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: The Strategist:&lt;/strong&gt; Converts the strategic report into the final assets for development and business.&lt;/p&gt;
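&lt;p&gt;Opal wires these steps up visually with no code, but it can help to picture the pipeline as three chained functions. A purely illustrative Python sketch (all names and sample data are invented, not Opal's API):&lt;/p&gt;

```python
# Conceptual sketch of the three-step Opal pipeline as plain chained functions.
# All names are hypothetical -- Opal wires these up visually, no code required.

def search(topic):
    """Step 1 - The Search: gather raw pain signals, trends, and competitors."""
    return {"pains": [f"{topic} users complain about onboarding"],
            "trends": [f"generative AI in {topic}"],
            "competitors": [f"incumbent {topic} app"]}

def analyze(raw):
    """Step 2 - The Analyst: cross-reference data into strategic gaps."""
    return {"gaps": [f"high demand, low coverage: {t}" for t in raw["trends"]]}

def strategize(analysis):
    """Step 3 - The Strategist: turn gaps into MVP and pitch assets."""
    return {"mvp": analysis["gaps"][0], "pitch": "slides outline"}

report = strategize(analyze(search("note-taking")))
print(report["mvp"])  # high demand, low coverage: generative AI in note-taking
```

&lt;p&gt;Each Opal node plays one of these roles, with Google Docs and Slides outputs attached where the functions return their results.&lt;/p&gt;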

&lt;p&gt;Here is the step-by-step breakdown of how I configured the three core agents in Google Opal, turning a simple workflow into an autonomous pipeline.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Step 1: The Search (Data Ingestion)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This step is the system's eyes and ears, responsible for ingesting and pre-filtering raw market data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qqd394yxfzaxiao4jgb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qqd394yxfzaxiao4jgb.png" alt=" " width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1 - Topic of Interest:&lt;/strong&gt; &lt;em&gt;UserInput&lt;/em&gt; - an initially empty node where the user enters the topic of interest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2 - Search Users, Trends and Competitors:&lt;/strong&gt; &lt;em&gt;Generate&lt;/em&gt; - nodes that run the searches with specific prompts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnwqgpxe98dbvt8trr6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnwqgpxe98dbvt8trr6r.png" alt=" " width="463" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpac6azihjig320i4ihqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpac6azihjig320i4ihqt.png" alt=" " width="453" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovy7pye7slqa4aebr3hb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovy7pye7slqa4aebr3hb.png" alt=" " width="454" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3 - Users' Opinions, Trends and Competitors:&lt;/strong&gt; &lt;em&gt;OutPut (Google Docs)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For each output node, I selected "Save to Google Docs" and set a file name in Advanced Settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqm7xf27epkwox99sn9e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqm7xf27epkwox99sn9e.png" alt=" " width="458" height="342"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Step 2: The Analyst (Insight Generation)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This is the brain of the operation, where raw data is turned into strategic intelligence.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87bym9ekzq5ha9pzf4lw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87bym9ekzq5ha9pzf4lw.png" alt=" " width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1 - Analyze Problems:&lt;/strong&gt; &lt;em&gt;Generate&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fautzuzqwkn89vulyjiep.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fautzuzqwkn89vulyjiep.png" alt=" " width="382" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2 - Strategic Analysis:&lt;/strong&gt; &lt;em&gt;Generate&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42r3bw3lkln04ah39aqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42r3bw3lkln04ah39aqg.png" alt=" " width="453" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3 - Pain Points:&lt;/strong&gt; &lt;em&gt;OutPut (Google Docs)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4 - Strategic Analysis:&lt;/strong&gt; &lt;em&gt;OutPut (Google Docs)&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Step 3: The Strategist (MVP &amp;amp; Pitch Generation)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The final step takes the distilled strategy and turns it into deliverable assets for developers and stakeholders.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F890yojx5886xl97nndgn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F890yojx5886xl97nndgn.png" alt=" " width="800" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1 - Ideas App&lt;/strong&gt; &lt;em&gt;Generate&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9pk5lionrcjjuzi3i1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw9pk5lionrcjjuzi3i1w.png" alt=" " width="458" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2 - MVP Recommendation&lt;/strong&gt; &lt;em&gt;Generate&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzta5515gb4rq53y6dj6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzta5515gb4rq53y6dj6.png" alt=" " width="457" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3 - Generate Pitch&lt;/strong&gt; &lt;em&gt;Generate&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxhd3p8wq0d8rh94xj0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxxhd3p8wq0d8rh94xj0p.png" alt=" " width="458" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Ideas and MVP Recommendation nodes are connected to additional &lt;em&gt;Output (Google Docs)&lt;/em&gt; nodes that create files in Google Docs, while the Pitch node uses the &lt;em&gt;Output (Google Slides)&lt;/em&gt; type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fll5dxwmujlj51x63pqjh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fll5dxwmujlj51x63pqjh.png" alt=" " width="458" height="231"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And finally, in the last step we add a node that generates the prompt for Gemini 3, plus another node with an HTML output showing the result of the entire flow execution.&lt;/p&gt;
&lt;h2&gt;
  
  
  Conclusion: From Idea to MVP in Minutes
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqlsuik3ubn0bfillf48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqlsuik3ubn0bfillf48.png" alt=" " width="781" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1 - Prompt to App:&lt;/strong&gt; &lt;em&gt;Generate&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo37cbwbn61oib96ziqhr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo37cbwbn61oib96ziqhr.png" alt=" " width="459" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2 - Prompt for Gemini 3.0:&lt;/strong&gt; &lt;em&gt;Output (Google Docs)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3 - Generate HTML:&lt;/strong&gt; &lt;em&gt;Output (Manual Layout)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvcf5kiqn7p9oq8j9fsm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frvcf5kiqn7p9oq8j9fsm.png" alt=" " width="461" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It quickly transforms your workflow into an innovation pipeline, ensuring that your next application not only meets a market need but addresses a real problem the market is actively complaining about.&lt;/p&gt;

&lt;p&gt;By leveraging the power of agents and workflows, we can move from market insight to a complete, validated, and ready-to-code MVP specification in minutes, significantly de-risking the development process.&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="https://opal.google/_app/?flow=drive:/1Ha9NWnl-jp_602d6Ewg5EyGWqLwMQqnu&amp;amp;amp;shared&amp;amp;amp;mode=app" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;opal.google&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


&lt;p&gt;To create the MVP, simply use the generated prompt and run it in the Build tab of ai.dev, letting Gemini build it for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx48wrq0zns8pv8rtqam.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx48wrq0zns8pv8rtqam.png" alt=" " width="800" height="504"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Want to put it into production? Use the icon in the upper corner of the built app screen to send it directly to Google Cloud.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcmwhypa6mnp9zlzqgnb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhcmwhypa6mnp9zlzqgnb.png" alt=" " width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>startup</category>
      <category>opal</category>
    </item>
    <item>
      <title>Cross-Repository Development with Antigravity</title>
      <dc:creator>Razan Fawwaz</dc:creator>
      <pubDate>Thu, 02 Apr 2026 02:26:20 +0000</pubDate>
      <link>https://dev.to/gdg/cross-repository-development-with-antigravity-26be</link>
      <guid>https://dev.to/gdg/cross-repository-development-with-antigravity-26be</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F0%2ARLyrj2QLl6pGdLmB" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F0%2ARLyrj2QLl6pGdLmB" alt="Photo by Janis Ringli on Unsplash" width="1400" height="815"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you've been sleeping on &lt;a href="https://antigravity.google" rel="noopener noreferrer"&gt;Antigravity&lt;/a&gt;, now's the time to wake up. Google's latest IDE isn't just another code editor — it comes with a built-in AI Agent Manager, meaning you can write code, run unit tests, spin up files, and execute everything directly, all while an AI assistant handles the heavy lifting alongside you. You can even plug in MCP Servers to pass richer context to the Agent.&lt;/p&gt;

&lt;p&gt;Honestly? With Antigravity, you can kick off a task and go fishing. I'm not even joking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F0%2AhqC3vuiSbJrXE-5R" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fmiro.medium.com%2Fv2%2Fresize%3Afit%3A1400%2Fformat%3Awebp%2F0%2AhqC3vuiSbJrXE-5R" alt="Photo by Randhy Pratama on Unsplash" width="1400" height="933"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem with Scaling
&lt;/h2&gt;

&lt;p&gt;Out of the box, Antigravity tends to scaffold projects using fullstack frameworks like Next.js or Vite + React. For smaller projects, that's totally fine — convenient, even.&lt;/p&gt;

&lt;p&gt;But once you start scaling, a single monorepo starts to feel like a liability. The codebase bloats, separation of concerns gets blurry, and things that should be independent start bleeding into each other. Personally, I much prefer keeping frontend and backend as separate repositories. Shoutout to &lt;a href="https://x.com/isfaaghyth" rel="noopener noreferrer"&gt;Kak Ishfa&lt;/a&gt; for pushing me to think about this more seriously.&lt;/p&gt;

&lt;p&gt;Here's the catch though — Antigravity doesn't have a native &lt;code&gt;Linked Workspace&lt;/code&gt; feature. There's no built-in way to point a single Agent at two separate repositories and have it work across both seamlessly.&lt;/p&gt;

&lt;p&gt;But there's a workaround, and it's cleaner than you'd expect.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Workaround: Agent Rules
&lt;/h2&gt;

&lt;p&gt;The trick is using &lt;code&gt;Agent Rules&lt;/code&gt; — a feature that lets you define behavior the Agent always follows, regardless of what you're asking it to do.&lt;/p&gt;

&lt;p&gt;The idea is simple: open your frontend repository in Antigravity, then write a rule that tells the Agent about your backend repository's directory path. From that point on, whenever a frontend change requires a corresponding backend update, the Agent knows exactly where to go and what to touch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgvkx8u1b3u7matdofzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgvkx8u1b3u7matdofzx.png" alt="captionless image" width="800" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's how I set it up: I have two repos, frontend and backend. In the frontend project, I added a rule that says — in plain language — "if any feature change requires backend work, update the backend repository too." The key detail is setting the activation mode to &lt;strong&gt;always on&lt;/strong&gt;, so the Agent checks the rules on every single interaction, not just when you explicitly tell it to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl57xxodj7lhdl4upoxb0.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl57xxodj7lhdl4upoxb0.jpeg" alt="captionless image" width="725" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here's a rough idea of what the rule content looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are working on a frontend project located at /path/to/frontend.
There is a paired backend project located at /path/to/backend (Go, net/http, SQLite).
Whenever a frontend change requires an API or backend change, apply those changes
to the backend project as well. Always keep both repositories in sync.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simple, readable, and it works.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzscphv5w5yqa0far1e3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqzscphv5w5yqa0far1e3.png" alt="captionless image" width="398" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Putting It to the Test
&lt;/h2&gt;

&lt;p&gt;To try this out, I built a todo app — login, full CRUD, connected to a Go + net/http + SQLite backend. Both directories started completely empty.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo80hvbzqw5ddwymemw59.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo80hvbzqw5ddwymemw59.png" alt="captionless image" width="198" height="414"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once I submitted the prompt, Antigravity picked up the rules immediately and asked for approval before touching anything. After confirming, it installed the necessary packages and kicked off development — starting with the backend.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6k35qht72enwudcc6l1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6k35qht72enwudcc6l1.png" alt="captionless image" width="406" height="672"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With the backend done, it seamlessly transitioned to the frontend and set up Vite + React.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjmstwfct7bzrlx3udeh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjmstwfct7bzrlx3udeh.png" alt="captionless image" width="800" height="448"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the chat stopped and the "accept all" button appeared, both projects were fully scaffolded and in sync — exactly as intended.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4vghanapm6gf5ldl51e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4vghanapm6gf5ldl51e.png" alt="captionless image" width="800" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;Is this the most elegant solution? Not really — ideally Antigravity would ship a proper &lt;code&gt;Linked Workspace&lt;/code&gt; feature. But as a workaround, Agent Rules gets the job done surprisingly well. It's flexible, easy to set up, and once it's in place you barely have to think about it.&lt;/p&gt;

&lt;p&gt;If you're working with separate repositories in Antigravity, give this a shot. It might just change how you build.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;#Antigravity&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>antigravity</category>
      <category>gemini</category>
      <category>programming</category>
    </item>
    <item>
      <title>Antigravity: My Approach to Deliver the Most Assured Value for the Least Money</title>
      <dc:creator>Alexander Tyutin</dc:creator>
      <pubDate>Wed, 01 Apr 2026 05:49:34 +0000</pubDate>
      <link>https://dev.to/gdg/antigravity-my-approach-to-deliver-the-most-assured-value-for-the-least-money-3iip</link>
      <guid>https://dev.to/gdg/antigravity-my-approach-to-deliver-the-most-assured-value-for-the-least-money-3iip</guid>
      <description>&lt;p&gt;As I'm not a professional developer but a guy who needs to use automation to get things done, I follow one main rule: keep it simple. Overengineering hurts. I use the Pareto rule—spend 20% of the effort to get 80% of the result. &lt;/p&gt;

&lt;p&gt;When I use AI agents like Antigravity, my goal is not to let the AI write complex code that no one can read. My goal is to build simple, secure features fast. At the same time, I control costs by saving tokens. Here is the exact workflow I use.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;You can listen to a podcast generated from this publication (thanks &lt;a href="https://notebooklm.google/" rel="noopener noreferrer"&gt;NotebookLM&lt;/a&gt;):&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/DkDfPMzXDXk"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  The Token Economy Strategy
&lt;/h2&gt;

&lt;p&gt;LLM tokens cost money. Using a smart, expensive model just to fix whitespace is not worth the cost. I switch models based on how hard the task is.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High-Tier Models:&lt;/strong&gt; These handle the big tasks: planning architecture, writing complex business logic, checking security, and estimating cloud costs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low-Tier Models:&lt;/strong&gt; These handle the simple tasks: fixing syntax errors, aligning code with Pylint, and writing standard boilerplate.&lt;/li&gt;
&lt;/ul&gt;
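
&lt;p&gt;The tiering above can be captured in a tiny routing table. This is a minimal sketch of the idea, not my actual setup; the model IDs and task categories are illustrative assumptions:&lt;/p&gt;

```python
# Sketch of tier-based model routing. The model names and task
# categories are illustrative assumptions, not a fixed recommendation.

HIGH_TIER = "gemini-2.5-pro"    # hypothetical high-tier model id
LOW_TIER = "gemini-2.5-flash"   # hypothetical low-tier model id

# Map each task category to the cheapest tier that can handle it.
TASK_TIERS = {
    "architecture": HIGH_TIER,
    "business_logic": HIGH_TIER,
    "security_review": HIGH_TIER,
    "cost_estimate": HIGH_TIER,
    "syntax_fix": LOW_TIER,
    "lint_alignment": LOW_TIER,
    "boilerplate": LOW_TIER,
}

def pick_model(task_category: str) -> str:
    """Return the model id for a task, defaulting to the cheap tier."""
    return TASK_TIERS.get(task_category, LOW_TIER)
```

&lt;p&gt;Defaulting unknown tasks to the cheap tier keeps the cost floor low; I only pay for the expensive model when a task is explicitly marked as hard.&lt;/p&gt;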

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cy5dvgylzf3892p588z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cy5dvgylzf3892p588z.png" alt="Combining Models" width="800" height="570"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Decomposition &amp;amp; In-Repo Architecture
&lt;/h2&gt;

&lt;p&gt;Large prompts can break LLMs. If a prompt has too much text, the AI gets confused and wastes tokens. To stop this, I break every task into small, separate pieces so the AI only sees what it needs.&lt;/p&gt;

&lt;p&gt;I store all architecture plans and tasks inside the code repository (for example, &lt;code&gt;./docs&lt;/code&gt;). This keeps the instructions very close to the code for the AI.&lt;/p&gt;

&lt;p&gt;Every task I write uses this strict four-part structure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Idea:&lt;/strong&gt; The main business or tech goal. &lt;em&gt;Why it matters:&lt;/em&gt; It proves the task is useful before I spend tokens on code to review.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan:&lt;/strong&gt; The technical blueprint. &lt;em&gt;Why it matters:&lt;/em&gt; It locks down the plan, keeps security high, and stops the AI from inventing bad solutions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;What Was Done:&lt;/strong&gt; A short log of the work. &lt;em&gt;Why it matters:&lt;/em&gt; It gives future AI tasks a quick summary, so the AI does not have to read every code file again.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debt:&lt;/strong&gt; A list of any technical shortcuts or "crutches" used to save time. &lt;em&gt;Why it matters:&lt;/em&gt; Hidden debt ruins the project. &lt;strong&gt;Important: My custom Quality Gate checks this section. If it finds unapproved shortcuts in the code, it blocks the release completely.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
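
&lt;p&gt;A task file kept in &lt;code&gt;./docs&lt;/code&gt; might look like the following sketch. The feature and its details are invented for illustration; only the four-part structure comes from my workflow:&lt;/p&gt;

```text
# Task 012: Add password reset endpoint

## 1. Idea
Users locked out of their accounts churn. A self-service reset
flow removes a manual support step.

## 2. Plan
- POST /auth/reset-request: validate email, create one-time token (TTL 15 min)
- POST /auth/reset-confirm: verify token, update password hash
- Rate-limit both endpoints; never reveal whether an email exists.

## 3. What Was Done
(filled in after implementation: endpoints added, tests passing)

## 4. Debt
- Token store is in-memory for now; must move to Redis before scaling.
```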

&lt;h2&gt;
  
  
  System Instructions for the AI
&lt;/h2&gt;

&lt;p&gt;To keep the AI agent aligned with my goals, I pass strict system instructions on every run. The model never has to guess my coding standards. Here are the core rules I enforce:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No Crutches:&lt;/strong&gt; Any "crutch" or technical shortcut must be approved by me. Then, the AI must document it as technical debt in the project files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No Inventing Wheels:&lt;/strong&gt; I try hard to avoid this. If a working approach already exists in another project, the AI reuses it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learn from the Past:&lt;/strong&gt; When building a new service, the AI must check the old tech debt to avoid repeating past mistakes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simple Code Only:&lt;/strong&gt; The code structure should just use standard classes. I avoid "genius-level" extreme one-line code tricks or overwhelming structures.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintainability First:&lt;/strong&gt; A middle-level, part-time developer must be able to read and maintain the code.&lt;/li&gt;
&lt;/ul&gt;
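
&lt;p&gt;Condensed into an actual system instruction, the rules above might read like this rough sketch (the exact wording is illustrative):&lt;/p&gt;

```text
You are a coding assistant for a non-professional developer.
Rules, in priority order:
1. Any technical shortcut ("crutch") requires my explicit approval and
   must then be logged as technical debt in ./docs.
2. Reuse working approaches from existing projects before writing new code.
3. Before building a new service, read the recorded tech debt and avoid
   repeating those mistakes.
4. Use plain, standard classes. No clever one-liners.
5. Optimize for a middle-level, part-time developer to maintain the code.
```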

&lt;h2&gt;
  
  
  The Core Workflow
&lt;/h2&gt;

&lt;p&gt;Every feature goes through a step-by-step process. I try to keep security and simplicity the main focus at each step.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Plan &amp;amp; The Plan Review
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Using a High-Tier Model.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Plan:&lt;/strong&gt; Defining the code structure, the security rules, the cost limits, etc. I make sure not to add to old technical debt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review:&lt;/strong&gt; I look at the plan with a "fresh eye." I do not start coding until the plan is clear and the main code snippets are sketched out.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Code &amp;amp; Code Review
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Using a Low or Mid-Tier Model for code and Mid or High-Tier Model for review.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code:&lt;/strong&gt; Implement the code exactly as planned. Use clear classes and avoid complex, one-line code tricks. A middle-level developer must be able to maintain it easily.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Review:&lt;/strong&gt; Make sure the code matches the rest of the project. I prefer another "person" to check it before I call it done.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykhd3l9aeogbmard3itc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fykhd3l9aeogbmard3itc.png" alt="Local Workflow" width="800" height="715"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Lint &amp;amp; Quality Gate
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;Using Free External Tools &amp;amp; A Custom Nanoservice.&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lint:&lt;/strong&gt; I do not pay LLMs to fix missing spaces. I use free tools like &lt;code&gt;autopep8&lt;/code&gt;, &lt;code&gt;ruff&lt;/code&gt;, and &lt;code&gt;pylint&lt;/code&gt; to save tokens.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quality Gate:&lt;/strong&gt; I built a simple nanoservice using the Vertex API. It checks the code changes against the &lt;code&gt;main&lt;/code&gt; branch. It works like an automatic review from the CTO, CISO, and CFO. It checks every line for good architecture, proper security access, and cost impact before the code goes to production. &lt;strong&gt;Why is it so important?&lt;/strong&gt; The Quality Gate is not overwhelmed by the full chat history inside the IDE. Its "fresh eye" often finds architectural and coding flaws that were missed by the IDE models, even after 6 to 9 rounds of review.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6z0v40pupvxl3w2lj2f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6z0v40pupvxl3w2lj2f.png" alt="Quality Gate at Work" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4uxzhcfoe69tdzouboe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4uxzhcfoe69tdzouboe.png" alt="Full Workflow" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;AI coding is not magic. In my experience, it requires a strict testing gate, smart model swapping, and simple design. By owning the process and letting the AI act as a typist, you can ship secure code fast. I share this approach for an open discussion on how we can build better automation.&lt;/p&gt;

</description>
      <category>antigravity</category>
      <category>development</category>
      <category>automation</category>
      <category>responsibleai</category>
    </item>
  </channel>
</rss>
