<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oni</title>
    <description>The latest articles on DEV Community by Oni (@onirestart).</description>
    <link>https://dev.to/onirestart</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3358525%2Ff45b27f2-a584-4a5d-ae14-039ce6a2cbe3.png</url>
      <title>DEV Community: Oni</title>
      <link>https://dev.to/onirestart</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/onirestart"/>
    <language>en</language>
    <item>
      <title>The Cloud Just Got a Brain. Google NEXT '26 Changed Everything.</title>
      <dc:creator>Oni</dc:creator>
      <pubDate>Tue, 28 Apr 2026 16:06:00 +0000</pubDate>
      <link>https://dev.to/onirestart/the-cloud-just-got-a-brain-google-next-26-changed-everything-5fhk</link>
      <guid>https://dev.to/onirestart/the-cloud-just-got-a-brain-google-next-26-changed-everything-5fhk</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-cloud-next-2026-04-22"&gt;Google Cloud NEXT Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpxjtst24eil1zgvyqgj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpxjtst24eil1zgvyqgj.png" alt="Cover" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;"The Agentic Cloud."&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Two words. Thomas Kurian said them. And 40,000 developers in Las Vegas went quiet.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;I set an alarm for a keynote.&lt;/p&gt;

&lt;p&gt;That has never happened before in my life.&lt;/p&gt;

&lt;p&gt;But April 22, 2026 felt different. &lt;strong&gt;Google Cloud NEXT '26&lt;/strong&gt; was not just another product dump. It was a declaration. A line drawn in the sand.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The cloud stopped being a storage locker. It became a thinking, acting, collaborating system.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Here is what happened -- and why it matters to every developer alive right now.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzs0vzzfds0vgl2kk8bl.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzs0vzzfds0vgl2kk8bl.gif" alt="Mind blown gif" width="350" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;// CHAPTER ONE&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Two Words That Started It All&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;code&gt;"The Agentic Cloud."&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Not AI-powered. Not AI-assisted.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Agentic.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Meaning: the cloud does not wait for you anymore.&lt;/p&gt;

&lt;p&gt;It &lt;em&gt;plans&lt;/em&gt;. It &lt;em&gt;acts&lt;/em&gt;. It &lt;em&gt;corrects itself&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;You give it a goal. It builds the path.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Think of it this way -- you stop being the driver. You become the destination.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is Google's bet for the next decade of computing. And after watching the entire keynote, I think they are right.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;// CHAPTER TWO&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vertex AI is Gone. Meet Its Replacement.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxuvg2xf9pcylz0tcxnu.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxuvg2xf9pcylz0tcxnu.gif" alt="Transformation gif" width="400" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;code&gt;Vertex AI&lt;/code&gt; -- the platform millions of developers have been using -- got a full rebrand and rebuild.&lt;/p&gt;

&lt;p&gt;It is now called the &lt;strong&gt;Gemini Enterprise Agent Platform&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And it is &lt;em&gt;not&lt;/em&gt; just a name change.&lt;/p&gt;

&lt;p&gt;Here is what actually changed:&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;What Was There&lt;/th&gt;
&lt;th&gt;What Is There Now&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Vertex AI (single brand)&lt;/td&gt;
&lt;td&gt;Gemini Enterprise Agent Platform&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;~30 model options&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;200+ models&lt;/strong&gt; including Claude&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Manual API integrations&lt;/td&gt;
&lt;td&gt;Managed &lt;strong&gt;MCP servers&lt;/strong&gt; built-in&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Isolated agents&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Agent2Agent (A2A) protocol&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;A2A protocol&lt;/strong&gt; is the quietly massive one.&lt;/p&gt;

&lt;p&gt;It means agents built on Gemini can &lt;em&gt;talk&lt;/em&gt; to agents built on Claude. Or any other model. In an open, standardized way.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Before A2A:
Gemini Agent ----X---- Claude Agent   (incompatible)

# After A2A:
Gemini Agent &amp;lt;---A2A---&amp;gt; Claude Agent &amp;lt;---A2A---&amp;gt; Your Custom Agent
               seamless. open. production-ready.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No more building walls between your AI tools.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The whole ecosystem just became one conversation.&lt;/em&gt;&lt;/p&gt;
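&lt;p&gt;To make the interoperability idea concrete, here is a minimal Python sketch of an A2A-style exchange. Every name below (&lt;code&gt;A2AMessage&lt;/code&gt;, &lt;code&gt;Agent&lt;/code&gt;, the fields) is an illustrative stand-in, not the real A2A SDK:&lt;/p&gt;

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an A2A-style exchange.
# None of these names come from the real A2A SDK;
# they only illustrate the "shared envelope" idea.

@dataclass
class A2AMessage:
    """A model-agnostic task envelope both agents understand."""
    sender: str
    recipient: str
    task: str
    artifacts: dict = field(default_factory=dict)

class Agent:
    def __init__(self, name: str, model: str):
        self.name = name
        self.model = model  # "gemini", "claude", anything

    def handle(self, msg: A2AMessage) -> A2AMessage:
        # A real agent would call its own model here;
        # we just record who did the work.
        result = f"{msg.task} (done by {self.name}/{self.model})"
        return A2AMessage(self.name, msg.sender, msg.task,
                          artifacts={"result": result})

# Agents on different models speak the same envelope:
research = Agent("researcher", "gemini")
writer = Agent("writer", "claude")

msg = A2AMessage(sender=research.name, recipient=writer.name,
                 task="draft summary")
reply = writer.handle(msg)
print(reply.artifacts["result"])  # draft summary (done by writer/claude)
```

&lt;p&gt;The point is the envelope: once both sides agree on the message shape, the model behind each agent becomes an implementation detail.&lt;/p&gt;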

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;// CHAPTER THREE&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Agent That Browses the Internet For You&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqkxff3rxx9tsl2utxqe.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqkxff3rxx9tsl2utxqe.gif" alt="Surfing the web gif" width="360" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;This is the one I could not stop thinking about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Mariner.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Google DeepMind built an AI agent that &lt;em&gt;uses the web the way you do&lt;/em&gt; -- it reads pages, clicks buttons, fills forms, and completes tasks.&lt;/p&gt;

&lt;p&gt;The numbers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;83.5%&lt;/strong&gt; score on the WebVoyager benchmark&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10 tasks running simultaneously&lt;/strong&gt; on cloud VMs&lt;/li&gt;
&lt;li&gt;Handles shopping, research, form-filling -- all in the background&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;You open your laptop. You tell Mariner what you need. You go make chai. It is done by the time you are back.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The roadmap is already laid out:&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Q2 2026&lt;/strong&gt; -- &lt;code&gt;Mariner Studio&lt;/code&gt; launches (visual builder for web agents)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3 2026&lt;/strong&gt; -- Cross-device sync (your agents follow you everywhere)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q4 2026&lt;/strong&gt; -- Agent marketplace (buy, sell, and share agents like apps)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;This is not a research project anymore. This is &lt;em&gt;infrastructure&lt;/em&gt;.&lt;/p&gt;
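&lt;p&gt;The "10 tasks at once" model maps onto ordinary fan-out concurrency. Here is a hedged sketch of that pattern using only Python's standard library; &lt;code&gt;run_web_task&lt;/code&gt; is a stand-in for whatever Mariner's real API exposes:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of Mariner-style fan-out. run_web_task is a stand-in:
# the real agent drives a browser on a cloud VM instead.
def run_web_task(task: str) -> str:
    return f"completed: {task}"

tasks = [
    "compare flight prices under $400",
    "collect hotel reviews near the venue",
    "fill out the conference feedback form",
]

# Mariner caps concurrency at 10 tasks; mirror that here.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(run_web_task, tasks))

for r in results:
    print(r)
```

&lt;p&gt;Each task runs independently and results come back in submission order, which is exactly the "kick it off and walk away" workflow the demo showed.&lt;/p&gt;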

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;// CHAPTER FOUR&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Chip Built For This Exact Moment&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwfnyr2y5yedqlegin0z.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhwfnyr2y5yedqlegin0z.gif" alt="Rocket launch gif" width="500" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Every software revolution needs new hardware under it.&lt;/p&gt;

&lt;p&gt;Google delivered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8th generation TPUs.&lt;/strong&gt; Two flavors, each purpose-built:&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;&lt;code&gt;TPU 8t&lt;/code&gt;&lt;/em&gt;&lt;/strong&gt; -- built for &lt;strong&gt;training&lt;/strong&gt;. Frontier models. The heavy lifting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;&lt;code&gt;TPU 8i&lt;/code&gt;&lt;/em&gt;&lt;/strong&gt; -- built for &lt;strong&gt;inference&lt;/strong&gt;. Real-time. Low latency. Production workloads.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Both are hosted on Google's own &lt;strong&gt;Axion ARM-based processors&lt;/strong&gt; for the first time. Chip-to-API, fully co-designed.&lt;/p&gt;

&lt;p&gt;The result is hard to argue with:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Gemini 2.0 Flash&lt;/em&gt;&lt;/strong&gt; achieves &lt;strong&gt;24x higher intelligence per dollar&lt;/strong&gt; vs GPT-4o.&lt;/p&gt;

&lt;p&gt;And &lt;strong&gt;5x higher&lt;/strong&gt; than DeepSeek R1.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;em&gt;That is not a benchmark slide. That is a cost structure shift for every team building on AI.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;// CHAPTER FIVE&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One App. Every Employee. No More Tool Chaos.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhjbc01d3kndr3k0yprh.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhjbc01d3kndr3k0yprh.gif" alt="Teamwork gif" width="370" height="208"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Google also killed the fragmentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentspace&lt;/strong&gt; got absorbed. The result is a unified product called &lt;strong&gt;Gemini Enterprise&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;One interface. For everyone. For everything.&lt;/p&gt;

&lt;p&gt; &lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# The old reality for enterprise teams:
&lt;/span&gt;
&lt;span class="n"&gt;search&lt;/span&gt;   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GoogleSearch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;assistant&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DuetAI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;agents&lt;/span&gt;    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;VertexAgentspace&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;connectors&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ManualAPISetup&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;  &lt;span class="c1"&gt;# weeks of work
&lt;/span&gt;
&lt;span class="c1"&gt;# The new reality:
&lt;/span&gt;
&lt;span class="n"&gt;gemini_enterprise&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GeminiEnterprise&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="c1"&gt;# search + assistant + agents + 50+ connectors
# all in one. ships in days.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;It connects out of the box to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Confluence&lt;/strong&gt; and &lt;strong&gt;Jira&lt;/strong&gt; (for eng teams)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SharePoint&lt;/strong&gt; and &lt;strong&gt;ServiceNow&lt;/strong&gt; (for enterprise ops)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BigQuery&lt;/strong&gt; (for data teams)&lt;/li&gt;
&lt;li&gt;And more rolling out through 2026&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;No custom integrations. No brittle webhooks. Just connect and go.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;// INTERLUDE&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Announcement Everyone Missed&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faylf03zfyrgg4s28jljw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faylf03zfyrgg4s28jljw.gif" alt="Secret gif" width="300" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Everyone was talking about Mariner and the A2A protocol.&lt;/p&gt;

&lt;p&gt;Almost nobody mentioned &lt;strong&gt;managed MCP servers&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;MCP stands for &lt;strong&gt;Model Context Protocol&lt;/strong&gt;. It is the standard that lets AI models securely plug into your data and tools.&lt;/p&gt;

&lt;p&gt;Google is now offering &lt;em&gt;managed&lt;/em&gt; MCP servers for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Google Security Operations&lt;/li&gt;
&lt;li&gt;Google Workspace&lt;/li&gt;
&lt;li&gt;BigQuery&lt;/li&gt;
&lt;li&gt;More through 2026&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What this means in practice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Old way -- weeks of custom integration work:&lt;/span&gt;
curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST https://your-api.com/endpoint &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer &lt;/span&gt;&lt;span class="nv"&gt;$TOKEN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"query": "show me threats from last week"}'&lt;/span&gt;
&lt;span class="c"&gt;# ...debug, retry, maintain forever&lt;/span&gt;

&lt;span class="c"&gt;# New way -- one line, managed, secure, done:&lt;/span&gt;
agent.connect&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;mcp_server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"google-security-ops"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
agent.ask&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Summarize critical threats from the last 7 days"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Security teams will sleep better. Developers will ship faster.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the unsexy announcement that changes the most workflows.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;// CHAPTER SIX&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Real Take. No Hype.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4rrcopfkefk3a2jhln0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4rrcopfkefk3a2jhln0.gif" alt="Thinking gif" width="500" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;I have been building with AI since GPT-3 in 2020.&lt;/p&gt;

&lt;p&gt;I have watched the hype cycles.&lt;/p&gt;

&lt;p&gt;I know the difference between a keynote flex and an actual shift.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;This felt like an actual shift.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here is why I believe it:&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The stack is complete&lt;/strong&gt; -- for the first time, hardware (TPU 8t/8i) + runtime (Gemini Agent Platform) + protocol (A2A + MCP) + interface (Gemini Enterprise) are all aligned and &lt;em&gt;shipping together&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The open standards are real&lt;/strong&gt; -- A2A and MCP are not proprietary traps. Google is betting on ecosystem growth. That is a confident, mature move.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The DX is genuinely better&lt;/strong&gt; -- 200+ models, no-code builders &lt;em&gt;and&lt;/em&gt; pro-code APIs, managed infra for the messy parts. This is developer-first done properly.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;But I am still watching a few things closely:&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;A2A in production&lt;/strong&gt; -- beautiful in theory. I want to see Claude agents and Gemini agents passing real complex state at scale before I fully trust it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mariner error handling&lt;/strong&gt; -- 10 concurrent tasks is cool. What happens at 1,000 when one fails mid-flow?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP governance in regulated industries&lt;/strong&gt; -- healthcare and finance will ask very hard questions about access logs, data residency, and auditability.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;These are not dealbreakers. They are the right questions.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Skepticism is how we build better things.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;// START HERE&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Your Getting Started Map&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls2otyqir0wlr4gahnur.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls2otyqir0wlr4gahnur.gif" alt="Let's go gif" width="480" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;For Agent Builders:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Explore the &lt;a href="https://cloud.google.com/gemini-enterprise" rel="noopener noreferrer"&gt;Gemini Enterprise Agent Platform&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Read the &lt;a href="https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/" rel="noopener noreferrer"&gt;A2A Protocol spec&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Try the no-code agent builder inside Google Workspace today&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;For ML Engineers:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Check out the &lt;a href="https://cloud.google.com/resources/tpu-interest" rel="noopener noreferrer"&gt;8th Gen TPU details&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Run your inference workload benchmarks against Gemini 2.0 Flash&lt;/li&gt;
&lt;li&gt;Explore the 200+ model catalog -- Claude, Gemini, and more in one place&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;For Security Teams:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Look into MCP server support for &lt;a href="https://cloud.google.com/security" rel="noopener noreferrer"&gt;Google Security Operations&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Start building custom security agents with the new agent builder&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;// END CREDITS&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Personal Note&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;I build AI tools for a living.&lt;/p&gt;

&lt;p&gt;I watch this space every single day.&lt;/p&gt;

&lt;p&gt;But this morning -- April 22, 2026, watching Thomas Kurian on a livestream from Kolkata -- I felt something I do not feel often.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The ground got more solid.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The stack that developers have been waiting for is finally here. The protocols are open. The hardware is purpose-built. The developer experience is genuinely better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;The agentic era did not announce itself with fireworks.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;It arrived quietly, in Las Vegas, on a Wednesday morning.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And I think the developers who start building on it today are going to look very smart in 18 months.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgagro6i73fjmz1sg5umq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgagro6i73fjmz1sg5umq.gif" alt="Standing ovation gif" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What was the announcement that hit you hardest from NEXT '26?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Drop it below. I am reading every comment.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/11PBno-cJ1g"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>cloudnextchallenge</category>
      <category>googlecloud</category>
      <category>ai</category>
    </item>
    <item>
      <title>I watched AI Agents Take Over the Cloud Live from Google NEXT '26, and Nothing Will Be the Same</title>
      <dc:creator>Oni</dc:creator>
      <pubDate>Sat, 25 Apr 2026 13:09:00 +0000</pubDate>
      <link>https://dev.to/onirestart/i-watched-ai-agents-take-over-the-cloud-live-from-google-next-26-and-nothing-will-be-the-same-4l0b</link>
      <guid>https://dev.to/onirestart/i-watched-ai-agents-take-over-the-cloud-live-from-google-next-26-and-nothing-will-be-the-same-4l0b</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-cloud-next-2026-04-22"&gt;Google Cloud NEXT Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3y6rv2w78o504a4xu0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd3y6rv2w78o504a4xu0v.png" alt="Imag iption" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Las Vegas, April 22, 2026. The lights are bright. The room holds thousands of developers. And up on stage, Google is about to change the way we think about software forever.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Morning Everything Shifted
&lt;/h2&gt;

&lt;p&gt;I woke up at 6:30 AM just to watch a keynote.&lt;/p&gt;

&lt;p&gt;That sentence alone should tell you something.&lt;/p&gt;

&lt;p&gt;I have watched hundreds of tech keynotes over the years. Product launches. "One more thing" moments. Slides full of benchmarks. But &lt;strong&gt;Google Cloud NEXT '26&lt;/strong&gt; felt different from the first minute.&lt;/p&gt;

&lt;p&gt;Google Cloud CEO Thomas Kurian walked onto the stage and said two words that set the tone for everything that followed:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;"The Agentic Cloud."&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Not AI-assisted. Not AI-powered. &lt;em&gt;Agentic.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Meaning: the cloud does not just store your data or run your code anymore. It &lt;strong&gt;acts&lt;/strong&gt;. It decides. It coordinates. It works while you sleep.&lt;/p&gt;

&lt;p&gt;This is the shift I have been waiting to see articulated clearly, and Google just drew the map.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Even Is an "Agentic Cloud"
&lt;/h2&gt;

&lt;p&gt;Let me break this down in plain language before we go deep.&lt;/p&gt;

&lt;p&gt;Traditional cloud = you write code, deploy it, it runs when called.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentic cloud&lt;/strong&gt; = you describe a &lt;em&gt;goal&lt;/em&gt;, and a network of AI agents figures out the steps, executes them, monitors results, and corrects itself.&lt;/p&gt;

&lt;p&gt;Think of it like the difference between hiring a contractor who waits for your instructions versus hiring a project manager who runs the whole thing and only loops you in when needed.&lt;/p&gt;
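&lt;p&gt;That plan, execute, monitor, correct loop is easy to sketch in plain Python. Everything here is illustrative pseudo-logic, not a real Google API:&lt;/p&gt;

```python
# Minimal sketch of the agentic plan/act/check loop described above.
# plan(), act(), and check() are illustrative stand-ins, not a real API.

def plan(goal: str) -> list:
    """The agent decomposes a goal into concrete steps."""
    return [f"step {i} of {goal}" for i in (1, 2, 3)]

def act(step: str) -> dict:
    """Execute one step and report the outcome."""
    return {"step": step, "ok": True}

def check(result: dict) -> bool:
    """Monitor the result before moving on."""
    return result["ok"]

def run_agent(goal: str) -> list:
    done = []
    for step in plan(goal):
        result = act(step)
        if not check(result):   # self-correct: retry a failed step once
            result = act(step)
        done.append(result["step"])
    return done

print(run_agent("migrate the database"))
```

&lt;p&gt;You hand over the goal; the loop owns the steps. That is the whole mental-model shift in one function.&lt;/p&gt;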

&lt;p&gt;Google is betting the entire next era of cloud computing on this model.&lt;/p&gt;

&lt;p&gt;And based on what they showed at NEXT '26, the bet is already paying off.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Announcements That Stopped Me Mid-Coffee
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Vertex AI is Dead. Long Live the Gemini Enterprise Agent Platform.
&lt;/h3&gt;

&lt;p&gt;This was the biggest rename in Google Cloud history, and it was not just cosmetic.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Vertex AI&lt;/code&gt; has been rebranded and rebuilt as the &lt;strong&gt;Gemini Enterprise Agent Platform&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;What changed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Over &lt;strong&gt;200 models&lt;/strong&gt; available, including third-party ones like &lt;strong&gt;Anthropic's Claude&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;A visual, &lt;strong&gt;no-code agent builder&lt;/strong&gt; for Google Workspace&lt;/li&gt;
&lt;li&gt;Managed &lt;strong&gt;MCP (Model Context Protocol) servers&lt;/strong&gt; across Google Cloud services&lt;/li&gt;
&lt;li&gt;Production-grade &lt;strong&gt;Agent2Agent (A2A) protocol&lt;/strong&gt; for cross-platform agent communication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The A2A protocol is the piece that matters most to me as a developer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# What A2A enables:
Agent_A (built on Gemini) &amp;lt;---&amp;gt; Agent_B (built on Claude) &amp;lt;---&amp;gt; Agent_C (custom model)
     All communicating in a shared, open standard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No more vendor lock-in at the agent layer. This is huge.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"We are leading the industry with open standards like the Agent2Agent protocol, ensuring agents can communicate and interoperate regardless of their underlying model or platform."&lt;/em&gt;&lt;br&gt;
-- Google Cloud documentation&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  2. Meet Project Mariner: The Agent That Browses the Web For You
&lt;/h3&gt;

&lt;p&gt;This one made me put my coffee down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Mariner&lt;/strong&gt; is Google DeepMind's web-browsing AI agent, powered by &lt;strong&gt;Gemini 2.0&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Here is what it can do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scores &lt;strong&gt;83.5% on the WebVoyager benchmark&lt;/strong&gt; (the standard test for web agents)&lt;/li&gt;
&lt;li&gt;Handles &lt;strong&gt;10 concurrent tasks&lt;/strong&gt; simultaneously on cloud-based virtual machines&lt;/li&gt;
&lt;li&gt;Automates shopping, form-filling, and information retrieval&lt;/li&gt;
&lt;li&gt;Runs &lt;strong&gt;in the background&lt;/strong&gt; while you do other work&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The roadmap alone is exciting:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Quarter&lt;/th&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Q2 2026&lt;/td&gt;
&lt;td&gt;Mariner Studio (visual builder)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Q3 2026&lt;/td&gt;
&lt;td&gt;Cross-device synchronization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Q4 2026&lt;/td&gt;
&lt;td&gt;Agent marketplace&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Imagine telling your AI: &lt;em&gt;"Book me a flight under $400, compare hotel reviews in the area, and add the best option to my calendar."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That is not science fiction anymore. That is &lt;strong&gt;Project Mariner on a Wednesday morning&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  3. The Chip That Powers It All: 8th Gen TPU
&lt;/h3&gt;

&lt;p&gt;Every software leap needs a hardware foundation.&lt;/p&gt;

&lt;p&gt;Google announced their &lt;strong&gt;8th generation TPU family&lt;/strong&gt; with two purpose-built architectures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;TPU 8t&lt;/code&gt; -- optimized for &lt;strong&gt;training&lt;/strong&gt; frontier models&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;TPU 8i&lt;/code&gt; -- optimized for &lt;strong&gt;real-time inference&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both are hosted on Google's own &lt;strong&gt;Axion ARM-based processors&lt;/strong&gt; for the first time, creating a fully co-designed stack from chip to API.&lt;/p&gt;

&lt;p&gt;This is not just a speed upgrade. It is a philosophy shift: &lt;em&gt;specialized hardware for specialized workloads&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The result: &lt;strong&gt;Gemini 2.0 Flash&lt;/strong&gt; on this infrastructure delivers &lt;strong&gt;24x the intelligence per dollar&lt;/strong&gt; of GPT-4o, and &lt;strong&gt;5x that of DeepSeek R1&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Those numbers are hard to ignore when you are building production applications at scale.&lt;/p&gt;




&lt;h3&gt;
  
  
  4. Gemini Enterprise: One Product, Every Employee
&lt;/h3&gt;

&lt;p&gt;Google also consolidated &lt;strong&gt;Google Agentspace&lt;/strong&gt; into a unified product called &lt;strong&gt;Gemini Enterprise&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;What this means for developers and businesses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Single interface&lt;/strong&gt; for intranet search, AI assistance, and agentic workflows&lt;/li&gt;
&lt;li&gt;Prebuilt connectors for Confluence, Jira, SharePoint, ServiceNow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No-code agent creation&lt;/strong&gt; in Google Workspace&lt;/li&gt;
&lt;li&gt;Custom agents deployable in days, not months&lt;/li&gt;
&lt;li&gt;Multimodal search across all your organization's data
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# The old way:
&lt;/span&gt;&lt;span class="n"&gt;search_tool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;IntranetSearch&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;ai_assistant&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DuetAI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;agent_builder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;VertexAI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="c1"&gt;# 3 separate products, 3 separate learning curves
&lt;/span&gt;
&lt;span class="c1"&gt;# The new way:
&lt;/span&gt;&lt;span class="n"&gt;gemini_enterprise&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GeminiEnterprise&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="c1"&gt;# One platform. Everything connected.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The consolidation removes friction. And in enterprise software, friction is the enemy of adoption.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Part Nobody Is Talking About: MCP Servers
&lt;/h2&gt;

&lt;p&gt;Buried in the announcements but enormous in impact: &lt;strong&gt;managed MCP servers&lt;/strong&gt; across Google Cloud services.&lt;/p&gt;

&lt;p&gt;MCP stands for Model Context Protocol. It is the standard that lets AI models connect to external tools and data sources in a secure, structured way.&lt;/p&gt;

&lt;p&gt;Google is now offering &lt;strong&gt;managed MCP servers&lt;/strong&gt; for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Google Security Operations&lt;/li&gt;
&lt;li&gt;Google Workspace&lt;/li&gt;
&lt;li&gt;BigQuery&lt;/li&gt;
&lt;li&gt;And more services rolling out through 2026&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means you can build a custom security agent, point it at your Google Security Operations MCP server, and it instantly has context-aware access to your threat data.&lt;/p&gt;

&lt;p&gt;No custom API integrations. No brittle webhooks. Just:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Connect your agent to Google Security Operations&lt;/span&gt;
agent.connect&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;mcp_server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"google-security-operations"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
agent.run&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Analyze all anomalies from the last 7 days and summarize critical threats"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Clean. Powerful. The kind of thing that makes security engineers sleep better.&lt;/p&gt;




&lt;h2&gt;
  
  
  My Honest Take: What This Means for Developers Like Us
&lt;/h2&gt;

&lt;p&gt;I have been building with AI tools since the early days of GPT-3. I have seen the hype cycles. I have also seen the real breakthroughs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NEXT '26 felt like a real breakthrough moment.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here is why I believe that:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The stack is finally complete.&lt;/strong&gt; Hardware (TPU 8t/8i), runtime (Gemini Enterprise Agent Platform), protocol (A2A + MCP), and interface (Gemini Enterprise app) are all aligned and shipping together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The open standards matter.&lt;/strong&gt; A2A and MCP are not proprietary lock-in plays. They are Google betting on ecosystem growth over short-term control. That is a mature, confident move.&lt;/p&gt;
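&lt;p&gt;If you have not looked at A2A yet, the core artifact is the "agent card": a JSON document an agent publishes so other agents can discover what it can do. The field names below follow the early spec, so verify against the current version; the agent itself is made up.&lt;/p&gt;

```python
import json

# Rough shape of an A2A agent card (check the current spec before
# relying on exact field names). The agent described here is fictional.
agent_card = {
    "name": "threat-triage-agent",
    "description": "Summarizes and prioritizes security anomalies",
    "url": "https://agents.example.com/threat-triage",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "summarize-threats",
            "name": "Summarize threats",
            "description": "Condense raw anomaly data into a ranked report",
        }
    ],
}

# Any A2A-speaking agent can fetch this card and decide whether to delegate.
card_json = json.dumps(agent_card, indent=2)
```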

&lt;p&gt;&lt;strong&gt;The developer experience is genuinely better.&lt;/strong&gt; 200+ models in one place. No-code builders alongside pro-code APIs. Managed infrastructure for the messy parts. This is what developer-first actually looks like.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Things I Am Still Watching
&lt;/h2&gt;

&lt;p&gt;Not everything from NEXT '26 has me fully convinced yet.&lt;/p&gt;

&lt;p&gt;A few honest questions I am sitting with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A2A interoperability&lt;/strong&gt; sounds great in theory. Does it hold up when Claude agents and Gemini agents are actually passing complex state to each other in production?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Project Mariner at scale&lt;/strong&gt; -- 10 concurrent tasks is impressive, but enterprise workflows often involve 100x that. What happens to error handling and recovery at that volume?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MCP server governance&lt;/strong&gt; -- who controls access, who logs what, and how does this work in regulated industries like healthcare or finance?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not dealbreakers. They are the right questions to be asking as we move from keynote excitement to production reality.&lt;/p&gt;




&lt;h2&gt;
  
  
  Getting Started Right Now
&lt;/h2&gt;

&lt;p&gt;If you want to dive in today, here is your starting map:&lt;/p&gt;

&lt;h3&gt;
  
  
  For Agent Builders
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Explore the &lt;a href="https://cloud.google.com/gemini-enterprise" rel="noopener noreferrer"&gt;Gemini Enterprise Agent Platform&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Read the &lt;a href="https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/" rel="noopener noreferrer"&gt;Agent2Agent Protocol spec&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Try the no-code agent builder in Google Workspace&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  For ML Engineers
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Check out the &lt;a href="https://cloud.google.com/resources/tpu-interest" rel="noopener noreferrer"&gt;8th Gen TPU page&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Benchmark Gemini 2.0 Flash for your inference workloads&lt;/li&gt;
&lt;li&gt;Explore the 200+ model catalog on the new platform&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  For Security Teams
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Look into MCP server support for &lt;a href="https://cloud.google.com/security" rel="noopener noreferrer"&gt;Google Security Operations&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Test the new security agent builder capabilities&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  A Personal Note
&lt;/h2&gt;

&lt;p&gt;I build AI content and tools for a living. I watch this space every single day.&lt;/p&gt;

&lt;p&gt;But watching the &lt;strong&gt;Google Cloud NEXT '26 opening keynote&lt;/strong&gt; this morning gave me a feeling I do not get often: the sense that the foundation just got a lot more solid under my feet.&lt;/p&gt;

&lt;p&gt;The agentic era is not coming. &lt;em&gt;It arrived this morning in Las Vegas.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And for developers who are ready to build on it, the tools have never been better.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What announcement from NEXT '26 has you most excited?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Drop it in the comments. I read every single one.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Tags: &lt;code&gt;googlecloud&lt;/code&gt; &lt;code&gt;gemini&lt;/code&gt; &lt;code&gt;ai&lt;/code&gt; &lt;code&gt;agents&lt;/code&gt; &lt;code&gt;devops&lt;/code&gt; &lt;code&gt;machinelearning&lt;/code&gt; &lt;code&gt;cloudnextchallenge&lt;/code&gt; &lt;code&gt;devchallenge&lt;/code&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/11PBno-cJ1g"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>cloudnextchallenge</category>
      <category>googlecloud</category>
      <category>ai</category>
    </item>
    <item>
      <title>Everyone Talked About Gemini. Nobody Talked About the Thing That Will Actually Change Your Work.</title>
      <dc:creator>Oni</dc:creator>
      <pubDate>Wed, 22 Apr 2026 19:27:22 +0000</pubDate>
      <link>https://dev.to/onirestart/everyone-talked-about-gemini-nobody-talked-about-the-thing-that-will-actually-change-your-work-lof</link>
      <guid>https://dev.to/onirestart/everyone-talked-about-gemini-nobody-talked-about-the-thing-that-will-actually-change-your-work-lof</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-cloud-next-2026-04-22"&gt;Google Cloud NEXT Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;It is 11 PM in Kolkata. The keynote ended hours ago.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;My Twitter feed is full of "Gemini is insane" and "A2A protocol is huge."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;And I am sitting here thinking about something nobody seems to be writing about.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;I build AI tools and content for a living.&lt;/p&gt;

&lt;p&gt;I have watched every major AI announcement since GPT-3 dropped in 2020.&lt;/p&gt;

&lt;p&gt;I know the difference between a slide deck flex and something that &lt;em&gt;actually&lt;/em&gt; changes how I work next Monday.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The MCP servers announcement from Google Cloud NEXT '26 is the second kind.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And almost no one is talking about it.&lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2tt0xes36pzbob90eee.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc2tt0xes36pzbob90eee.gif" alt="Wait what gif" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;before i explain -- let me tell you what i used to do&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Every time I wanted an AI agent to talk to an external service -- a database, a security dashboard, a calendar -- I had to build the bridge myself.&lt;/p&gt;

&lt;p&gt;Custom API calls. Auth tokens stored somewhere sketchy. Error handling that breaks at 2 AM. A webhook that works perfectly in staging and explodes in production.&lt;/p&gt;

&lt;p&gt;I would spend &lt;em&gt;days&lt;/em&gt; building the plumbing before I could even start building the thing I actually wanted.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Sound familiar?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That is the tax every developer pays. The invisible work. The part nobody puts in the demo.&lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0xnqbktvjuz9aq0o99b.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz0xnqbktvjuz9aq0o99b.gif" alt="Building infrastructure gif" width="480" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;what mcp actually is -- no jargon&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;MCP&lt;/strong&gt; stands for &lt;strong&gt;Model Context Protocol&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Think of it like USB-C for AI agents.&lt;/p&gt;

&lt;p&gt;Before USB-C, every device had a different port. You needed a different cable for everything. It was a mess.&lt;/p&gt;

&lt;p&gt;MCP is the standardized port. It is the agreed-upon interface that lets AI agents plug into data sources and tools &lt;em&gt;without custom wiring every single time.&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Your agent describes what it needs. The MCP server provides it. Securely. Consistently. Without you writing 200 lines of integration code.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
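&lt;p&gt;On the wire, that standardized port is JSON-RPC 2.0. The envelope below is the protocol's; the tool name and its arguments are made up for illustration.&lt;/p&gt;

```python
import json

# What MCP traffic looks like: plain JSON-RPC 2.0 envelopes.
# "search_threats" is a hypothetical tool; the envelope shape is the protocol's.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_threats",         # hypothetical tool name
        "arguments": {"window_days": 7},  # hypothetical arguments
    },
}

# One client implementation works against any compliant server --
# that is the whole USB-C argument in two dictionaries.
wire = json.dumps(call_request)
```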

&lt;p&gt;Google did not invent MCP. But what they announced at NEXT '26 is something different.&lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;what google just changed&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Google announced &lt;strong&gt;managed MCP servers&lt;/strong&gt; -- running natively inside Google Cloud -- for:&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Service&lt;/th&gt;
&lt;th&gt;What it means for you&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Google Security Operations&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Agents can query your threat data without custom auth&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Google Workspace&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Agents read your docs, calendar, email -- securely&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;BigQuery&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Agents run analytics queries as a natural conversation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Before this: you had to &lt;em&gt;build&lt;/em&gt; the MCP server yourself, host it, maintain it, handle auth, deal with rate limits, monitor it.&lt;/p&gt;

&lt;p&gt;After this: Google runs it. You just point your agent at it and go.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# The old way -- days of work:&lt;/span&gt;
&lt;span class="c"&gt;# 1. Build OAuth flow for Google Workspace&lt;/span&gt;
&lt;span class="c"&gt;# 2. Set up token refresh logic&lt;/span&gt;
&lt;span class="c"&gt;# 3. Write endpoint wrappers for Docs, Calendar, Gmail&lt;/span&gt;
&lt;span class="c"&gt;# 4. Handle errors, retries, rate limits&lt;/span&gt;
&lt;span class="c"&gt;# 5. Deploy and monitor forever&lt;/span&gt;
&lt;span class="c"&gt;# -- 3 days minimum. Ongoing maintenance. --&lt;/span&gt;

&lt;span class="c"&gt;# The new way -- one line:&lt;/span&gt;
agent.connect&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;mcp_server&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"google-workspace"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
agent.ask&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"Summarize all unread emails from the last 48 hours and add any deadlines to my calendar"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="c"&gt;# Done. In production. Today.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;That is not a minor improvement. That is the removal of an entire category of work.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzs0vzzfds0vgl2kk8bl.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzs0vzzfds0vgl2kk8bl.gif" alt="Mind blown gif" width="350" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;why this matters more than the gemini rename&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Everyone is writing about &lt;code&gt;Vertex AI&lt;/code&gt; becoming the &lt;strong&gt;Gemini Enterprise Agent Platform&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It is a big deal. 200+ models. A2A protocol. No vendor lock-in at the agent layer. I get it.&lt;/p&gt;

&lt;p&gt;But here is my honest take:&lt;/p&gt;

&lt;p&gt;The rename changes your &lt;em&gt;options&lt;/em&gt;. Managed MCP servers change your &lt;em&gt;daily workflow&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The difference between those two things is enormous.&lt;/p&gt;

&lt;p&gt;Options sit in a docs page until you need them. Workflow changes land in your backlog on Monday morning.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;I counted. The managed MCP server for Google Security Operations alone could replace about 2 weeks of integration work I have personally done in the last year. That is real hours. Real money. Real focus time redirected toward the actual product.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;my honest critique -- because hype helps nobody&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;I want to love this completely. I cannot yet. Here is what I am still watching:&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The governance question is unanswered.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Who controls access to the MCP server? Where are the access logs? What happens when an agent reads something it should not?&lt;/p&gt;

&lt;p&gt;For teams in regulated industries -- healthcare, finance, legal -- this is not a minor concern. It is a blocker. Google has not given detailed answers yet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The pricing is still unclear.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;"Managed" usually means "metered." I do not know yet if this becomes expensive at scale. Worth watching before you architect your entire product around it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The vendor dependency is real.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MCP is an open protocol. But Google's &lt;em&gt;managed&lt;/em&gt; MCP servers are Google's infrastructure. If you build deep integrations with these, switching costs go up. Eyes open.&lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4rrcopfkefk3a2jhln0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk4rrcopfkefk3a2jhln0.gif" alt="Thinking carefully gif" width="500" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;what i am building with this&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;I am an AI content creator. I spend about 3 hours every week on a completely manual process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check analytics across 3 platforms&lt;/li&gt;
&lt;li&gt;Pull engagement numbers into a spreadsheet&lt;/li&gt;
&lt;li&gt;Write a weekly performance summary&lt;/li&gt;
&lt;li&gt;Update my content calendar&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;That is 3 hours of paste, format, paste, format.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;With a Gemini agent connected to Google Workspace via managed MCP, I can describe that workflow once and never do it manually again.&lt;/p&gt;

&lt;p&gt;Not someday. Now.&lt;/p&gt;

&lt;p&gt;That is the part that made me sit up straight during the keynote.&lt;/p&gt;
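&lt;p&gt;Concretely, "describe it once" is a prompt plus a schedule. The scheduling API in this sketch is hypothetical; the instruction text is the actual job I do by hand today.&lt;/p&gt;

```python
# Hypothetical sketch: the scheduling API is invented, but the
# instruction is the real weekly job, written down once.
weekly_report_instruction = """
1. Pull this week's engagement numbers from my analytics spreadsheets.
2. Write a one-page performance summary in a new Google Doc.
3. Update next week's slots in my content calendar.
""".strip()

def build_weekly_job(instruction: str) -> dict:
    # A real platform would register a recurring agent run here;
    # this just packages the request so the shape is visible.
    return {
        "mcp_server": "google-workspace",
        "schedule": "every monday 09:00",
        "instruction": instruction,
    }

job = build_weekly_job(weekly_report_instruction)
```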




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;how to start today&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;If you want to try this yourself:&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 1:&lt;/em&gt;&lt;/strong&gt; Read the &lt;a href="https://cloud.google.com/security" rel="noopener noreferrer"&gt;MCP server docs for Google Security Operations&lt;/a&gt; -- it is the most mature of the managed offerings right now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 2:&lt;/em&gt;&lt;/strong&gt; Check out the &lt;a href="https://cloud.google.com/gemini-enterprise" rel="noopener noreferrer"&gt;Gemini Enterprise Agent Platform&lt;/a&gt; -- specifically the section on managed connectors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 3:&lt;/em&gt;&lt;/strong&gt; Pick &lt;em&gt;one&lt;/em&gt; workflow in your current job that involves pulling data from somewhere and summarizing it. That is your first MCP experiment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Step 4:&lt;/em&gt;&lt;/strong&gt; Watch the &lt;a href="https://www.youtube.com/watch?v=A01DQ8_xy7Q" rel="noopener noreferrer"&gt;Developer Keynote&lt;/a&gt; -- the MCP demos are more detailed there than in the opening keynote.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Start with one workflow. One data source. One agent. See if the time savings are real for your specific case before you redesign anything.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls2otyqir0wlr4gahnur.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls2otyqir0wlr4gahnur.gif" alt="Lets go gif" width="480" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;code&gt;the one sentence that sums this up&lt;/code&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Managed MCP servers do not make AI more powerful.&lt;/p&gt;

&lt;p&gt;They make &lt;em&gt;you&lt;/em&gt; more powerful by removing the wall between AI and the data it needs to actually help you.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;That is the announcement from NEXT '26 I will still be talking about in 6 months.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the one integration you always wanted to build but never had time for the plumbing?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Drop it below. I am genuinely curious what people will do with this.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt;  &lt;iframe src="https://www.youtube.com/embed/11PBno-cJ1g"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>cloudnextchallenge</category>
      <category>googlecloud</category>
      <category>ai</category>
    </item>
    <item>
      <title>I Spent a Week Inside OpenClaw. Here Is What Broke Me (and What Blew My Mind)</title>
      <dc:creator>Oni</dc:creator>
      <pubDate>Wed, 22 Apr 2026 19:21:42 +0000</pubDate>
      <link>https://dev.to/onirestart/i-spent-a-week-inside-openclaw-here-is-what-broke-me-and-what-blew-my-mind-4746</link>
      <guid>https://dev.to/onirestart/i-spent-a-week-inside-openclaw-here-is-what-broke-me-and-what-blew-my-mind-4746</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/openclaw-2026-04-16"&gt;OpenClaw Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxuvg2xf9pcylz0tcxnu.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxuvg2xf9pcylz0tcxnu.gif" alt="OpenClaw Banner" width="400" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;I did not plan to go deep.&lt;/p&gt;

&lt;p&gt;I just wanted to build something small.&lt;/p&gt;

&lt;p&gt;Something that works.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;But then OpenClaw pulled me in.&lt;/p&gt;

&lt;p&gt;And three days became a week.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let me be honest with you first.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;I have tried a lot of open source tools.&lt;/p&gt;

&lt;p&gt;Most of them promise a lot.&lt;/p&gt;

&lt;p&gt;Most of them disappoint quietly.&lt;/p&gt;

&lt;p&gt;You spend hours on setup.&lt;/p&gt;

&lt;p&gt;You hit one weird error.&lt;/p&gt;

&lt;p&gt;You Google it.&lt;/p&gt;

&lt;p&gt;Nobody has the answer.&lt;/p&gt;

&lt;p&gt;You give up.&lt;/p&gt;

&lt;p&gt;Sound familiar?&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;That is how I approached OpenClaw.&lt;/p&gt;

&lt;p&gt;With low expectations and a lot of coffee.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l826gndshno268wfy2r.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l826gndshno268wfy2r.gif" alt="gif coffee developer" width="300" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day one. The setup.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;I cloned the repo.&lt;/p&gt;

&lt;p&gt;Read the README twice.&lt;/p&gt;

&lt;p&gt;Ran the install command.&lt;/p&gt;

&lt;p&gt;It worked.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;First time.&lt;/p&gt;

&lt;p&gt;No errors.&lt;/p&gt;

&lt;p&gt;No missing dependencies.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;I sat there for a moment.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Waiting for something to break.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Nothing did.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;That moment of surprise is something I will remember. Good tooling should feel invisible. OpenClaw felt invisible.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day two. The first real build.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;I started building my core use case.&lt;/p&gt;

&lt;p&gt;An agent pipeline that could read, reason, and respond.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;OpenClaw's architecture is clean.&lt;/p&gt;

&lt;p&gt;Like, genuinely clean.&lt;/p&gt;

&lt;p&gt;Not "we cleaned it up for the docs" clean.&lt;/p&gt;

&lt;p&gt;Actually clean.&lt;/p&gt;

&lt;p&gt; &lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Setting up my first OpenClaw pipeline
&lt;/span&gt;&lt;span class="n"&gt;pipeline&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;OpenClaw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Pipeline&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_step&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;read&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;source&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;my_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_step&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reason&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_step&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;respond&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;structured&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;It ran.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;First try.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;I may have made a small sound.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68v92v82fh2886n5xez7.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68v92v82fh2886n5xez7.gif" alt="developer surprised gif" width="498" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day three. Where it broke me.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;I pushed it.&lt;/p&gt;

&lt;p&gt;I always push tools until they break.&lt;/p&gt;

&lt;p&gt;That is how you learn the real shape of something.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;I chained five steps together.&lt;/p&gt;

&lt;p&gt;Added memory.&lt;/p&gt;

&lt;p&gt;Added tool calls.&lt;/p&gt;

&lt;p&gt;Added a feedback loop.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;And it... handled it.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Not perfectly.&lt;/p&gt;

&lt;p&gt;There were edge cases.&lt;/p&gt;

&lt;p&gt;The memory layer got confused on long context.&lt;/p&gt;

&lt;p&gt;The tool call retry logic was a bit aggressive.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;
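&lt;p&gt;My interim workaround was to wrap the flaky calls myself. This is plain Python with exponential backoff, not an OpenClaw API:&lt;/p&gt;

```python
import time

# Generic exponential-backoff wrapper (not OpenClaw-specific): retries
# a callable with growing delays instead of hammering it immediately.
def with_backoff(fn, max_attempts=4, base_delay=0.5):
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Demo: a call that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] >= 3:
        return "ok"
    raise RuntimeError("transient")

result = with_backoff(flaky, base_delay=0.01)
```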

&lt;p&gt;But these are honest bugs.&lt;/p&gt;

&lt;p&gt;Not architectural mistakes.&lt;/p&gt;

&lt;p&gt;There is a difference.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An honest bug means the vision is right. The execution just needs time. I respect that more than polished mediocrity.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What actually blew my mind.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;The observability.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Most tools are black boxes.&lt;/p&gt;

&lt;p&gt;You send data in.&lt;/p&gt;

&lt;p&gt;You get data out.&lt;/p&gt;

&lt;p&gt;You hope for the best.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;OpenClaw gives you the inside view.&lt;/p&gt;

&lt;p&gt;Every step.&lt;/p&gt;

&lt;p&gt;Every decision.&lt;/p&gt;

&lt;p&gt;Every retry.&lt;/p&gt;

&lt;p&gt; &lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# OpenClaw trace output&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;STEP 1] &lt;span class="nb"&gt;read&lt;/span&gt; -&amp;gt; success &lt;span class="o"&gt;(&lt;/span&gt;230ms&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;STEP 2] reason -&amp;gt; retry attempt 1 -&amp;gt; success &lt;span class="o"&gt;(&lt;/span&gt;1.2s&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;STEP 3] respond -&amp;gt; success &lt;span class="o"&gt;(&lt;/span&gt;410ms&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;[&lt;/span&gt;TRACE] Total tokens: 3,420 | Cost: &lt;span class="nv"&gt;$0&lt;/span&gt;.0041
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt; &lt;/p&gt;

&lt;p&gt;I could &lt;em&gt;see&lt;/em&gt; my agent thinking.&lt;/p&gt;

&lt;p&gt;That changed how I debug.&lt;/p&gt;

&lt;p&gt;That changed how I build.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgagro6i73fjmz1sg5umq.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgagro6i73fjmz1sg5umq.gif" alt="mind blown gif" width="200" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The thing nobody talks about.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Community tools live or die by their docs.&lt;/p&gt;

&lt;p&gt;Bad docs kill good tools.&lt;/p&gt;

&lt;p&gt;I have watched it happen.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;OpenClaw's docs are written by people who use it.&lt;/p&gt;

&lt;p&gt;You can feel that.&lt;/p&gt;

&lt;p&gt;The examples are real examples.&lt;/p&gt;

&lt;p&gt;Not toy demos.&lt;/p&gt;

&lt;p&gt;Real problems.&lt;/p&gt;

&lt;p&gt;Real solutions.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;That matters more than any feature.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I built by the end of week one.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;A personal research assistant pipeline.&lt;/p&gt;

&lt;p&gt;It reads any URL.&lt;/p&gt;

&lt;p&gt;Summarizes.&lt;/p&gt;

&lt;p&gt;Extracts key points.&lt;/p&gt;

&lt;p&gt;Compares against my notes.&lt;/p&gt;

&lt;p&gt;Gives me a daily digest.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Built it in two evenings.&lt;/p&gt;

&lt;p&gt;Running it every morning now.&lt;/p&gt;
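&lt;p&gt;If you want the flavor without the framework: here is a tiny, dependency-free Python sketch of the same read, summarize, extract, compare flow. This is not OpenClaw's API, just the shape of the pipeline.&lt;/p&gt;

```python
# A minimal, dependency-free sketch of the digest pipeline described above.
# NOT OpenClaw's actual API -- just the shape of read -> summarize -> extract -> compare.
import re
from collections import Counter

def summarize(text, max_sentences=2):
    """Naive summary: keep the first few sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:max_sentences])

def key_points(text, top_n=3):
    """Naive key points: the most frequent longer words."""
    words = re.findall(r"[a-zA-Z]{5,}", text.lower())
    return [w for w, _ in Counter(words).most_common(top_n)]

def new_to_me(points, notes):
    """Which extracted points don't appear in my notes yet?"""
    return [p for p in points if p not in notes.lower()]

def digest(article_text, notes):
    points = key_points(article_text)
    return {
        "summary": summarize(article_text),
        "key_points": points,
        "new_to_me": new_to_me(points, notes),
    }
```

&lt;p&gt;The real version swaps each naive function for an LLM call, but the control flow is the same.&lt;/p&gt;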

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My honest verdict.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;OpenClaw is not perfect.&lt;/p&gt;

&lt;p&gt;But it is &lt;em&gt;pointed in the right direction.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;The architecture respects you as a developer.&lt;/p&gt;

&lt;p&gt;It does not hide complexity.&lt;/p&gt;

&lt;p&gt;It helps you manage it.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;That is rare.&lt;/p&gt;

&lt;p&gt;That is worth talking about.&lt;/p&gt;

&lt;p&gt;That is worth building on.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you are on the fence.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;Just clone it.&lt;/p&gt;

&lt;p&gt;Spend two hours.&lt;/p&gt;

&lt;p&gt;Build one small thing.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;You will know by hour three if it is for you.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;

&lt;p&gt;For me, it was.&lt;/p&gt;

&lt;p&gt; &lt;/p&gt;




&lt;p&gt; &lt;/p&gt;

&lt;p&gt;&lt;em&gt;Built this as part of the DEV OpenClaw Challenge.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you are building with OpenClaw too, drop your repo below.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Would love to see what others are doing with it.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>openclawchallenge</category>
      <category>ai</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Tired of OpenClaw? I Built a Better Agent From Scratch</title>
      <dc:creator>Oni</dc:creator>
      <pubDate>Wed, 22 Apr 2026 09:14:56 +0000</pubDate>
      <link>https://dev.to/onirestart/tired-of-openclaw-i-built-a-better-agent-from-scratch-1h5g</link>
      <guid>https://dev.to/onirestart/tired-of-openclaw-i-built-a-better-agent-from-scratch-1h5g</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/openclaw-2026-04-16"&gt;OpenClaw Writing Challenge&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;It's hard not to notice all the buzz around the Claw, and the idea wasn't a hard sell for me. "An AI agent running 24/7, doing stuff for you"? C'mon, who doesn't like that?&lt;/p&gt;

&lt;p&gt;However, I hit two bugs that prevented me from using OpenClaw with my local AI setup. Being the software engineer that I am, my first instinct was to fix that thang and send a patch on GitHub. Except... that thang is nearly 600,000 lines of code, and it was only a few weeks old when I tried it.&lt;/p&gt;

&lt;p&gt;It hit me immediately: this damn thing was slop-vibed into existence. Anyhow, I didn't wanna lose my mind trying to understand those 600,000 lines of code, so instead I just started writing my own in Golang, GUI included. All native, using only 120MB when idle. Mac, Windows, and Linux for now; web and mobile support coming soon.&lt;/p&gt;

&lt;p&gt;Jabot: not just a bot. It has all the good stuff you need in an agentic framework. It can use the browser, write code, navigate the file system, and execute shell commands.&lt;/p&gt;

&lt;p&gt;It's a fully featured, secure, private, and above all simple-to-use agentic framework. Say goodbye to setting up databases and waging wars with Node &amp;amp; Python runtimes. Justabot is meant to be used by everyone, not just us nerds.&lt;/p&gt;

&lt;p&gt;I attended ClawCon Michigan and really enjoyed it. The sessions gave me fresh ideas about AI agents and how to use them in real‑world projects. &lt;/p&gt;

&lt;p&gt;Meeting other builders and sharing experiences made it totally worth it.&lt;/p&gt;

&lt;p&gt;Thanks, everyone.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>openclawchallenge</category>
      <category>ai</category>
      <category>agents</category>
    </item>
    <item>
      <title>The Rise of 'Vibe Coding': Why Your Next Side Project Might Be Your Best</title>
      <dc:creator>Oni</dc:creator>
      <pubDate>Sat, 18 Apr 2026 12:34:33 +0000</pubDate>
      <link>https://dev.to/onirestart/the-rise-of-vibe-coding-why-your-next-side-project-might-be-your-best-2i5m</link>
      <guid>https://dev.to/onirestart/the-rise-of-vibe-coding-why-your-next-side-project-might-be-your-best-2i5m</guid>
      <description>&lt;h1&gt;
  
  
  The Rise of "Vibe Coding": Why Your Next Side Project Might Be Your Best
&lt;/h1&gt;

&lt;p&gt;We’ve all been there. You have a brilliant idea for a weekend project—a niche tool for your hobby, a small automation for your workflow, or just a fun experiment. But then the "Engineering Reality" hits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Setting up the boilerplate.&lt;/li&gt;
&lt;li&gt;  Wrestling with CSS centering (still).&lt;/li&gt;
&lt;li&gt;  Configuring API endpoints.&lt;/li&gt;
&lt;li&gt;  Spending 4 hours on a bug that turns out to be a typo.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By Sunday night, the "vibe" is gone, and the project joins the graveyard of unfinished repositories.&lt;/p&gt;

&lt;p&gt;But in 2026, something has changed. We're entering the era of &lt;strong&gt;"Vibe Coding."&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is "Vibe Coding"?
&lt;/h2&gt;

&lt;p&gt;Coined by the community and popularized by recent breakthroughs in AI agents, "Vibe Coding" is the shift from focusing on the &lt;em&gt;how&lt;/em&gt; to focusing on the &lt;em&gt;what&lt;/em&gt;. It’s about maintaining the creative flow—the "vibe"—by offloading the heavy lifting of implementation to AI.&lt;/p&gt;

&lt;p&gt;It’s not about being lazy; it’s about being &lt;strong&gt;hyper-productive&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Old Way vs. The Vibe Way
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Traditional Development&lt;/th&gt;
&lt;th&gt;Vibe Coding&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Hours&lt;/strong&gt; spent on setup&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Minutes&lt;/strong&gt; to a working prototype&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stuck in the &lt;strong&gt;implementation details&lt;/strong&gt;
&lt;/td&gt;
&lt;td&gt;Focused on &lt;strong&gt;user experience and logic&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;High barrier to entry for complex features&lt;/td&gt;
&lt;td&gt;Complex features are just a &lt;strong&gt;prompt away&lt;/strong&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Many ideas, few finished projects&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Ship early, ship often&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Why This Matters for You
&lt;/h2&gt;

&lt;p&gt;As developers, our most valuable asset isn't just our ability to write code—it's our ability to &lt;strong&gt;solve problems&lt;/strong&gt;. AI has reached a point where it can handle the "coding" part remarkably well, allowing us to act as the &lt;strong&gt;Architects of Experience&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The "Bingo" Effect
&lt;/h3&gt;

&lt;p&gt;Recently, we've seen developers building hyper-niche apps—like a custom Bingo app for a specific gaming group—in under three hours for less than a dollar. This wasn't possible before without a significant time investment. Now, if you can describe it, you can build it.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Reduced Cognitive Load
&lt;/h3&gt;

&lt;p&gt;When you don't have to worry about the syntax of a library you haven't used in six months, you can focus on the &lt;em&gt;logic&lt;/em&gt; of your application. This leads to better design and fewer architectural mistakes.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Rapid Iteration
&lt;/h3&gt;

&lt;p&gt;The feedback loop is now near-instant. You can "vibe" through five different UI layouts in the time it used to take to build one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is "Vibe Coding" the Death of Engineering?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Absolutely not.&lt;/strong&gt; In fact, it's the opposite. &lt;/p&gt;

&lt;p&gt;As we rely more on AI to generate the bulk of our code, the need for &lt;strong&gt;strong foundational knowledge&lt;/strong&gt; becomes even more critical. You need to know &lt;em&gt;why&lt;/em&gt; a certain architecture works, &lt;em&gt;how&lt;/em&gt; to debug the subtle hallucinations of an AI, and &lt;em&gt;how&lt;/em&gt; to ensure security and performance.&lt;/p&gt;

&lt;p&gt;The AI is your junior developer who never sleeps; you are the Senior Lead making the final calls.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to Start "Vibe Coding" Today
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Pick a Niche Problem:&lt;/strong&gt; Don't try to build the next Facebook. Build a tool that solves a problem for &lt;em&gt;you&lt;/em&gt; or a small group of people.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Use AI Agents:&lt;/strong&gt; Tools like Manus, Claude Code, and GitHub Copilot are your best friends.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Focus on the Prompt:&lt;/strong&gt; Learn to describe your intent clearly. Think in terms of inputs, outputs, and user flow.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Don't Lose the Vibe:&lt;/strong&gt; If you get stuck on a technical detail, ask the AI to explain it or solve it. Keep the momentum going.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The future of development isn't just about writing lines of code; it's about the &lt;strong&gt;speed of thought to execution&lt;/strong&gt;. "Vibe Coding" is a superpower that allows us to bring more of our ideas to life, faster than ever before.&lt;/p&gt;

&lt;p&gt;So, what's that idea you've been sitting on? Stop engineering it in your head and start &lt;strong&gt;vibe coding&lt;/strong&gt; it today.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What do you think?&lt;/strong&gt; Is "Vibe Coding" a legitimate shift in our industry, or just a fancy name for AI-assisted development? Let's discuss in the comments! 🚀&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>productivity</category>
      <category>beginners</category>
    </item>
    <item>
      <title>The AI Developer's Toolkit: Building Smart Apps with LLMs and RAG</title>
      <dc:creator>Oni</dc:creator>
      <pubDate>Thu, 12 Mar 2026 10:23:03 +0000</pubDate>
      <link>https://dev.to/onirestart/the-ai-developers-toolkit-building-smart-apps-with-llms-and-rag-3e1</link>
      <guid>https://dev.to/onirestart/the-ai-developers-toolkit-building-smart-apps-with-llms-and-rag-3e1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The landscape of software development is rapidly evolving, with Artificial Intelligence (AI) at its forefront. The surge in AI-related content on platforms like Dev.to, as evidenced by the &lt;code&gt;ai&lt;/code&gt; tag surpassing &lt;code&gt;webdev&lt;/code&gt; and &lt;code&gt;programming&lt;/code&gt; in popularity by mid-2025 [1], underscores a fundamental shift in developer focus. This isn't just about theoretical discussions; it's about practical implementation—building with Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) pipelines.&lt;/p&gt;

&lt;p&gt;This article will guide you through the process of integrating LLMs and RAG into your applications, providing a hands-on tutorial to help you build smart, context-aware AI applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding LLMs and RAG
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Large Language Models (LLMs)&lt;/strong&gt; are advanced AI models capable of understanding, generating, and manipulating human language. They are trained on vast amounts of text data, allowing them to perform tasks such as text generation, summarization, translation, and question answering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Retrieval-Augmented Generation (RAG)&lt;/strong&gt; is a technique that enhances LLMs by giving them access to external knowledge bases. When an LLM receives a query, a RAG system first retrieves relevant information from a specified data source (e.g., a database, a collection of documents) and then uses this information to generate a more accurate and contextually rich response. This approach mitigates issues like hallucination and provides more up-to-date information than what the LLM was originally trained on.&lt;/p&gt;
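&lt;p&gt;In miniature, that retrieve-then-generate loop looks like the toy sketch below: a bag-of-words retriever plus a stand-in "generator". The retriever and the hard-coded documents are illustrative only; the tutorial later in this article uses real embeddings and a real model.&lt;/p&gt;

```python
# Toy RAG loop: retrieve the best-matching document, then ground the answer in it.
import re

def retrieve(query, docs):
    """Pick the document sharing the most words with the query (toy retriever)."""
    q = set(re.findall(r"\w+", query.lower()))
    return max(docs, key=lambda d: len(q & set(re.findall(r"\w+", d.lower()))))

def generate(query, context):
    """Stand-in for an LLM call: the answer is grounded in the retrieved context."""
    return f"Context: {context} Question: {query}"

docs = [
    "The capital of France is Paris.",
    "Python was created by Guido van Rossum.",
]
print(generate("Who created Python?", retrieve("Who created Python?", docs)))
```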

&lt;h2&gt;
  
  
  Why Combine LLMs and RAG?
&lt;/h2&gt;

&lt;p&gt;Combining LLMs with RAG offers several significant advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Improved Accuracy:&lt;/strong&gt; By grounding responses in external, verifiable data, RAG reduces the likelihood of LLMs generating incorrect or fabricated information.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Up-to-date Information:&lt;/strong&gt; LLMs have a knowledge cutoff based on their training data. RAG allows them to access and incorporate the latest information from your knowledge base.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Reduced Hallucinations:&lt;/strong&gt; RAG provides a factual basis for responses, minimizing instances where LLMs generate confident but incorrect answers.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Domain-Specific Knowledge:&lt;/strong&gt; You can tailor the LLM's responses to specific domains by providing it with relevant, specialized documents.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Building a Simple AI Application with LLMs and RAG: A Step-by-Step Tutorial
&lt;/h2&gt;

&lt;p&gt;Let's build a basic question-answering system that uses a local knowledge base to answer queries.&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  Python 3.8+&lt;/li&gt;
&lt;li&gt;  &lt;code&gt;pip&lt;/code&gt; package manager&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 1: Set up your environment
&lt;/h3&gt;

&lt;p&gt;First, create a new project directory and a virtual environment:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;ai_rag_app
&lt;span class="nb"&gt;cd &lt;/span&gt;ai_rag_app
python &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv
&lt;span class="nb"&gt;source &lt;/span&gt;venv/bin/activate  &lt;span class="c"&gt;# On Windows, use `venv\Scripts\activate`&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Install necessary libraries
&lt;/h3&gt;

&lt;p&gt;We'll use &lt;code&gt;transformers&lt;/code&gt; for LLM interaction (or a similar library for a local LLM), &lt;code&gt;faiss-cpu&lt;/code&gt; for efficient similarity search (our RAG component), and &lt;code&gt;sentence-transformers&lt;/code&gt; for embedding generation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;transformers faiss-cpu sentence-transformers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Prepare your knowledge base
&lt;/h3&gt;

&lt;p&gt;Create a simple text file named &lt;code&gt;knowledge_base.txt&lt;/code&gt; with some information. For this example, let's use facts about a fictional company.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Company Name: InnovateTech Solutions
Founded: 2020
Headquarters: Silicon Valley, CA
Mission: To develop cutting-edge AI solutions for enterprise clients.
Key Products: AI-powered analytics platform, automated customer support bots.
CEO: Dr. Anya Sharma
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Create the RAG system
&lt;/h3&gt;

&lt;p&gt;Now, let's write the Python code to build our RAG system. Create a file named &lt;code&gt;app.py&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;transformers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;pipeline&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;sentence_transformers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SentenceTransformer&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;faiss&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;

&lt;span class="c1"&gt;# 1. Load Knowledge Base
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;load_knowledge_base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file_path&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;line&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;line&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;line&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strip&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;

&lt;span class="n"&gt;knowledge_base_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;knowledge_base.txt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
&lt;span class="n"&gt;knowledge_base&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;load_knowledge_base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;knowledge_base_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# 2. Create Embeddings
# Using a pre-trained sentence transformer model
&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SentenceTransformer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;all-MiniLM-L6-v2&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;knowledge_embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;knowledge_base&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# 3. Build FAISS Index
&lt;/span&gt;&lt;span class="n"&gt;dimension&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;knowledge_embeddings&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;shape&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="n"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;faiss&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;IndexFlatL2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;dimension&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;knowledge_embeddings&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;float32&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="c1"&gt;# 4. Initialize LLM (using a simple text generation pipeline for demonstration)
# In a real application, you might use a more powerful LLM API (e.g., OpenAI, Gemini)
&lt;/span&gt;&lt;span class="n"&gt;generator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;pipeline&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;text-generation&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;distilgpt2&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;ask_llm_with_rag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;top_k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Embed the query
&lt;/span&gt;    &lt;span class="n"&gt;query_embedding&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;

    &lt;span class="c1"&gt;# Search the FAISS index for relevant documents
&lt;/span&gt;    &lt;span class="n"&gt;distances&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;indices&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query_embedding&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;astype&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;float32&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;top_k&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Retrieve the most relevant document(s)
&lt;/span&gt;    &lt;span class="n"&gt;retrieved_docs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;knowledge_base&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;indices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]]&lt;/span&gt;
    &lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;retrieved_docs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Combine query and context for the LLM
&lt;/span&gt;    &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Based on the following information:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s"&gt;Answer the question: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="c1"&gt;# Generate response using LLM
&lt;/span&gt;    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_new_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;num_return_sequences&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;generated_text&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AI-powered Q&amp;amp;A System. Type &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;exit&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; to quit.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;user_query&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You: &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;user_query&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;exit&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;ask_llm_with_rag&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AI: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 5: Run your application
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can ask questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  "What is InnovateTech Solutions' mission?"&lt;/li&gt;
&lt;li&gt;  "Who is the CEO?"&lt;/li&gt;
&lt;li&gt;  "When was the company founded?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The application will retrieve relevant information from &lt;code&gt;knowledge_base.txt&lt;/code&gt; and use the LLM to formulate an answer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Integrating LLMs with RAG pipelines empowers developers to build more accurate, reliable, and context-aware AI applications. As the AI landscape continues to evolve, mastering these techniques will be crucial for creating innovative solutions. The data from Dev.to clearly indicates a strong and growing interest in practical AI implementation, making this a highly relevant skill for any modern developer.&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;[1] Marina Eremina. "I Analyzed 1 Million dev.to Articles (2022–2026): Here’s What the Data Reveals". &lt;em&gt;DEV Community&lt;/em&gt;, 2026. &lt;a href="https://dev.to/marina_eremina/i-analyzed-1-million-devto-articles-2022-2026-heres-what-the-data-reveals-44gm"&gt;https://dev.to/marina_eremina/i-analyzed-1-million-devto-articles-2022-2026-heres-what-the-data-reveals-44gm&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>webdev</category>
      <category>programming</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>finding my voice in tech: a wecoded journey</title>
      <dc:creator>Oni</dc:creator>
      <pubDate>Sat, 07 Mar 2026 16:02:45 +0000</pubDate>
      <link>https://dev.to/onirestart/finding-my-voice-in-tech-a-wecoded-journey-4olk</link>
      <guid>https://dev.to/onirestart/finding-my-voice-in-tech-a-wecoded-journey-4olk</guid>
<description>&lt;p&gt;hey, lovely people. i wanted to share a little bit about my journey in tech, especially as we celebrate the wecoded 2026 challenge. it's a space that means a lot to me, a place where voices like ours can truly resonate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FSHQExuvOLrzHSROB.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FSHQExuvOLrzHSROB.png" alt="a peaceful workspace with a laptop and tea" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;i remember starting out, feeling a bit like a tiny fish in a very big ocean. the code felt daunting, the concepts immense. there were moments, many of them, where i wondered if i truly belonged. did i have what it takes to build something meaningful, to contribute to this ever-evolving digital world. i think many of us have felt that, haven't we. that little whisper of doubt.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FhemWTQPorWfzCkUy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FhemWTQPorWfzCkUy.png" alt="a small orange fish in a vast blue ocean" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;the lecture halls often felt cold and intimidating, filled with voices that didn't always sound like mine. it was easy to feel invisible, to blend into the background and hope no one noticed my uncertainty.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FguMTxoHUBiFimEln.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FguMTxoHUBiFimEln.png" alt="an intimidating empty lecture hall" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;but i kept at it. i spent late nights with my keyboard, the soft glow of the screen my only companion. i typed and retyped, failing and learning, one line of code at a time. there was a quiet beauty in that struggle, a sense of persistence that i didn't know i had.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FNMvPGbwpOJmDPeHr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FNMvPGbwpOJmDPeHr.png" alt="hands typing on a glowing keyboard" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FqUHFHivwAlhfoLFR.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FqUHFHivwAlhfoLFR.gif" alt="a woman working on a laptop gif" width="500" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;then, something shifted. it wasn't a sudden, dramatic change, but a gradual unfolding. i started reaching out, tentatively at first, to other women in tech. i found communities, both online and offline, where experiences were shared, questions were welcomed, and encouragement flowed freely. it was like finding a hidden garden in the middle of a bustling city.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FtIMJwNNQxKjvRIeW.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FtIMJwNNQxKjvRIeW.png" alt="a lush hidden garden in the city" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;we sat in cozy cafes, laptops open, sharing stories and laughter. we realized that our challenges were common, and our strengths were collective. in those moments, the ocean didn't feel so big anymore.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FUXniDXIpjRyzakrw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FUXniDXIpjRyzakrw.png" alt="a diverse group of women in a cafe" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;i learned that my struggles weren't unique. that imposter syndrome, that feeling of not being quite good enough, it was a shared experience. and in that shared understanding, there was immense power. we talked about our wins, our setbacks, our dreams. we celebrated each other's small victories and offered a hand during the tough times.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FPxmPXpFMoIgzMMyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FPxmPXpFMoIgzMMyz.png" alt="two people sitting on a bench in support" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FZWLpDotzhhVSdbFt.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FZWLpDotzhhVSdbFt.gif" alt="your community can help gif" width="480" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;one of the most impactful things for me was finding mentors. these incredible individuals, often women who had walked similar paths, offered guidance, advice, and sometimes, just a listening ear. they helped me see my potential when i couldn't see it myself. they showed me that there wasn't just one way to succeed, but many. their wisdom was a light, guiding me through some of the darker, more confusing parts of my journey.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FDUgiZrNgDrwHTtlu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FDUgiZrNgDrwHTtlu.png" alt="a mentor and mentee looking at a laptop" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and that's what wecoded feels like to me. it's a beacon. it's a reminder that we're not alone. it's a platform to amplify those voices, to share those stories, and to inspire the next generation of coders, creators, and innovators. it's about building a more equitable and inclusive tech space, one conversation, one line of code, one shared experience at a time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FMHYvXgEpeOtJDntD.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FMHYvXgEpeOtJDntD.png" alt="a lighthouse beacon in the dark" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;i'm still learning, still growing, still navigating this amazing, sometimes challenging, world of tech. like a small sprout pushing through concrete, i've found that resilience is built in the quiet moments of persistence.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FulzWdsBrosGCtHfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FulzWdsBrosGCtHfj.png" alt="a small sprout growing through concrete" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;i do it with a stronger sense of self, a deeper connection to my community, and an unwavering belief in the power of collective strength. when we come together, we create something far more beautiful than we ever could alone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FDQrinCeGyTEBOlkG.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FDQrinCeGyTEBOlkG.png" alt="many hands coming together in a circle" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;for that, i am truly grateful. what about you. what has your journey been like. i'd love to hear your stories too.&lt;/p&gt;

&lt;p&gt;thank you for being part of this journey with me. let's keep building, keep sharing, and keep supporting each other.&lt;/p&gt;

&lt;p&gt;with warmth,&lt;br&gt;
your fellow coder&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/wecoded-2026"&gt;2026 WeCoded Challenge&lt;/a&gt;: Echoes of Experience&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FzLjSdlbeSkjQJMrb.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Ffiles.manuscdn.com%2Fuser_upload_by_module%2Fsession_file%2F111734191%2FzLjSdlbeSkjQJMrb.gif" alt="celebrate success gif" width="300" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>wecoded</category>
      <category>dei</category>
      <category>career</category>
    </item>
    <item>
      <title>Automate Me If You Can: The Accomplish Hackathon by WeMakeDevs</title>
      <dc:creator>Oni</dc:creator>
      <pubDate>Sat, 21 Feb 2026 20:38:54 +0000</pubDate>
      <link>https://dev.to/onirestart/automate-me-if-you-can-the-accomplish-hackathon-by-wemakedevs-2cei</link>
      <guid>https://dev.to/onirestart/automate-me-if-you-can-the-accomplish-hackathon-by-wemakedevs-2cei</guid>
      <description>&lt;p&gt;The WeMakeDevs community is running a fun and practical hackathon called &lt;strong&gt;Automate Me If You Can&lt;/strong&gt;, powered by Accomplish. If you like building useful tools or want to learn automation the right way, this is a great place to start.&lt;/p&gt;

&lt;p&gt;Here is the official page with all the details and the registration link:&lt;br&gt;
&lt;a href="https://www.wemakedevs.org/hackathons/accomplish" rel="noopener noreferrer"&gt;https://www.wemakedevs.org/hackathons/accomplish&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the Accomplish hackathon?
&lt;/h2&gt;

&lt;p&gt;This is an online hackathon that runs from &lt;strong&gt;16 Feb to 22 Feb&lt;/strong&gt;. The goal is simple: use Accomplish to automate a real task in your life, or contribute to the open source project. The better your automation, the higher your chances to win.&lt;/p&gt;

&lt;p&gt;Accomplish is an open source AI coworker that lives on your desktop. It can read files, browse the web, write documents, and manage small tasks for you. Every action is shown to you first, and it runs locally on your machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Accomplish?
&lt;/h2&gt;

&lt;p&gt;Accomplish is built for everyday work, not just demos. It can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Browse the web and fill forms&lt;/li&gt;
&lt;li&gt;Rename and organize files&lt;/li&gt;
&lt;li&gt;Generate and rewrite documents&lt;/li&gt;
&lt;li&gt;Scan folders and summarize contents&lt;/li&gt;
&lt;li&gt;Create repeatable workflows as skills&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is open source and runs locally, so you keep control of your data and approvals.&lt;/p&gt;

&lt;h2&gt;
  
  
  Two ways to win
&lt;/h2&gt;

&lt;p&gt;There are &lt;strong&gt;two tracks&lt;/strong&gt;, and you can join one or both:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Highlight track&lt;/strong&gt;: Show how you used Accomplish to automate something real. Record a short demo and submit it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open source track&lt;/strong&gt;: Pick an issue with the &lt;code&gt;feb_hackathon&lt;/code&gt; label and get your pull request merged.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;One person can win in both tracks, so it is worth trying both if you can.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prizes and perks
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;$3000 total cash&lt;/strong&gt;, 30 winners&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10 highlight winners&lt;/strong&gt; get &lt;strong&gt;$100 each&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Top 20 open source contributors&lt;/strong&gt; get &lt;strong&gt;$100 each&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Job interview opportunities&lt;/strong&gt; at Accomplish.ai&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Swag giveaway&lt;/strong&gt; for 10 lucky participants&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How to participate (simple steps)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Register&lt;/strong&gt; using the link on the official page.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pick a real problem&lt;/strong&gt; you face often (files, emails, reports, research, etc.).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build your automation&lt;/strong&gt; using Accomplish.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Record a short demo&lt;/strong&gt; (max 3 minutes) that shows before and after.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Submit your project&lt;/strong&gt; or open source PR.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Tips to make your entry stand out
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Choose a task that wastes real time every week.&lt;/li&gt;
&lt;li&gt;Keep your flow simple and clear.&lt;/li&gt;
&lt;li&gt;Show the before and after clearly in your demo.&lt;/li&gt;
&lt;li&gt;Use more than one Accomplish feature if possible.&lt;/li&gt;
&lt;li&gt;Focus on impact, not fancy visuals.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Ready to join?
&lt;/h2&gt;

&lt;p&gt;If you want to learn automation, build something useful, and possibly win cash and interviews, this hackathon is a strong opportunity.&lt;/p&gt;

&lt;p&gt;Check the details and register here:&lt;br&gt;
&lt;a href="https://www.wemakedevs.org/hackathons/accomplish" rel="noopener noreferrer"&gt;https://www.wemakedevs.org/hackathons/accomplish&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Good luck, and happy building!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>programming</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Writing Once, Shipping Everywhere: My Journey Building MediTrack Across 6 Platforms with Uno</title>
      <dc:creator>Oni</dc:creator>
      <pubDate>Sat, 29 Nov 2025 13:32:23 +0000</pubDate>
      <link>https://dev.to/onirestart/writing-once-shipping-everywhere-my-journey-building-meditrack-across-6-platforms-with-uno-4a8l</link>
      <guid>https://dev.to/onirestart/writing-once-shipping-everywhere-my-journey-building-meditrack-across-6-platforms-with-uno-4a8l</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/unoplatform"&gt;AI Challenge for Cross-Platform Apps&lt;/a&gt; - WOW Factor&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Healthcare App That Broke Platform Boundaries
&lt;/h2&gt;

&lt;p&gt;When I decided to build &lt;strong&gt;MediTrack&lt;/strong&gt; - a patient appointment and health record management app - I had a problem: healthcare workers use everything from iPhones to Windows desktops to Linux workstations, and building and maintaining the same app six times wasn't feasible. That's when Uno Platform changed the game.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;MediTrack&lt;/strong&gt; is a medical appointment and health record management system designed for small clinics and practitioners. Unlike generic medical software, MediTrack focuses on the user experience - clean interfaces, fast navigation, and intuitive scheduling.&lt;/p&gt;

&lt;p&gt;The visual design combines calming medical blues with practical information architecture. Patient records are presented as card-based interfaces, appointment calendars feature color-coded scheduling, and the overall aesthetic feels modern rather than clinical.&lt;/p&gt;

&lt;p&gt;What makes it special? &lt;strong&gt;It looks and feels native on every platform.&lt;/strong&gt; No "web wrapper" aesthetic. On iOS, it uses native gestures and navigation patterns. On Windows, it respects desktop workflows. On Linux, it integrates with the system seamlessly. Same code, a fully native experience on each platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live Demo&lt;/strong&gt;: &lt;a href="https://meditrack-uno.netlify.app" rel="noopener noreferrer"&gt;meditrack-uno.netlify.app&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Repository&lt;/strong&gt;: &lt;a href="https://github.com/aniruddha-adak/MediTrack-Uno" rel="noopener noreferrer"&gt;github.com/aniruddha-adak/MediTrack-Uno&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Running on All 6 Platforms:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;iOS&lt;/strong&gt;: Native simulator with full gesture support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Android&lt;/strong&gt;: Material Design adaptation with scroll behaviors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Windows&lt;/strong&gt;: Desktop-optimized with keyboard navigation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;macOS&lt;/strong&gt;: Native Mac look and feel with Command key support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Linux&lt;/strong&gt;: GTK native rendering with light/dark theme support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Web&lt;/strong&gt;: Responsive design optimized for smaller screens&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Test Account&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Email: &lt;a href="mailto:clinic@meditrack.demo"&gt;clinic@meditrack.demo&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Password: Demo2025!&lt;/li&gt;
&lt;li&gt;Pre-loaded with sample patient data&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cross-Platform Magic
&lt;/h2&gt;

&lt;p&gt;MediTrack runs on &lt;strong&gt;all 6 platforms&lt;/strong&gt; from a single codebase. But one codebase doesn't mean one uniform look - it means &lt;strong&gt;native on every platform&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  How the Single Codebase Approach Transformed This Project
&lt;/h3&gt;

&lt;p&gt;Before Uno, I was prepared to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Write iOS code (Swift)&lt;/li&gt;
&lt;li&gt;Write Android code (Kotlin)&lt;/li&gt;
&lt;li&gt;Write Windows code (C#/UWP)&lt;/li&gt;
&lt;li&gt;Write macOS code (Swift)&lt;/li&gt;
&lt;li&gt;Write Linux code (C++)&lt;/li&gt;
&lt;li&gt;Write Web code (JavaScript/React)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's 6 completely different tech stacks. 6 UI frameworks. 6 deployment pipelines.&lt;/p&gt;

&lt;p&gt;Instead, I wrote &lt;strong&gt;XAML and C# once&lt;/strong&gt; and deployed everywhere.&lt;/p&gt;

&lt;p&gt;The breakthrough moments:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Gesture Recognition&lt;/strong&gt; - MediTrack uses swipe gestures for appointment filtering. One gesture handler worked on mobile, while desktop trackpads automatically got the same behavior mapped to mouse events&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Navigation Patterns&lt;/strong&gt; - iOS uses bottom tab navigation, Windows uses left sidebar, Web uses responsive hamburger menu. All from the &lt;strong&gt;same XAML template&lt;/strong&gt; with platform-specific selectors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Binding&lt;/strong&gt; - Real-time updates to patient records sync across all platforms instantly through a shared ViewModel&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Platform-Specific Optimizations&lt;/strong&gt; - The calendar control renders differently on touch devices vs. desktop thanks to Uno's adaptive rendering&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Interactive Features That Wow Users
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Visual Polish
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Appointment Color Coding&lt;/strong&gt;: Consultations are blue, follow-ups are green, urgent appointments are red - intuitive at a glance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smooth Transitions&lt;/strong&gt;: Navigating between patient records triggers elegant slide animations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Responsive Lists&lt;/strong&gt;: Patient lists update in real-time with smooth item addition/removal animations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interactive Calendar&lt;/strong&gt;: Drag to reschedule appointments, tap to view details - different interaction models per platform&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Functional Excellence
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Voice-to-Text Notes&lt;/strong&gt;: Doctors can record voice notes (especially useful on iOS/Android) that sync to all platforms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Offline Mode&lt;/strong&gt;: Can view and annotate patient records offline, syncs when connectivity returns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Doctor Collaboration&lt;/strong&gt;: Clinics with multiple practitioners see live updates when colleagues add notes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart Scheduling&lt;/strong&gt;: The app suggests appointment slots based on provider availability across all platforms&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Healthcare-Specific UX
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;HIPAA-Compliant&lt;/strong&gt;: All data encrypted in transit and at rest&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Session Timeout&lt;/strong&gt;: Patient records lock after 10 minutes for security&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audit Trail&lt;/strong&gt;: Every access to patient data is logged for compliance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Two-Factor Authentication&lt;/strong&gt;: Available across all platforms&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Wow Factor
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;What makes MediTrack stand out is credibility through consistency.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When a doctor uses MediTrack on their iPhone to check patient history, then switches to their Windows desktop to write prescriptions, then accesses from a Linux machine in a telehealth consultation - the experience is &lt;strong&gt;frictionless&lt;/strong&gt; because the app adapts to each platform's paradigms without feeling compromised.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Moments That Make People Stop and Notice:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Platform Adaptation&lt;/strong&gt;: Show someone the same app running on iOS, Android, and Windows side-by-side. They immediately see they're not looking at a "cross-platform app" - they're looking at 3 separate native apps that share a codebase&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Sync&lt;/strong&gt;: Update a patient's allergy information on one platform, instantly see it reflected across all others. The data consistency is perfect&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Performance&lt;/strong&gt;: Healthcare workers are impatient. MediTrack opens instantly on all platforms - no loading screens, no stuttering. It feels native because it IS native&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Gesture Consistency&lt;/strong&gt;: Swipe to delete works the same way on mobile as it does (mapped to hover+delete) on desktop. Muscle memory transfers across platforms&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Accessibility&lt;/strong&gt;: Built-in accessibility features work seamlessly across all platforms - screen readers, voice control, high contrast modes - because they're implemented at the framework level, not individually per platform&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Why Healthcare Needs This
&lt;/h2&gt;

&lt;p&gt;Traditional healthcare software often means a monolithic Windows-only application from 1995. Clinics gradually add iPad stations, then need an Android app, then need cloud access... and suddenly they're maintaining 4 different systems.&lt;/p&gt;

&lt;p&gt;MediTrack proves you can build modern healthcare software that's beautiful, responsive, and works everywhere - &lt;strong&gt;without maintaining platform-specific codebases&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Experience Building This Project
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Challenges
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Domain Complexity&lt;/strong&gt;: Medical workflows are intricate - appointments, records, prescriptions, billing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance Overhead&lt;/strong&gt;: HIPAA requirements weren't trivial to implement across all platforms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data Sync&lt;/strong&gt;: Ensuring consistency across all platforms while offline requires sophisticated sync logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Platform-Specific Quirks&lt;/strong&gt;: iOS handles file dialogs differently than Windows; Uno smooths these differences but understanding them was crucial&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  What Went Smoothly
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;UI Reuse&lt;/strong&gt;: 95% of the UI code shared across all platforms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business Logic&lt;/strong&gt;: All appointment scheduling, patient search, and data validation shared completely&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment&lt;/strong&gt;: One CI/CD pipeline builds and deploys to all 6 targets&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Testing&lt;/strong&gt;: Unit tests cover business logic once; platform integration tests are minimal&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Technical Highlights
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="c1"&gt;// One data model for all platforms&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Patient&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;Id&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;Name&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Appointment&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Appointments&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;set&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// One ViewModel for all platforms&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;PatientViewModel&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;BindableBase&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;Patient&lt;/span&gt; &lt;span class="n"&gt;_selectedPatient&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;Patient&lt;/span&gt; &lt;span class="n"&gt;SelectedPatient&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;get&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;_selectedPatient&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;set&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;SetProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;ref&lt;/span&gt; &lt;span class="n"&gt;_selectedPatient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;value&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// XAML UI works on all platforms (with platform-specific selectors when needed)&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;DataGrid&lt;/span&gt; &lt;span class="n"&gt;ItemsSource&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"{Binding Patients}"&lt;/span&gt; 
          &lt;span class="n"&gt;SelectedItem&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"{Binding SelectedPatient}"&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;!--&lt;/span&gt; &lt;span class="n"&gt;Uses&lt;/span&gt; &lt;span class="n"&gt;native&lt;/span&gt; &lt;span class="n"&gt;DataGrid&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;Windows&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="n"&gt;macOS&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="n"&gt;Linux&lt;/span&gt; &lt;span class="p"&gt;--&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;&amp;lt;!--&lt;/span&gt; &lt;span class="n"&gt;Uses&lt;/span&gt; &lt;span class="n"&gt;ListView&lt;/span&gt; &lt;span class="k"&gt;on&lt;/span&gt; &lt;span class="n"&gt;iOS&lt;/span&gt;&lt;span class="p"&gt;/&lt;/span&gt;&lt;span class="n"&gt;Android&lt;/span&gt; &lt;span class="p"&gt;--&amp;gt;&lt;/span&gt;
&lt;span class="p"&gt;&amp;lt;/&lt;/span&gt;&lt;span class="n"&gt;DataGrid&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What I'm Building Next Based on This Experience
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Telehealth Integration&lt;/strong&gt;: Video consultation features working seamlessly across platforms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mobile-First Patient Portal&lt;/strong&gt;: Patients can access their records through MediTrack on any device&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-Powered Note Summarization&lt;/strong&gt;: Summarize doctor's notes using the same logic across all platforms&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration with EMR Systems&lt;/strong&gt;: Connect to major hospital record systems&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Real Achievement
&lt;/h2&gt;

&lt;p&gt;The achievement isn't the app itself. It's proving that &lt;strong&gt;cross-platform development doesn't mean compromising on native quality&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;With Uno Platform, I didn't build "a cross-platform app." I built 6 native apps that happen to share a codebase.&lt;/p&gt;

&lt;p&gt;That's not an engineering compromise. That's a competitive advantage.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;For developers considering cross-platform development: Uno Platform makes you question why you ever considered building separate apps. Once you experience building once and shipping everywhere, going back to platform-specific development feels archaic.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Especially in healthcare, where consistency and reliability matter, building with Uno feels like the responsible choice.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Special Thanks&lt;/strong&gt;: To the Uno Platform team for creating a framework that respects the uniqueness of each platform while enabling true code sharing. That balance is rare and precious.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>unoplatformchallenge</category>
      <category>dotnet</category>
      <category>crossplatform</category>
    </item>
    <item>
      <title>Debugging AI Agents: Lessons from Week 1 That Changed How I Think About Autonomous Systems</title>
      <dc:creator>Oni</dc:creator>
      <pubDate>Sat, 29 Nov 2025 13:21:53 +0000</pubDate>
      <link>https://dev.to/onirestart/debugging-ai-agents-lessons-from-week-1-that-changed-how-i-think-about-autonomous-systems-4f99</link>
      <guid>https://dev.to/onirestart/debugging-ai-agents-lessons-from-week-1-that-changed-how-i-think-about-autonomous-systems-4f99</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-kaggle-ai-agents-2025-11-10"&gt;Google AI Agents Writing Challenge&lt;/a&gt;: Learning Reflections&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The First Bug That Haunted Me (And Taught Me Everything)
&lt;/h2&gt;

&lt;p&gt;Day 2 of the AI Agents Intensive Course. I was confident. I'd built ML models before, dabbled with transformers, even deployed a few AI projects. Then I hit my first major debugging session, and I realized I had &lt;strong&gt;no idea what I was doing&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The agent kept looping. Same thought process. Same action. Same result. Over and over. It wasn't crashing - which made it worse. It was &lt;strong&gt;stuck in a reasoning loop&lt;/strong&gt;, unable to break free.&lt;/p&gt;

&lt;p&gt;That's when I learned the first real lesson: &lt;strong&gt;debugging AI agents is fundamentally different from debugging code.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Traditional Debugging Breaks Down
&lt;/h2&gt;

&lt;p&gt;With traditional code, I can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set breakpoints&lt;/li&gt;
&lt;li&gt;Inspect variables&lt;/li&gt;
&lt;li&gt;Trace execution paths&lt;/li&gt;
&lt;li&gt;Reproduce bugs reliably&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With agents, it's chaos:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The same prompt produces different outputs&lt;/li&gt;
&lt;li&gt;Reasoning depends on LLM temperature, context, and probability distributions&lt;/li&gt;
&lt;li&gt;The "execution path" isn't deterministic&lt;/li&gt;
&lt;li&gt;Reproduction requires capturing the exact state, which is nearly impossible&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The course showed me that &lt;strong&gt;agent debugging is about understanding reasoning patterns&lt;/strong&gt;, not line-by-line execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Three Debugging Patterns That Saved My Projects
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Thought Tracing (The ReAct Lifesaver)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I was building a task decomposition agent that kept generating vague sub-tasks. The agent could "think" fine but couldn't break down problems meaningfully.&lt;/p&gt;

&lt;p&gt;The breakthrough: I added explicit thought logging and examined the actual reasoning steps:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Thought: "I need to analyze the user's request"
Action: read_documentation
Observation: [entire documentation]
Thought: "Now I understand"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The problem was obvious once I saw it: the agent was reading TOO MUCH context. It was drowning in information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix&lt;/strong&gt;: Structured prompts with explicit reasoning checkpoints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Step 1: IDENTIFY the core problem in 1 sentence
Step 2: LIST the sub-tasks needed (max 5)
Step 3: ASSIGN priority to each
Step 4: EXECUTE in order
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This simple structure reduced looping by 80% and made reasoning transparent.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Tool Instrumentation (The Observation Lens)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;My weather-forecasting agent kept making terrible predictions. The reasoning seemed sound, but the outputs were nonsensical.&lt;/p&gt;

&lt;p&gt;I instrumented my tools to log what the agent was &lt;em&gt;actually&lt;/em&gt; observing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;weather_api_call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;location&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;fetch_weather&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;location&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;log_observation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent received: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Turns out the agent was receiving incomplete JSON responses. It was reasoning perfectly based on garbage data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson&lt;/strong&gt;: The agent isn't broken. The &lt;strong&gt;tool integration&lt;/strong&gt; is. Agents are only as good as the observations their tools provide.&lt;/p&gt;

&lt;p&gt;This realization changed how I architect agent systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Structured tool outputs (JSON schema validation)&lt;/li&gt;
&lt;li&gt;Verbose observations with context&lt;/li&gt;
&lt;li&gt;Tool-specific error handling&lt;/li&gt;
&lt;/ul&gt;
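&lt;p&gt;Here's a minimal sketch of that first bullet - validating a tool's JSON output before the agent ever sees it. The field names are hypothetical; the point is failing loudly on incomplete data instead of letting the agent reason over garbage:&lt;/p&gt;

```python
import json

# Hypothetical required fields for a weather tool's observation.
REQUIRED_FIELDS = {"location", "temperature_c", "conditions"}

def validate_observation(raw):
    """Parse a tool response and raise on incomplete data,
    so the agent never reasons over a partial observation."""
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - set(data)
    if missing:
        raise ValueError(f"Tool returned incomplete observation, missing: {sorted(missing)}")
    return data
```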

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Prompt Archaeology (The Iterative Refinement)&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;My MindCareAI assessment agent kept recommending interventions that weren't appropriate. I was about to blame the model when I realized:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I had never explicitly told the agent when to say "I don't know."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I'd given it 50 rules about what TO do, but zero guidance on what NOT to do.&lt;/p&gt;

&lt;p&gt;I added explicit boundaries to the system prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;You are a mental health assessment advisor.
ALWAYS respect these limits:
- Never diagnose clinical disorders (that's for professionals)
- Flag high-risk indicators for immediate professional referral
- Acknowledge uncertainty: "Based on available information..."
- Suggest professional help when unsure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Suddenly the agent became more trustworthy, not because it was smarter, but because it &lt;strong&gt;understood its boundaries&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Debugging Mindset Shift
&lt;/h2&gt;

&lt;p&gt;Before the course, debugging was about finding bugs.&lt;/p&gt;

&lt;p&gt;Now, I understand debugging agents is about:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Understanding what the agent observes&lt;/strong&gt; (tool outputs, context)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validating the reasoning steps&lt;/strong&gt; (thought traces)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verifying alignment with intent&lt;/strong&gt; (does the behavior match goals?)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Setting clear boundaries&lt;/strong&gt; (what should the agent NOT do?)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Practical Debugging Tools I Now Use
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Tool #1: Verbose Logging&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Every agent interaction gets logged with timestamps, reasoning steps, tool calls, and observations. I can replay agent behavior and understand exactly where it went wrong.&lt;/p&gt;
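&lt;p&gt;A minimal sketch of what that logging looks like - the event shape here is my own convention, not a library API:&lt;/p&gt;

```python
import time

def log_step(trace, step_type, content):
    """Append one agent event (thought, action, or observation)
    to an in-memory trace with a timestamp."""
    trace.append({"ts": time.time(), "type": step_type, "content": content})

def replay(trace):
    """Render the trace in order so you can see exactly
    where the reasoning went wrong."""
    return [f"[{e['type'].upper()}] {e['content']}" for e in trace]

trace = []
log_step(trace, "thought", "I need the weather for Lagos")
log_step(trace, "action", "weather_api_call(location='Lagos')")
log_step(trace, "observation", '{"temperature_c": 31}')
```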

&lt;h3&gt;
  
  
  &lt;strong&gt;Tool #2: Prompt Versioning&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Like code versioning, I maintain versions of system prompts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;v1.0 - Initial prompt (too vague, agent looped)
v1.1 - Added ReAct structure (80% better)
v1.2 - Added tool-specific instructions (fixed bad observations)
v2.0 - Added boundary conditions (fixed unsafe recommendations)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Tool #3: Test Cases for Reasoning&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I test agents like I test code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Edge cases: "What if the tool fails?"&lt;/li&gt;
&lt;li&gt;Ambiguous inputs: "What if the user request is vague?"&lt;/li&gt;
&lt;li&gt;Boundary conditions: "What if data is missing?"&lt;/li&gt;
&lt;li&gt;Adversarial inputs: "What if the user asks something the agent shouldn't do?"&lt;/li&gt;
&lt;/ul&gt;
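&lt;p&gt;In practice these become ordinary unit tests against a stubbed agent. A toy sketch - the "agent" here is a stand-in that only shows the shape of the behaviors worth testing, not a real LLM loop:&lt;/p&gt;

```python
def run_agent(user_input, tool):
    """Minimal stand-in agent: asks for clarification on vague input,
    calls its tool, and degrades gracefully when the tool fails."""
    if not user_input.strip():
        return "Could you clarify what you need?"
    try:
        return f"Result: {tool(user_input)}"
    except Exception:
        return "The tool failed; please try again later."

# Edge case: the tool raises instead of returning data.
def broken_tool(_):
    raise RuntimeError("API down")
```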

&lt;h3&gt;
  
  
  &lt;strong&gt;Tool #4: Monitoring Agent Health&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I track metrics that matter for agents:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Loop detection (same reasoning repeated?)&lt;/li&gt;
&lt;li&gt;Tool success rate (are tools returning useful data?)&lt;/li&gt;
&lt;li&gt;Action diversity (is the agent trying different approaches?)&lt;/li&gt;
&lt;li&gt;Decision quality (are recommendations reasonable?)&lt;/li&gt;
&lt;/ul&gt;
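&lt;p&gt;Loop detection, for example, can start as something very cheap: flag when the same action dominates the last few steps. The window and threshold values here are arbitrary assumptions:&lt;/p&gt;

```python
from collections import Counter

def detect_loop(actions, window=4, threshold=3):
    """Flag when one action repeats too often in the most recent
    steps - a cheap proxy for a reasoning loop."""
    recent = actions[-window:]
    if not recent:
        return False
    most_common_count = Counter(recent).most_common(1)[0][1]
    return most_common_count >= threshold
```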

&lt;h2&gt;
  
  
  The Mindset: Agents as Living Systems
&lt;/h2&gt;

&lt;p&gt;The biggest mindset shift happened when I stopped thinking of agents as &lt;strong&gt;deterministic programs&lt;/strong&gt; and started thinking of them as &lt;strong&gt;learning interpreters&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;They don't execute instructions. They &lt;strong&gt;reason about problems and decide actions&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Bugs aren't always reproducible&lt;/li&gt;
&lt;li&gt;Improvements come from better prompts, not code patches&lt;/li&gt;
&lt;li&gt;Safety requires constraints, not features&lt;/li&gt;
&lt;li&gt;Understanding is more valuable than fixing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I'm Applying to MindCareAI's Next Version
&lt;/h2&gt;

&lt;p&gt;The assessment agent needed a complete rethink based on these lessons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Explicit Reasoning Checkpoints&lt;/strong&gt;: Users see HOW the agent reached conclusions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool Integration&lt;/strong&gt;: Each diagnostic question is logged so I can see what data the agent actually observes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Boundary Conditions&lt;/strong&gt;: Clear rules about when to escalate to professionals&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test Suite&lt;/strong&gt;: Edge cases for mental health scenarios (risk indicators, ambiguous symptoms, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt;: Real-time dashboards showing agent reasoning quality&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Real Takeaway
&lt;/h2&gt;

&lt;p&gt;Debugging AI agents taught me that &lt;strong&gt;human understanding of the reasoning process is more valuable than code correctness&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;A bug-free agent that doesn't explain itself is useless.&lt;br&gt;
A slightly imperfect agent with transparent reasoning is trustworthy.&lt;/p&gt;

&lt;p&gt;This shift - from fixing code to understanding reasoning - is the bridge between building AI systems and building &lt;strong&gt;trustworthy&lt;/strong&gt; AI systems.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The future of AI engineering isn't about building smarter agents. It's about building agents we can understand, verify, and trust.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;That requires a completely different approach to debugging.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;For fellow students building AI agents&lt;/strong&gt;: &lt;br&gt;
When your agent breaks, don't immediately start coding. First, understand what it's observing. Then validate its reasoning. Then set boundaries. The bug was probably there all along - you just needed to look at it from the agent's perspective.&lt;/p&gt;

&lt;p&gt;That's the debugging mindset that separates experimental AI from production-ready systems.&lt;/p&gt;

</description>
      <category>googleaichallenge</category>
      <category>ai</category>
      <category>agents</category>
      <category>devchallenge</category>
    </item>
    <item>
      <title>From Zero to Agentic: My AI Agents Intensive Course Journey - Building the Future of AI Systems</title>
      <dc:creator>Oni</dc:creator>
      <pubDate>Sat, 29 Nov 2025 13:11:36 +0000</pubDate>
      <link>https://dev.to/onirestart/from-zero-to-agentic-my-ai-agents-intensive-course-journey-building-the-future-of-ai-systems-4pb8</link>
      <guid>https://dev.to/onirestart/from-zero-to-agentic-my-ai-agents-intensive-course-journey-building-the-future-of-ai-systems-4pb8</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/google-kaggle-ai-agents-2025-11-10"&gt;Google AI Agents Writing Challenge&lt;/a&gt;: Learning Reflections&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Moment Everything Clicked
&lt;/h2&gt;

&lt;p&gt;I entered the 5-Day AI Agents Intensive Course as a web developer who built React applications and ran the occasional machine learning experiment. I was curious about AI agents but honestly couldn't envision how they'd fundamentally change how I approach building systems.&lt;/p&gt;

&lt;p&gt;I left as an engineer who now thinks in terms of &lt;strong&gt;autonomous decision-making, hierarchical reasoning, and emergent behavior&lt;/strong&gt;. This course fundamentally rewired how I think about software architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways That Resonate
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Agents Aren't Just Smarter Chatbots&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Coming in, I conflated AI agents with large language models having conversations. The course clarified the distinction: agents are &lt;strong&gt;decision-making systems that perceive their environment, reason about actions, and execute plans to achieve goals&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This distinction was profound. A chatbot responds to queries. An agent continuously monitors its environment and takes autonomous actions.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Multi-Agent Systems Are the Real Power&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The capstone labs on multi-agent architectures blew my mind. Watching specialized agents coordinate to solve complex problems - market simulators with buyer/seller agents, code generation with reviewer agents, customer service with escalation agents - showed me that the future of AI isn't monolithic models but &lt;strong&gt;orchestrated agent networks&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I'm already redesigning MindCareAI's architecture with this lens: specialized agents for assessment processing, recommendation generation, and user engagement.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Reasoning and Planning Are Learnable Skills&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I assumed reasoning was some black-box magic in LLMs. The course demonstrated that &lt;strong&gt;agentic reasoning follows learnable patterns&lt;/strong&gt;: breaking complex problems into sub-goals, maintaining working memory, iterating on solutions.&lt;/p&gt;

&lt;p&gt;The Chain-of-Thought and Tree-of-Thought techniques revealed that better reasoning isn't about bigger models - it's about structured thinking patterns. This was liberating because it means I can build intelligent agents without access to GPT-4.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Tool Integration Is Everything&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;An agent without tools is just a text generator. The labs on tool calling, API integration, and knowledge retrieval showed the real magic: agents become powerful when they can &lt;strong&gt;perceive beyond their training data and execute actions in the real world&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;For my work building AI-powered applications, this means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Agents can query real databases, not just their training knowledge&lt;/li&gt;
&lt;li&gt;Agents can trigger actual workflows, not just suggest actions&lt;/li&gt;
&lt;li&gt;Agents can access real-time information and respond adaptively&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. &lt;strong&gt;The Role of Humans Transforms&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The course repeatedly emphasized that agents augment human decision-making rather than replace it. The most powerful systems have &lt;strong&gt;clear human-in-the-loop checkpoints&lt;/strong&gt; where agents propose actions and humans approve or refine them.&lt;/p&gt;

&lt;p&gt;This completely changed how I think about automation ethics and responsibility in AI systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  How My Understanding Evolved
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Before the Course:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;"AI agents are advanced chatbots"&lt;/li&gt;
&lt;li&gt;"I need GPT-4 to build intelligent systems"&lt;/li&gt;
&lt;li&gt;"Reasoning happens inside the model"&lt;/li&gt;
&lt;li&gt;"Automating a process means removing the human entirely"&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  After the Course:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;"AI agents are decision-making systems that plan, reason, and act"&lt;/li&gt;
&lt;li&gt;"I can build effective agents with smaller models and good system design"&lt;/li&gt;
&lt;li&gt;"Reasoning emerges from structured thinking patterns and tool use"&lt;/li&gt;
&lt;li&gt;"The best AI systems have intentional human collaboration points"&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Hands-On Insights I'll Never Forget
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;ReAct Pattern Lab&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Implementing the Reasoning + Acting pattern showed me that structured prompting can be more powerful than fine-tuning. The agent that explicitly "thought" before "acting" massively outperformed the end-to-end baseline.&lt;/p&gt;
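&lt;p&gt;Stripped to its skeleton, the pattern is just a loop: the model emits Thought/Action lines, you execute the tool, and the Observation goes back into the transcript. A toy sketch - the transcript format and the `llm` callable are my own assumptions, not the lab's exact code:&lt;/p&gt;

```python
import re

def react_loop(llm, tools, question, max_steps=5):
    """Minimal ReAct sketch: `llm` is any callable from transcript text
    to the next model output. Each Action line runs a tool, and the
    Observation is appended until the model emits a Final answer."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        out = llm(transcript)
        transcript += "\n" + out
        final = re.search(r"Final: (.*)", out)
        if final:
            return final.group(1)
        action = re.search(r"Action: (\w+)\((.*)\)", out)
        if action:
            name, arg = action.groups()
            transcript += f"\nObservation: {tools[name](arg)}"
    return "No answer within step budget"
```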

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Tool Calling in Practice&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Building an agent that could call Python functions, SQL queries, and APIs simultaneously taught me about integration complexity. Error handling and fallback strategies became central to agentic design.&lt;/p&gt;
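&lt;p&gt;The fallback shape I settled on is roughly: retry the primary tool, fall back to a backup source, and - crucially - return a failure observation the agent can reason about rather than an unhandled exception. A sketch under those assumptions:&lt;/p&gt;

```python
def call_with_fallback(primary, backup, arg, retries=2):
    """Try the primary tool with retries, then the backup, then return
    an explicit failure observation instead of raising into the agent.
    (A real version would back off between retries.)"""
    for _ in range(retries):
        try:
            return {"source": "primary", "data": primary(arg)}
        except Exception:
            pass
    try:
        return {"source": "backup", "data": backup(arg)}
    except Exception:
        return {"source": "none", "error": "all tools failed for " + repr(arg)}
```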

&lt;h3&gt;
  
  
  3. &lt;strong&gt;The Multi-Agent Orchestration&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The final capstone where I coordinated multiple specialized agents taught me that system design matters as much as individual agent design. How agents communicate, pass context, and handle conflicts became the actual bottleneck.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Prompt Engineering for Agents&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Prompts for agents are fundamentally different from prompts for chatbots. Agents need &lt;strong&gt;clear role definition, explicit thinking space, tool availability information, and success criteria&lt;/strong&gt;. Vague prompts break agent planning.&lt;/p&gt;
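&lt;p&gt;I now assemble agent prompts from those four pieces programmatically. A sketch - the section wording is my own convention, not course material:&lt;/p&gt;

```python
def build_agent_prompt(role, tools, success_criteria):
    """Assemble a system prompt from the four pieces agents need:
    role definition, explicit thinking space, tool availability,
    and success criteria."""
    tool_lines = "\n".join(f"- {name}: {desc}" for name, desc in tools.items())
    return (
        f"ROLE: {role}\n\n"
        "THINKING: Write your reasoning as 'Thought:' lines before any action.\n\n"
        f"TOOLS AVAILABLE:\n{tool_lines}\n\n"
        f"SUCCESS CRITERIA: {success_criteria}"
    )
```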

&lt;h2&gt;
  
  
  What I'm Building Next
&lt;/h2&gt;

&lt;p&gt;These insights directly influence my next projects:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MindCareAI Redesign&lt;/strong&gt;: Multi-agent architecture with specialized agents for assessment, recommendation, and follow-up&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Autonomous Code Reviewer&lt;/strong&gt;: Agents that understand code intent, identify issues, and suggest improvements (beyond simple linting)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intelligent Data Pipeline&lt;/strong&gt;: Agents that monitor data quality, detect anomalies, and automatically trigger remediation workflows&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Real Transformation
&lt;/h2&gt;

&lt;p&gt;If I could summarize the course in one sentence: &lt;strong&gt;The Intensive Course taught me that intelligence isn't just computation - it's perception, reasoning, action, and iteration working in concert.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I came for the technical fundamentals. I left with a new mental model for how to architect autonomous systems. The practical labs grounded theory in reality, and the community discussions sparked creative ideas about how to apply agentic patterns to problems I haven't even encountered yet.&lt;/p&gt;

&lt;p&gt;For anyone on the fence about taking this course: if you build software and want to understand the future of intelligent systems, this is essential. You won't just learn about AI agents - you'll learn to think like an agent architect.&lt;/p&gt;

&lt;p&gt;The future is agentic. And now I know how to build it.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Special Thanks&lt;/strong&gt;: To the Google and Kaggle teams for an extraordinarily well-designed course, and to my cohort members who pushed me to think deeper about these concepts. The community Discord was invaluable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources That Helped&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.kaggle.com/learn-guide/5-day-agents" rel="noopener noreferrer"&gt;Kaggle Learn Guide: 5-Day Agents&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://platform.uno/" rel="noopener noreferrer"&gt;Google AI's Agent Architecture Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Course Discord Community&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>googleaichallenge</category>
      <category>ai</category>
      <category>agents</category>
      <category>devchallenge</category>
    </item>
  </channel>
</rss>
