<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yu-Wei Simon Liu (Simon Liu)</title>
    <description>The latest articles on DEV Community by Yu-Wei Simon Liu (Simon Liu) (@yuwei_simonliusimonl).</description>
    <link>https://dev.to/yuwei_simonliusimonl</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3704238%2Fb6279552-b221-4ead-ac09-6e6b8771e25c.png</url>
      <title>DEV Community: Yu-Wei Simon Liu (Simon Liu)</title>
      <link>https://dev.to/yuwei_simonliusimonl</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yuwei_simonliusimonl"/>
    <language>en</language>
    <item>
      <title>[Open Source] ADEval — A Tool for Evaluating Tool-Use Capabilities of Google ADK AI Agents</title>
      <dc:creator>Yu-Wei Simon Liu (Simon Liu)</dc:creator>
      <pubDate>Tue, 10 Mar 2026 04:58:31 +0000</pubDate>
      <link>https://dev.to/gde/open-source-adeval-a-tool-for-evaluating-tool-use-capabilities-of-google-adk-ai-agents-3284</link>
      <guid>https://dev.to/gde/open-source-adeval-a-tool-for-evaluating-tool-use-capabilities-of-google-adk-ai-agents-3284</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ypmonkknchaankgl1rg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ypmonkknchaankgl1rg.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  I. Introduction: Why Building a "Stable" AI Agent is Hard
&lt;/h3&gt;

&lt;p&gt;As developers, we all know that scaffolding an AI Agent using Google’s &lt;strong&gt;Agent Development Kit (ADK)&lt;/strong&gt; or various LLM frameworks is relatively straightforward. The real challenge, however, lies in &lt;strong&gt;ensuring the Agent's behavior is predictable and stable.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In real-world business scenarios, an Agent might call the right tool today but deviate tomorrow due to slight prompt variations, model updates, or context interference. Relying solely on manual chat testing is inefficient and fails to cover critical edge cases.&lt;/p&gt;

&lt;p&gt;This is why I built &lt;strong&gt;ADEval&lt;/strong&gt; — a systematic evaluation tool designed specifically for AI Agents. It empowers developers to gain deep control over Agent behavior through a dual-track approach: &lt;strong&gt;Automation&lt;/strong&gt; and &lt;strong&gt;Visualization&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  II. GitHub: ADEval
&lt;/h3&gt;

&lt;p&gt;ADEval provides an intuitive Web UI and a powerful CLI, allowing you to systematically test your Agent's &lt;strong&gt;Question-Tools-Answer (Q-Tools-A)&lt;/strong&gt; flow. It supports experiment management, batch testing, and comprehensive tracing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdoeem3sm1tbgek1qqwom.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdoeem3sm1tbgek1qqwom.png" alt=" " width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;GitHub Repository&lt;/strong&gt;
&lt;a href="https://github.com/ap-mic-inc/ADEval" rel="noopener noreferrer"&gt;GitHub - ap-mic-inc/ADEval: Google ADK Evaluation Service&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Project Documentation&lt;/strong&gt;
&lt;a href="https://github.com/ap-mic-inc/ADEval/tree/main/docs" rel="noopener noreferrer"&gt;ADEval Documentation&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;h3&gt;
  
  
  III. Core Philosophy: The Q-Tools-A Validation Framework
&lt;/h3&gt;

&lt;p&gt;When evaluating an AI Agent, comparing the final text response (Answer) is simply not enough. A high-quality Agent must call the right &lt;strong&gt;Tools&lt;/strong&gt; at the right time with the correct parameters.&lt;/p&gt;

&lt;p&gt;ADEval is designed around the &lt;strong&gt;Q-Tools-A&lt;/strong&gt; logic:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Question&lt;/strong&gt;: The input prompt, specific User ID, and necessary Session State (persistence).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Tools&lt;/strong&gt;: Automatically validates if the Agent invoked the expected tools. We support &lt;strong&gt;Smart Argument Comparison&lt;/strong&gt; — even if the JSON parameter order differs, ADEval marks it as a match if the values and logic are consistent.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Answer&lt;/strong&gt;: Ensures the final response meets business requirements via keyword matching or semantic checks.&lt;/li&gt;
&lt;/ol&gt;
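&lt;p&gt;To make the Q-Tools-A idea concrete, here is a minimal hypothetical sketch of such a test case and its pass/fail check in Python. The field names and check logic are illustrative only, not ADEval's actual schema:&lt;/p&gt;

```python
# Illustrative sketch of a Q-Tools-A test case; names are hypothetical,
# not ADEval's real schema.
from dataclasses import dataclass, field

@dataclass
class QToolsACase:
    question: str                # the input prompt (Q)
    expected_tools: list         # tool names the Agent should invoke
    expected_keywords: list      # keywords the final answer must contain
    session_state: dict = field(default_factory=dict)  # persisted state

def check_case(case: QToolsACase, called_tools: list, answer: str) -> bool:
    """Pass only if the expected tools were called and the answer matches."""
    tools_ok = set(case.expected_tools).issubset(called_tools)
    answer_ok = all(kw in answer for kw in case.expected_keywords)
    return tools_ok and answer_ok

case = QToolsACase(
    question="What is the weather in Taipei?",
    expected_tools=["get_weather"],
    expected_keywords=["Taipei"],
)
print(check_case(case, ["get_weather"], "It is sunny in Taipei."))  # True
```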

&lt;p&gt;Below is the workflow logic diagram I designed for ADEval, ensuring a clear path from start to output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnitf4ttevvyw6od0c0vz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnitf4ttevvyw6od0c0vz.png" alt=" " width="800" height="1772"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ADEval Workflow Logic Diagram (Mermaid Chart)&lt;/p&gt;




&lt;h3&gt;
  
  
  IV. Dual Mode: From Debugging to Production Automation
&lt;/h3&gt;

&lt;p&gt;ADEval provides both an intuitive Web UI and a powerful CLI, allowing you to switch seamlessly between manual debugging and CI/CD automation.&lt;/p&gt;

&lt;h4&gt;
  
  
  1. Web UI: Visual Tracing &amp;amp; Real-time Debugging
&lt;/h4&gt;

&lt;p&gt;Observing an Agent's thought process is critical. The ADEval Web dashboard features a powerful &lt;strong&gt;"Playground"&lt;/strong&gt; where you can input questions and observe results in real-time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz1s5t09czmh8921x36y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffz1s5t09czmh8921x36y.png" alt=" " width="800" height="502"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ADEval Playground Interface showing testing panels&lt;/p&gt;

&lt;p&gt;The standout feature is &lt;strong&gt;Visual Tracing&lt;/strong&gt;. We transform complex API response event streams into a "Dark Terminal Style" viewer. You can expand raw JSON with one click to pinpoint exactly where a tool call failed or deviated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3riq3sjwe754fd9lewef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3riq3sjwe754fd9lewef.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Visual Tracing Dark Terminal View showing JSON events&lt;/p&gt;

&lt;h4&gt;
  
  
  2. CLI Tool: Powerhouse for CI/CD &amp;amp; Batch Execution
&lt;/h4&gt;

&lt;p&gt;Once your experiments are defined, you no longer need a browser. ADEval offers a full-fledged command-line tool perfect for automation scripts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Global Configuration (&lt;code&gt;adeval config&lt;/code&gt;)&lt;/strong&gt;: Set default API URLs and developer credentials to save time.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Quick Test (&lt;code&gt;adeval test&lt;/code&gt;)&lt;/strong&gt;: Perform stress tests or logic validation directly against the Agent without creating an experiment.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Batch Execution &amp;amp; Reporting (&lt;code&gt;adeval run / export&lt;/code&gt;)&lt;/strong&gt;: Execute entire experiment sets and receive precise statistical reports.&lt;/li&gt;
&lt;/ul&gt;
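&lt;p&gt;A typical automation flow strings these together. The subcommands below are the ones named above; exact arguments may differ, so check &lt;code&gt;adeval --help&lt;/code&gt; for the real options:&lt;/p&gt;

```shell
# Illustrative CI flow using the documented subcommands; run
# `adeval --help` for actual flags and arguments.
adeval config     # set the default API URL and credentials once
adeval test       # quick one-off validation against the Agent
adeval run        # execute a full experiment set in batch
adeval export     # export the statistical report for the run
```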

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3im6qzsfhk9m5ybwc5kb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3im6qzsfhk9m5ybwc5kb.png" alt=" " width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CLI output showing 'adeval run' statistical table&lt;/p&gt;




&lt;h3&gt;
  
  
  V. Deep Dive into Features
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Experiment Management &amp;amp; Batch Evaluation&lt;/strong&gt;: Import dozens or hundreds of test cases via &lt;strong&gt;CSV files&lt;/strong&gt;. ADEval provides real-time progress bars and pass-rate statistics during execution.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Local Data Ownership&lt;/strong&gt;: Privacy and performance are priorities. All experiment data, logs, and configs are stored locally in the &lt;code&gt;.adeval/&lt;/code&gt; folder. Your data stays on your machine.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Smart Comparison Logic&lt;/strong&gt;: Tool validation supports &lt;strong&gt;"Order-Independent"&lt;/strong&gt; comparison. If an Agent calls &lt;code&gt;get_weather(city="Taipei", unit="c")&lt;/code&gt;, ADEval still counts the call as a match even when the parameter order differs from the expected definition.&lt;/li&gt;
&lt;/ol&gt;
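&lt;p&gt;The order-independent idea can be sketched in a few lines of Python: parsing both argument payloads as JSON makes object key order irrelevant, while list order stays significant. This is an illustrative approximation, not ADEval's actual implementation:&lt;/p&gt;

```python
import json

def args_match(expected: str, actual: str) -> bool:
    """Compare two JSON argument payloads, ignoring object key order.

    json.loads turns JSON objects into Python dicts, and dict equality
    is order-independent, so '{"city": "Taipei", "unit": "c"}' matches
    '{"unit": "c", "city": "Taipei"}'. List order remains significant.
    Illustrative sketch only.
    """
    try:
        return json.loads(expected) == json.loads(actual)
    except json.JSONDecodeError:
        # Fall back to exact comparison for non-JSON payloads.
        return expected == actual

print(args_match('{"city": "Taipei", "unit": "c"}',
                 '{"unit": "c", "city": "Taipei"}'))  # True
```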




&lt;h3&gt;
  
  
  VI. Getting Started
&lt;/h3&gt;

&lt;p&gt;ADEval is open source and ready to use. You can install it with pip:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone the repository&lt;/span&gt;
git clone https://github.com/ap-mic-inc/ADEval.git
&lt;span class="nb"&gt;cd &lt;/span&gt;ADEval

&lt;span class="c"&gt;# Install in editable mode&lt;/span&gt;
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After installation, simply run &lt;code&gt;adeval ui&lt;/code&gt; to launch the web interface or check out &lt;code&gt;adeval --help&lt;/code&gt; for the full CLI suite.&lt;/p&gt;




&lt;h3&gt;
  
  
  VII. Conclusion
&lt;/h3&gt;

&lt;p&gt;Building an AI Agent that can chat naturally is just the beginning. Ensuring its stability and predictability in complex business scenarios is where the real engineering challenge lies. ADEval was born to solve the core pain points of testing, debugging, and maintaining Agent systems.&lt;/p&gt;

&lt;p&gt;In summary, ADEval brings three key values to developers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Precise Behavioral Control&lt;/strong&gt;: Move beyond text matching to rigorous Tool-use validation.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Flexible Workflow&lt;/strong&gt;: Covers the entire lifecycle from visual debugging to automated regression testing.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Security &amp;amp; Efficiency&lt;/strong&gt;: Localized storage for privacy and smart matching to reduce false positives.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you are building high-quality enterprise AI applications with Google ADK, ADEval is your indispensable testing companion. Give us a &lt;strong&gt;Star (⭐)&lt;/strong&gt; on GitHub or submit an Issue/PR to help us build a stronger AI evaluation ecosystem!&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;I am Simon&lt;/strong&gt;&lt;br&gt;
Hi everyone, I am Simon Liu, an AI Solutions Expert and Google Developer Expert (GDE) in GenAI. I am dedicated to helping enterprises implement AI technologies to solve real-world problems. If this post was helpful, please follow me on Medium or connect with me on LinkedIn.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Personal Website:&lt;/strong&gt; &lt;a href="https://simonliuyuwei.my.canva.site/link-in-bio" rel="noopener noreferrer"&gt;https://simonliuyuwei.my.canva.site/link-in-bio&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>showdev</category>
      <category>testing</category>
    </item>
    <item>
      <title>[Open Source Project] open-translate — Offline Translation Web Service Powered by TranslateGemma</title>
      <dc:creator>Yu-Wei Simon Liu (Simon Liu)</dc:creator>
      <pubDate>Sun, 18 Jan 2026 16:26:31 +0000</pubDate>
      <link>https://dev.to/gde/open-source-open-translate-offline-translation-web-service-powered-by-translategemma-2ob2</link>
      <guid>https://dev.to/gde/open-source-open-translate-offline-translation-web-service-powered-by-translategemma-2ob2</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;I’ve provided a Google Colab test script for this project. You can apply for Hugging Face and ngrok tokens to test it. Welcome to use it!&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkl5mgrtbwyjh7p1p78gt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkl5mgrtbwyjh7p1p78gt.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;On January 15, 2026, Google released a new model named TranslateGemma. Trained specifically for translation, it covers 55 languages, including Traditional Chinese and English, and even accepts image input for translation.&lt;/p&gt;

&lt;p&gt;I wondered if it was possible to create an offline "Google Translate-like" web service so that corporate confidential data could be processed in a non-networked environment. Thus, this project was born. This article will introduce TranslateGemma and my personal project.&lt;/p&gt;

&lt;h2&gt;
  
  
  I. Introduction to TranslateGemma
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Background &amp;amp; Overview
&lt;/h3&gt;

&lt;p&gt;TranslateGemma is a specialized LLM for translation tasks developed by Google DeepMind, built on the Gemma 3 architecture. It aims to provide the strongest translation capabilities in the open-source community. Unlike general chatbots, TranslateGemma focuses on language conversion. Weights are publicly available on Hugging Face and Vertex AI for local or cloud deployment.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Architecture &amp;amp; Training
&lt;/h3&gt;

&lt;p&gt;Its core advantage comes from a unique "two-stage training" process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supervised Fine-Tuning (SFT): Utilizing high-quality human translation data and synthetic data generated by Gemini.&lt;/li&gt;
&lt;li&gt;Reinforcement Learning (RL): Further guided by translation reward models like MetricX-QE and AutoMQM to align with human preferences and semantic precision.&lt;/li&gt;
&lt;li&gt;Model Sizes: Available in 4B (mobile), 12B (laptop/workstation), and 27B (cloud).&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Key Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Supported Languages: 55 core languages with training on nearly 500 language pairs. (Per the tech report, Traditional Chinese-related pairs include EN/Cantonese -&amp;gt; Trad. Chinese and Trad. Chinese -&amp;gt; Cantonese.)&lt;/li&gt;
&lt;li&gt;Multimodal Potential: Inherits Gemma 3's vision capabilities, enabling "visual translation" to understand text context in images (signs, menus, etc.).&lt;/li&gt;
&lt;li&gt;High Efficiency: The 12B version often outperforms larger unspecialized models in translation quality. (Reminder: the maximum context window is 2K tokens.)&lt;/li&gt;
&lt;/ul&gt;
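&lt;p&gt;That 2K-token limit matters in practice: longer documents have to be split before translation. Here is a minimal sketch, using whitespace "tokens" as a rough stand-in for the model's real tokenizer (the chunk size is illustrative):&lt;/p&gt;

```python
def chunk_text(text: str, max_tokens: int = 1500) -> list:
    """Split text into pieces that fit a small context window.

    Whitespace words approximate tokens here; in practice, count with
    the model's actual tokenizer and leave headroom for the prompt and
    the generated translation.
    """
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]

# Translate each chunk independently, then join the results.
chunks = chunk_text("word " * 4000, max_tokens=1500)
print(len(chunks))  # 3
```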

&lt;h3&gt;
  
  
  4. Performance &amp;amp; Applications
&lt;/h3&gt;

&lt;p&gt;It excels on authoritative benchmarks such as WMT24++.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The 12B version's quality (MetricX) even surpasses the general Gemma 3 27B model, proving the effectiveness of specialized training. &lt;/li&gt;
&lt;li&gt;Its flexibility makes it the best choice for balancing lightweight design and high quality in the open-source community.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Information
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Google Blog: &lt;a href="https://blog.google/innovation-and-ai/technology/developers-tools/translategemma/" rel="noopener noreferrer"&gt;https://blog.google/innovation-and-ai/technology/developers-tools/translategemma/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Huggingface: &lt;a href="https://huggingface.co/collections/google/translategemma" rel="noopener noreferrer"&gt;https://huggingface.co/collections/google/translategemma&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Tech Report: &lt;a href="https://arxiv.org/pdf/2601.09012" rel="noopener noreferrer"&gt;https://arxiv.org/pdf/2601.09012&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  II. Open Translate — Modern Interface for TranslateGemma
&lt;/h2&gt;

&lt;p&gt;Open Translate was created to close two gaps at once: localization and privacy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhiexzbyv0aalh7caz7j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhiexzbyv0aalh7caz7j.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dqdnva57hqpjo4irc6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dqdnva57hqpjo4irc6k.png" alt=" " width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Technology Stack
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Backend: FastAPI for a high-performance async API, using Hugging Face transformers to call the translategemma-4b-it model with CUDA acceleration.&lt;/li&gt;
&lt;li&gt;Frontend: React (Vite) + Bootstrap for a clean, modern UI with real-time previews.&lt;/li&gt;
&lt;li&gt;Database: SQLite (SQLAlchemy) for translation history logs.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  2. Highlights &amp;amp; Localization
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Multimodal Support: Image translation interface for screenshots, signs, or documents.&lt;/li&gt;
&lt;li&gt;Trad. Chinese Optimization: Integrated OpenCC to convert outputs into Taiwan-style phrasing and Traditional Chinese characters.&lt;/li&gt;
&lt;li&gt;Privacy &amp;amp; Security: Supports full offline deployment via Docker to ensure data remains internal.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Quick Start
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Google Colab: One-click script with Node.js/Python setup and ngrok access.
&lt;a href="https://colab.research.google.com/github/simonliu-ai-product/open-translate/blob/main/open_translate_project_workflow.ipynb" rel="noopener noreferrer"&gt;https://colab.research.google.com/github/simonliu-ai-product/open-translate/blob/main/open_translate_project_workflow.ipynb&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Docker Compose: Single command to run locally on NVIDIA GPUs.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. GitHub
&lt;/h3&gt;

&lt;p&gt;Link: &lt;a href="https://github.com/simonliu-ai-product/open-translate/tree/main" rel="noopener noreferrer"&gt;https://github.com/simonliu-ai-product/open-translate/tree/main&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  III. Conclusion
&lt;/h2&gt;

&lt;p&gt;The release of TranslateGemma proves that specialized small models can punch above their weight.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Democratizing Compute: 4B models provide professional quality on home laptops, lowering the barrier to entry.&lt;/li&gt;
&lt;li&gt;Multimodal is the Future: Translation moves beyond text to direct visual understanding, changing how we interact with the world.&lt;/li&gt;
&lt;li&gt;Open Source Value: Combining Google's models with modern web frameworks enables rapid problem-solving.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Open Translate is just a starting point. I will continue to optimize localization and explore integration with AI Agents. Welcome to download the source code on GitHub, give it a Star, or test it via Colab!&lt;/p&gt;




&lt;h2&gt;
  
  
  I am Simon
&lt;/h2&gt;

&lt;p&gt;Hi everyone, I am Simon Liu, an AI Solutions Expert and a Google Developer Expert (GDE) in GenAI. I look forward to helping enterprises implement AI technologies. If this article was helpful, please give it a "Clap" on Medium and follow my account. Feel free to leave comments on my LinkedIn!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9l3ktv307rci0aif7rww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9l3ktv307rci0aif7rww.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My Personal Website: &lt;a href="https://simonliuyuwei.my.canva.site/link-in-bio" rel="noopener noreferrer"&gt;https://simonliuyuwei.my.canva.site/link-in-bio&lt;/a&gt;&lt;/p&gt;

</description>
      <category>google</category>
      <category>gemma</category>
      <category>ai</category>
      <category>opensource</category>
    </item>
    <item>
      <title>[AI Agent] TPU-Based AI Agent Development: Integrating the Twinkle AI Open Source Model (gemma-3-4B-T1-it) with Google ADK Tools</title>
      <dc:creator>Yu-Wei Simon Liu (Simon Liu)</dc:creator>
      <pubDate>Mon, 12 Jan 2026 02:44:31 +0000</pubDate>
      <link>https://dev.to/gde/ai-agent-tpu-based-ai-agent-development-integrating-the-twinkle-ai-open-source-model-18ji</link>
      <guid>https://dev.to/gde/ai-agent-tpu-based-ai-agent-development-integrating-the-twinkle-ai-open-source-model-18ji</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;Special Thanks: I would like to express my gratitude to &lt;a href="https://www.apmic.ai/" rel="noopener noreferrer"&gt;APMIC&lt;/a&gt; and the &lt;a href="https://discord.com/invite/Cx737yw4ed" rel="noopener noreferrer"&gt;Twinkle AI community&lt;/a&gt; for their assistance, which made the completion of this article possible.&lt;/p&gt;

&lt;p&gt;Original Chinese Post: &lt;a href="https://medium.com/@simon3458/twinkleai-gemma-3-t1-4b-adk-agent-d3309665f448" rel="noopener noreferrer"&gt;https://medium.com/@simon3458/twinkleai-gemma-3-t1-4b-adk-agent-d3309665f448&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As Large Language Model (LLM) technology enters a stage of maturity, the focus of developers has shifted from simple "conversation generation" to "AI Agents" capable of autonomous planning and execution. However, creating an Agent that understands Taiwan's local culture and can accurately execute complex tool calls presents two major challenges: first, general-purpose models often lack understanding of local regulations and context; second, the high cost of GPU computing power limits the widespread adoption of applications.&lt;/p&gt;

&lt;p&gt;This article walks you through building AI Agent application services by combining the matrix-computation advantages of Google TPUs, Twinkle AI's gemma-3-4B-T1-it open-source model (optimized specifically for the Taiwanese context), and the Google ADK (Agent Development Kit).&lt;/p&gt;

&lt;p&gt;We will start with the underlying architecture of TPUs to explain why they are accelerators for AI inference. Next, we will introduce how Twinkle AI solves "alignment drift" and strengthens Function Calling capabilities. Finally, we will conduct a hands-on walkthrough using Google Colab, stacking and integrating these technologies to build an AI Agent from scratch that is responsive, understands Taiwanese linguistic habits, and can actually query stock information.&lt;/p&gt;




&lt;h2&gt;
  
  
  I. Introduction to Google TPU
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. What is a TPU (Tensor Processing Unit)?
&lt;/h3&gt;

&lt;p&gt;Google TPU is a "Domain-Specific Architecture" (DSA) integrated circuit tailored specifically for machine learning workloads. Unlike traditional processors that need to handle various general-purpose tasks, the core design of the TPU adopts a "Systolic Array" architecture. This design mimics the way a heart beats, allowing data to flow rhythmically between thousands of arithmetic units within the chip. This architecture enables the TPU to significantly reduce frequent memory access when performing matrix multiplication—the core operation of neural networks—thereby breaking through the "von Neumann bottleneck" of traditional computer architectures and achieving extremely high computational density and efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The Difference Between TPU and GPU
&lt;/h3&gt;

&lt;p&gt;The fundamental difference between the two lies in the philosophical opposition of "Specialization" vs. "Generalization."&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GPU (Graphics Processing Unit):&lt;/strong&gt; essentially a general-purpose parallel processor designed for graphics rendering. It retains a massive amount of control logic and cache to handle complex instruction streams, giving it high flexibility and a powerful CUDA software ecosystem suitable for highly variable research and applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;TPU:&lt;/strong&gt; sacrifices generality (it cannot efficiently handle non-matrix operations) and removes hardware units irrelevant to AI, dedicating all released resources to matrix operation units. This gives TPUs higher computational efficiency when processing large-scale static computation graphs in specific formats (such as bfloat16). However, the development barrier is higher than GPUs when dealing with dynamic control flows or custom operators, usually relying on the XLA compiler and JAX framework for optimization.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  II. Introduction to the Twinkle AI gemma-3-4B-T1-it Model
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop7j1ranjfd2pjfgl70l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fop7j1ranjfd2pjfgl70l.png" alt="Architecture overview and positioning of the Twinkle AI Gemma-3–4B-T1-it large language model optimized for the Taiwanese context" width="800" height="447"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
(Image Source: Huggingface)&lt;/p&gt;

&lt;p&gt;gemma-3-4B-T1-it is a 4B parameter model launched by Twinkle AI based on the Google Gemma 3 architecture. It aims to solve the "alignment drift" problem caused by uneven data in mainstream foundation models and to practice the concept of "Sovereign AI."&lt;/p&gt;

&lt;p&gt;The model has been deeply optimized for the Taiwanese context, correcting:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Vocabulary Misuse/Appropriation: (e.g., distinguishing between terms for "quality" and "mass").&lt;/li&gt;
&lt;li&gt;Legal and Institutional Hallucinations: (Citing current laws of the Republic of China rather than laws of the PRC).&lt;/li&gt;
&lt;li&gt;Cultural Meme Disconnects: (Understanding internet slang from communities like PTT and Dcard).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Through Gemma 3's "Local-Global Hybrid Attention Mechanism" and a 128K token context window, T1-4B-it achieves deep cultural alignment at a lightweight scale, positioning itself as a language model focused on Agent workflows and local needs.&lt;/p&gt;

&lt;p&gt;Regarding dataset selection and ecosystem collaboration, T1-4B-it adopts a rigorous data strategy. Training data includes lianghsun/tw-reasoning-instruct (reasoning instructions) designed for the Taiwan context, nvidia/Nemotron (instruction following), lianghsun/tw-contract-review-chat (contract review), and Chain of Thought (CoT) data prepared by Kerg (such as tw_mm_R1). Thanks to APMIC for providing the critical computing infrastructure that made this possible.&lt;/p&gt;

&lt;p&gt;On the architecture side, to strengthen Function Calling, T1-4B-it introduces the Hermes tool-call parser format during training, equipping it with powerful Agent capabilities. The model handles four levels of calling complexity: single function, multiple functions, parallel functions, and parallel multiple functions. In the BFCL evaluation it achieved an overall accuracy of 84.5%, with performance on the Multiple AST (Abstract Syntax Tree) category reaching as high as 89%. This demonstrates that, at the 4B parameter scale, it possesses tool-use and automated-execution capabilities surpassing many 7B or 13B models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg721aoi8ptfi0q50g5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg721aoi8ptfi0q50g5i.png" alt="Example of Google ADK handling parallel function calling and tool execution within an AI Agent workflow" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Example: Using Google ADK to handle parallel function processing simultaneously.&lt;/p&gt;

&lt;p&gt;For detailed information, please visit HuggingFace:&lt;br&gt;&lt;br&gt;
&lt;a href="https://huggingface.co/twinkle-ai/gemma-3-4B-T1-it" rel="noopener noreferrer"&gt;twinkle-ai/gemma-3-4B-T1-it&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  III. Hands-on: Launching an AI Agent Service on Google Colab via VLLM and Google ADK
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;If you are not familiar with Google ADK tools, you can read this article first: &lt;a href="https://medium.com/@simon3458/google-adk-tools-intro-202504-3181fd6ab567" rel="noopener noreferrer"&gt;https://medium.com/@simon3458/google-adk-tools-intro-202504-3181fd6ab567&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqmtx41m5p96cc7rlr2m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqmtx41m5p96cc7rlr2m.png" alt="End-to-end architecture diagram showing deployment of a Gemma-3–4B-T1-it AI Agent on Google Colab with TPU, vLLM, LiteLLM, and Google ADK" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
(Thanks to Twinkle AI community friend Thomas for assisting with the graphics!)&lt;/p&gt;

&lt;p&gt;The main goal of this project is to deploy the Twinkle AI Gemma-3-4B-T1-it model in a Google Colab TPU v5e-1 environment and turn it into an AI Agent capable of executing specific tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://colab.research.google.com/github/LiuYuWei/gemma-t1-4b-adk-agent/blob/main/gemma-t1-4b-adk-agent-colab-workflow-20260107-v2.ipynb" rel="noopener noreferrer"&gt;Google Colab Notebook Link&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Environment Preparation and Dependency Installation
&lt;/h3&gt;

&lt;p&gt;Hardware Setup: Confirm that the runtime is a Google TPU v5e-1, hardware designed specifically to accelerate machine-learning workloads.&lt;/p&gt;

&lt;p&gt;Core Packages: Install the TPU-enabled build of the vLLM inference engine, which is the key to fast model serving, along with the OpenAI SDK and LiteLLM for the API connection and request forwarding in the steps that follow.&lt;/p&gt;
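&lt;p&gt;A rough sketch of what this installation step looks like. The package names and extras below are assumptions based on common vLLM and LiteLLM setups; the notebook pins the exact versions, and the TPU build of vLLM should follow vLLM's own installation docs:&lt;/p&gt;

```python
# Hypothetical install commands for this step; the notebook pins exact
# versions, and the TPU-enabled vLLM build should be installed per vLLM's
# own documentation rather than the plain package shown here.
install_cmds = [
    "pip install -q openai",            # OpenAI SDK, used for API smoke tests
    "pip install -q 'litellm[proxy]'",  # LiteLLM plus its proxy-server extra
    "pip install -q vllm",              # placeholder: use the TPU build per vLLM docs
]

# In a Colab cell each command is run with a leading "!":
colab_cells = ["!" + cmd for cmd in install_cmds]
```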

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fye7af80psukszlc14x5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fye7af80psukszlc14x5e.png" alt="Screenshot of dependency installation and environment setup steps in Google Colab for TPU-based AI inference" width="795" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Launching vLLM Inference Service
&lt;/h3&gt;

&lt;p&gt;Load Model: Start the vLLM server from the terminal and load the Twinkle AI Gemma-3-4B-T1-it model.&lt;/p&gt;

&lt;p&gt;Enable Advanced Features: Configure parameters at startup to enable the model's "Auto Tool Choice" and "Hermes Tool Parser," giving the model the ability to understand and call external tools.&lt;/p&gt;
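&lt;p&gt;The launch command looks roughly like the sketch below. The model name comes from the article; the flag names match current vLLM releases, but the port and other serving options are assumptions, so verify against the notebook and your vLLM version:&lt;/p&gt;

```python
import shlex

# Sketch of the vLLM launch command with tool calling enabled.
# --enable-auto-tool-choice and --tool-call-parser hermes are the vLLM
# flags for automatic tool selection and Hermes-style tool-call parsing;
# the port is an assumption.
launch_cmd = (
    "vllm serve twinkle-ai/gemma-3-4B-T1-it "
    "--enable-auto-tool-choice "
    "--tool-call-parser hermes "
    "--port 8000"
)
args = shlex.split(launch_cmd)
```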

&lt;p&gt;Verify Service Status:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check if the model API is successfully online.&lt;/li&gt;
&lt;li&gt;Perform simple conversation tests to confirm the model responds normally.&lt;/li&gt;
&lt;li&gt;Critical Test: Verify that the model correctly performs "Function Calling" (e.g., ask about a database schema and confirm the model returns the correct tool-execution request).&lt;/li&gt;
&lt;/ul&gt;
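&lt;p&gt;The function-calling smoke test boils down to sending an OpenAI-compatible request like the one sketched below. The tool name and schema here are illustrative, not taken from the notebook; with auto tool choice enabled, a correct response carries a tool-call entry instead of plain text:&lt;/p&gt;

```python
import json

# Hypothetical tool schema for the function-calling smoke test; the tool
# name and fields are illustrative only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_table_schema",
        "description": "Return the column layout of a database table.",
        "parameters": {
            "type": "object",
            "properties": {"table": {"type": "string"}},
            "required": ["table"],
        },
    },
}]

# OpenAI-compatible chat request body sent to the local vLLM server.
request_body = {
    "model": "twinkle-ai/gemma-3-4B-T1-it",
    "messages": [{"role": "user", "content": "What columns does the orders table have?"}],
    "tools": tools,
    "tool_choice": "auto",
}
payload = json.dumps(request_body)
```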

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5q6zdtru9g0gf2jbgq4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5q6zdtru9g0gf2jbgq4x.png" alt="Console output showing successful vLLM service startup and function calling verification results" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Setting up the LiteLLM API Bridge
&lt;/h3&gt;

&lt;p&gt;Configure Forwarding Rules: Create a configuration file that forwards standard OpenAI-style API requests to the backend vLLM service. This step standardizes the model interface for compatibility with Google ADK.&lt;/p&gt;

&lt;p&gt;Start Proxy Service: Launch the LiteLLM proxy server in the background and monitor it until the service is fully ready (the model list becomes available).&lt;/p&gt;
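&lt;p&gt;A minimal LiteLLM proxy configuration for this step might look like the following. The model alias, port, and dummy key are assumptions; only the &lt;code&gt;api_base&lt;/code&gt; must match where vLLM is actually serving:&lt;/p&gt;

```yaml
# config.yaml: forward OpenAI-style requests to the local vLLM server
model_list:
  - model_name: gemma-t1-4b              # alias the ADK agent will request
    litellm_params:
      model: openai/twinkle-ai/gemma-3-4B-T1-it
      api_base: http://localhost:8000/v1 # vLLM's OpenAI-compatible endpoint
      api_key: dummy                     # vLLM does not check the key
```

&lt;p&gt;The proxy is then started with &lt;code&gt;litellm --config config.yaml --port 4000&lt;/code&gt; (the port is an assumption).&lt;/p&gt;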

&lt;h3&gt;
  
  
  4. Integrating Google ADK (Agent Development Kit)
&lt;/h3&gt;

&lt;p&gt;Get Agent Example: Download a pre-written Stock Query Agent example project from GitHub.&lt;/p&gt;

&lt;p&gt;Install Agent Dependencies: Install the Python packages required for the Agent project.&lt;/p&gt;

&lt;p&gt;Set Environment Variables: Configure the keys and API addresses needed for the Agent connection, pointing them to the LiteLLM service we just set up.&lt;/p&gt;

&lt;p&gt;Run and Test Agent: Launch the Google ADK command-line interface and converse with the Agent (e.g., ask for TSMC's stock price). The Agent then automatically recognizes what is needed, calls the stock-query tool, retrieves the data, and finally uses the Gemma model to generate a natural-language response.&lt;/p&gt;
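&lt;p&gt;For intuition, here is a hypothetical stand-in for the stock-query tool in the example project (the real tool would call a market-data API, and the prices below are canned demo values). ADK can register a plain Python function like this by passing it in the agent's tools list:&lt;/p&gt;

```python
# Hypothetical stand-in for the example project's stock-query tool.
# A real implementation would call a market-data API; the prices here
# are canned demo values. In ADK, such a function can be registered
# directly on the agent, e.g. Agent(..., tools=[get_stock_price]).
def get_stock_price(ticker: str) -> dict:
    """Return the latest price for a stock ticker."""
    demo_prices = {"TSM": 210.45, "GOOGL": 178.02}  # canned demo data
    price = demo_prices.get(ticker.upper())
    if price is None:
        return {"status": "error", "message": f"unknown ticker: {ticker}"}
    return {"status": "ok", "ticker": ticker.upper(), "price_usd": price}
```

&lt;p&gt;Returning a dict with an explicit status field makes it easy for the model to decide whether the tool call succeeded before composing its answer.&lt;/p&gt;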

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqka1lsew6n46gnr45p6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqka1lsew6n46gnr45p6k.png" alt="Terminal interaction showing Google ADK Agent querying stock information and invoking tools automatically" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. (Optional) Establishing a Remote Development Tunnel
&lt;/h3&gt;

&lt;p&gt;Set up ngrok: Use the ngrok tool to expose the API service inside Colab to the public internet.&lt;/p&gt;

&lt;p&gt;Local Connection: This lets developers build the ADK frontend or logic on their local machine while the heavy model-inference computation runs on the TPU in Colab, an efficient "Local Development, Cloud Inference" workflow.&lt;/p&gt;
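&lt;p&gt;On the local machine, this amounts to pointing the OpenAI-compatible client environment at the tunnel instead of localhost. The URL below is a placeholder; the real one is printed when the ngrok tunnel starts:&lt;/p&gt;

```python
import os

# Placeholder ngrok URL; the real one is printed when the tunnel starts.
NGROK_URL = "https://example-tunnel.ngrok-free.app"

# Point the local client at the Colab-hosted LiteLLM proxy instead of
# localhost; the variable names follow the OpenAI-compatible convention.
os.environ["OPENAI_API_BASE"] = NGROK_URL + "/v1"
os.environ["OPENAI_API_KEY"] = "dummy"  # the proxy does not validate it

api_base = os.environ["OPENAI_API_BASE"]
```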

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftb5emr9o8z5ipcepkxx4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftb5emr9o8z5ipcepkxx4.png" alt="Diagram illustrating use of ngrok to expose Colab-based AI Agent services for local development" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This process demonstrates the complete integration from underlying model deployment and mid-layer API forwarding to upper-layer Agent application logic, utilizing Google Colab's TPU computing power to build intelligent AI applications.&lt;/p&gt;




&lt;h2&gt;
  
  
  IV. Conclusion
&lt;/h2&gt;

&lt;p&gt;This hands-on exercise is not only a showcase of a technology stack; it also verifies the huge potential of combining "Specialized Hardware" with "Localized Small Models."&lt;/p&gt;

&lt;p&gt;Through the specialized acceleration of Google TPU v5e, we showed that even a lightweight 4B-parameter model, when paired with high-quality localized instruction fine-tuning (such as Twinkle AI's Gemma-3-T1-it) and an appropriate inference stack (vLLM + Google ADK), can demonstrate logical reasoning and tool-use capabilities beyond its size class.&lt;/p&gt;

&lt;p&gt;This solution offers three important insights for developers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Compute is no longer a high wall: TPUs provide an efficient alternative to GPUs. Through platforms like Colab, developers can access powerful matrix computing resources with a lower barrier to entry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Localization is crucial: The performance of the Twinkle AI model proves that models which solve "cultural disconnects" and "regulatory hallucinations" are better suited for actual business and life scenarios—an advantage general-purpose models struggle to replace.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Standardization of Agent Development: The introduction of Google ADK and standardized APIs (LiteLLM) evolves Agent development from "hand-crafting Prompts" to modular engineering practices, significantly improving development efficiency and stability.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With the open-sourcing of the Google Gemma 3 architecture and the ubiquity of TPU cloud resources, we are on the eve of a blossoming of AI applications. I hope this tutorial helps developers in various fields quickly build intelligent assistants that understand local languages and solve real problems, truly realizing the democratization and innovation of AI technology.&lt;/p&gt;




&lt;h2&gt;
  
  
  I am Simon
&lt;/h2&gt;

&lt;p&gt;Hello everyone, I am Simon Liu (Liu Yu-wei), an AI Solutions Expert and currently a Google Developer Expert (AI Role). I look forward to helping enterprises implement Artificial Intelligence technologies to solve problems.&lt;/p&gt;

&lt;p&gt;If this article was helpful to you, please give it a clap on Medium and follow my personal account so you can read my future articles at any time. You are welcome to leave comments on my LinkedIn to provide feedback and discuss AI-related topics with me. I look forward to being of help to everyone!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8oti6it5wd2xc194jx5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8oti6it5wd2xc194jx5t.png" alt="Portrait photo of Simon Liu, AI Solutions Expert and Google GenAI Developer Expert" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My Personal Website:&lt;br&gt;&lt;br&gt;
&lt;a href="https://simonliuyuwei.my.canva.site/link-in-bio" rel="noopener noreferrer"&gt;https://simonliuyuwei.my.canva.site/link-in-bio&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gemma</category>
      <category>agents</category>
      <category>tpu</category>
      <category>google</category>
    </item>
  </channel>
</rss>
