<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: 冯键（FENG JIAN）</title>
    <description>The latest articles on DEV Community by 冯键（FENG JIAN） (@feng_jian_f8d0a9834be).</description>
    <link>https://dev.to/feng_jian_f8d0a9834be</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3048394%2F90f9ec70-344e-4224-9efb-b0fa1cab33df.jpg</url>
      <title>DEV Community: 冯键（FENG JIAN）</title>
      <link>https://dev.to/feng_jian_f8d0a9834be</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/feng_jian_f8d0a9834be"/>
    <language>en</language>
    <item>
      <title>🚀 Build an AI-Powered Test Automation Platform from Scratch</title>
      <dc:creator>冯键（FENG JIAN）</dc:creator>
      <pubDate>Mon, 04 Aug 2025 16:47:46 +0000</pubDate>
      <link>https://dev.to/feng_jian_f8d0a9834be/build-an-ai-powered-test-automation-platform-from-scratch-4p7i</link>
      <guid>https://dev.to/feng_jian_f8d0a9834be/build-an-ai-powered-test-automation-platform-from-scratch-4p7i</guid>
      <description>&lt;h2&gt;
  
  
  📌 Introduction
&lt;/h2&gt;

&lt;p&gt;Over the past decade, test automation has evolved significantly — from &lt;strong&gt;handwritten scripts&lt;/strong&gt;, to the &lt;strong&gt;Page Object Model&lt;/strong&gt;, and now to &lt;strong&gt;no-code platforms&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But &lt;strong&gt;element selection&lt;/strong&gt; remains the most tedious and brittle part of UI testing.&lt;/p&gt;

&lt;p&gt;Frameworks like &lt;strong&gt;Selenium&lt;/strong&gt;, &lt;strong&gt;Playwright&lt;/strong&gt;, or &lt;strong&gt;Puppeteer&lt;/strong&gt; are great at automating actions like &lt;code&gt;.click()&lt;/code&gt; or &lt;code&gt;.type()&lt;/code&gt;. However, they still rely heavily on &lt;strong&gt;manual selectors&lt;/strong&gt; (XPath or CSS) to locate elements — and that’s where the pain lies.&lt;/p&gt;

&lt;p&gt;Thanks to &lt;strong&gt;AI&lt;/strong&gt; — especially &lt;strong&gt;large language models (LLMs)&lt;/strong&gt; — we now have the ability to separate:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What to do&lt;/strong&gt; (natural language instruction)&lt;br&gt;
from&lt;br&gt;
&lt;strong&gt;Where to do it&lt;/strong&gt; (element selector)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In this tutorial, you’ll build a working solution using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🧠 &lt;strong&gt;Talk2Dom&lt;/strong&gt; — to convert natural language into precise element selectors&lt;/li&gt;
&lt;li&gt;🧪 &lt;strong&gt;Selenium&lt;/strong&gt; — to execute actions in a real browser&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This means you can now write a test like:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Find the login button”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;…and watch it execute automatically.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ Quickstart: One-Click Setup with Docker Compose
&lt;/h2&gt;

&lt;p&gt;After cloning the &lt;a href="https://github.com/itbanque/talk2dom" rel="noopener noreferrer"&gt;Talk2Dom repository&lt;/a&gt;, everything is pre-configured.&lt;/p&gt;

&lt;p&gt;Simply run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This starts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Talk2Dom backend (API server)&lt;/li&gt;
&lt;li&gt;A database that stores all projects and API tokens&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once up and running, you’re ready to test!&lt;/p&gt;




&lt;h2&gt;
  
  
  🔁 End-to-End Test Script (&lt;code&gt;e2e.py&lt;/code&gt;)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;selenium&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;webdriver&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;selenium.webdriver.common.by&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;By&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;selenium.webdriver.common.desired_capabilities&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;DesiredCapabilities&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;selenium.webdriver.remote.remote_connection&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RemoteConnection&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;

&lt;span class="c1"&gt;# 1. Start Selenium Chrome
&lt;/span&gt;&lt;span class="n"&gt;driver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;webdriver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Remote&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;command_executor&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;RemoteConnection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;http://localhost:4444/wd/hub&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;resolve_ip&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;desired_capabilities&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;DesiredCapabilities&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CHROME&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://example.com&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Wait for the page to load
&lt;/span&gt;&lt;span class="n"&gt;html&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_source&lt;/span&gt;

&lt;span class="c1"&gt;# 2. Call Talk2Dom API to locate the element
&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;x-api-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your_api_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;x-project-id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your_project_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http://localhost:8000/api/v1/inference/locator&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;instruction&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;find the login button&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;html&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;html&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;current_url&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;selector&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;selector_value&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# 3. Use Selenium to perform the action
&lt;/span&gt;&lt;span class="n"&gt;el&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find_element&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;By&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CSS_SELECTOR&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;selector&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;el&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;click&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;✅ Test completed.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;driver&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;quit&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  🔐 Security Best Practices
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;✅ Deploy behind an internal firewall (optional)&lt;/li&gt;
&lt;li&gt;✅ Limit HTML payload size to guard against oversized or malicious requests&lt;/li&gt;
&lt;li&gt;✅ Use API key authentication if the service is public&lt;/li&gt;
&lt;/ul&gt;
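
&lt;p&gt;One practical way to apply the payload limit above is to clamp the HTML on the client before calling the API. A minimal sketch — the 500&amp;nbsp;KB limit is an illustrative value, not a Talk2Dom default:&lt;/p&gt;

```python
MAX_HTML_BYTES = 500_000  # illustrative limit, not a Talk2Dom default


def clamp_html(html: str, max_bytes: int = MAX_HTML_BYTES) -> str:
    """Truncate an HTML string so its UTF-8 encoding fits within max_bytes."""
    encoded = html.encode("utf-8")
    if len(encoded) > max_bytes:
        # errors="ignore" drops a multi-byte character cut at the boundary
        return encoded[:max_bytes].decode("utf-8", errors="ignore")
    return html
```

&lt;p&gt;Pass &lt;code&gt;clamp_html(driver.page_source)&lt;/code&gt; in the request body instead of the raw page source.&lt;/p&gt;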




&lt;h2&gt;
  
  
  📊 Conclusion
&lt;/h2&gt;

&lt;p&gt;By combining:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Talk2Dom&lt;/strong&gt; for natural language → selector translation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Selenium&lt;/strong&gt; for real browser interaction&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…we decouple &lt;em&gt;what&lt;/em&gt; you want to test from &lt;em&gt;how&lt;/em&gt; it’s implemented.&lt;/p&gt;

&lt;p&gt;This clean separation leads to a powerful and lightweight AI-assisted test automation workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  🔥 Benefits
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Boosts developer productivity&lt;/li&gt;
&lt;li&gt;Enables QA teams to write tests in plain English&lt;/li&gt;
&lt;li&gt;Lays the groundwork for a low-code automation platform&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Fine-Tuning Whisper for Japanese-to-Chinese Speech Translation — A Lightweight Approach</title>
      <dc:creator>冯键（FENG JIAN）</dc:creator>
      <pubDate>Sun, 15 Jun 2025 18:49:13 +0000</pubDate>
      <link>https://dev.to/feng_jian_f8d0a9834be/fine-tuning-whisper-for-japanese-to-chinese-speech-translation-a-lightweight-approach-32if</link>
      <guid>https://dev.to/feng_jian_f8d0a9834be/fine-tuning-whisper-for-japanese-to-chinese-speech-translation-a-lightweight-approach-32if</guid>
      <description>&lt;p&gt;OpenAI’s Whisper is well known for its robust multilingual transcription and English-targeted translation. But what if we want to directly translate Japanese speech into Chinese? In this project, I adapted Whisper’s &lt;strong&gt;tiny&lt;/strong&gt; and &lt;strong&gt;base&lt;/strong&gt; models to perform &lt;strong&gt;Japanese-to-Chinese speech translation&lt;/strong&gt; — a task Whisper doesn’t support out of the box.&lt;/p&gt;

&lt;h2&gt;
  
  
  🎯 Motivation
&lt;/h2&gt;

&lt;p&gt;Japanese media like anime, drama, and films are hugely popular among Chinese-speaking audiences. However, most existing translation pipelines either route through English or require large GPU resources.&lt;/p&gt;

&lt;p&gt;I wanted to explore a &lt;strong&gt;low-resource&lt;/strong&gt; solution that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Translates directly from Japanese to Chinese&lt;/li&gt;
&lt;li&gt;Can run on CPU-only or edge devices&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  📁 Dataset: ScreenTalk-JA2ZH
&lt;/h2&gt;

&lt;p&gt;To fine-tune Whisper, I created a domain-specific dataset of &lt;strong&gt;Japanese audiovisual content with aligned Chinese subtitles&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🎬 Domains: Japanese films, TV dramas, anime&lt;/li&gt;
&lt;li&gt;⏱️ Size: 582h train / 73h val / 73h test&lt;/li&gt;
&lt;li&gt;📎 Format: 16kHz mono WAV + Simplified Chinese subtitles&lt;/li&gt;
&lt;li&gt;✅ Sentence-level alignment, cleaned and manually verified&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🔗 A smaller version is publicly available:&lt;br&gt;
👉 &lt;a href="https://huggingface.co/datasets/Itbanque/ScreenTalk_JA2ZH-XS" rel="noopener noreferrer"&gt;ScreenTalk-JA2ZH-XS on Hugging Face&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ Fine-Tuning Setup
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Hyperparameter&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Epochs&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Learning rate&lt;/td&gt;
&lt;td&gt;3e-4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Precision&lt;/td&gt;
&lt;td&gt;fp16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Batch size (tiny/base)&lt;/td&gt;
&lt;td&gt;96 / 64&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Eval strategy&lt;/td&gt;
&lt;td&gt;Step-based&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Early stopping&lt;/td&gt;
&lt;td&gt;Patience = 5&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
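
&lt;p&gt;The table above maps directly onto Hugging Face training arguments. A sketch assuming the &lt;code&gt;transformers&lt;/code&gt; Seq2SeqTrainer stack — &lt;code&gt;output_dir&lt;/code&gt; and &lt;code&gt;eval_steps&lt;/code&gt; are placeholder values, not settings from the original run:&lt;/p&gt;

```python
# Config sketch mirroring the hyperparameter table; placeholder values noted.
from transformers import EarlyStoppingCallback, Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="whisper-ja-zh",       # placeholder
    num_train_epochs=20,
    learning_rate=3e-4,
    fp16=True,
    per_device_train_batch_size=64,   # 96 for tiny, 64 for base
    eval_strategy="steps",            # "evaluation_strategy" on older versions
    eval_steps=500,                   # placeholder
    save_strategy="steps",
    load_best_model_at_end=True,      # required for early stopping
    metric_for_best_model="bleu",
    greater_is_better=True,
)
early_stopping = EarlyStoppingCallback(early_stopping_patience=5)
```

&lt;p&gt;Both &lt;code&gt;args&lt;/code&gt; and the callback would then be handed to a &lt;code&gt;Seq2SeqTrainer&lt;/code&gt; along with the model, dataset, and a BLEU &lt;code&gt;compute_metrics&lt;/code&gt; function.&lt;/p&gt;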

&lt;p&gt;We fine-tuned both &lt;strong&gt;Whisper tiny&lt;/strong&gt; and &lt;strong&gt;Whisper base&lt;/strong&gt; using the same training pipeline.&lt;/p&gt;

&lt;h2&gt;
  
  
  📈 Results
&lt;/h2&gt;

&lt;h3&gt;
  
  
  🔸 Whisper Tiny
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;✅ Lightweight, fast&lt;/li&gt;
&lt;li&gt;❌ BLEU ≈ 0.60&lt;/li&gt;
&lt;li&gt;❌ Prone to overfitting and semantic drift in long/complex speech&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  🔹 Whisper Base
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;✅ BLEU = &lt;strong&gt;0.7179&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;✅ Stronger generalization and fluency&lt;/li&gt;
&lt;li&gt;✅ Suitable for CPU deployment (edge ready)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;👉 BLEU scores steadily improved even when token-level loss increased — highlighting that loss is not always a good proxy for translation quality.&lt;/p&gt;
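
&lt;p&gt;For intuition on what BLEU measures: sentence-level BLEU is roughly a brevity-penalized geometric mean of modified n-gram precisions, which rewards surface overlap rather than token-level likelihood. A toy pure-Python sketch — real evaluations should use sacreBLEU, not this simplified version:&lt;/p&gt;

```python
import math
from collections import Counter


def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def sentence_bleu(candidate, reference, max_n=2):
    """Toy BLEU: brevity penalty times geometric mean of n-gram precisions."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # Modified precision: each reference n-gram can be matched at most
        # as many times as it appears in the reference.
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # floor avoids log(0)
    if len(candidate) > len(reference):
        brevity = 1.0
    else:
        brevity = math.exp(1 - len(reference) / max(len(candidate), 1))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

&lt;p&gt;A model can keep lowering cross-entropy on frequent tokens while its n-gram overlap with references improves independently, which is one way loss and BLEU can diverge.&lt;/p&gt;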

&lt;h2&gt;
  
  
  🤔 Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Whisper &lt;strong&gt;can&lt;/strong&gt; be adapted for &lt;strong&gt;non-English language pairs&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Domain-specific data (like anime or TV) greatly improves model performance&lt;/li&gt;
&lt;li&gt;Model capacity matters: Tiny is efficient but not enough for expressive, noisy domains&lt;/li&gt;
&lt;li&gt;BLEU is limited — future work should include COMET, chrF, or human evals&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🔮 What’s Next?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Fine-tune &lt;strong&gt;larger Whisper models&lt;/strong&gt; (medium, large)
&lt;/li&gt;
&lt;li&gt;Try &lt;strong&gt;LoRA&lt;/strong&gt; or other parameter-efficient tuning techniques
&lt;/li&gt;
&lt;li&gt;Expand dataset to cover conversational, technical, and news speech&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🚀 Try It Out
&lt;/h2&gt;

&lt;p&gt;🧠 Models available on Hugging Face:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/Itbanque/whisper-ja-zh-tiny" rel="noopener noreferrer"&gt;Whisper Tiny (JA→ZH)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/Itbanque/whisper-ja-zh-base" rel="noopener noreferrer"&gt;Whisper Base (JA→ZH)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
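
&lt;p&gt;Either checkpoint can be loaded with the standard &lt;code&gt;transformers&lt;/code&gt; ASR pipeline. A minimal sketch — &lt;code&gt;clip.wav&lt;/code&gt; is a placeholder path, and the weights download from Hugging Face on first use:&lt;/p&gt;

```python
# Sketch: run the fine-tuned JA→ZH checkpoint locally.
# "clip.wav" is a placeholder; supply a 16 kHz mono WAV as in the training data.
from transformers import pipeline

translate = pipeline(
    "automatic-speech-recognition",
    model="Itbanque/whisper-ja-zh-base",
)
result = translate("clip.wav")
print(result["text"])  # Simplified Chinese translation of the Japanese audio
```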




&lt;p&gt;Thanks to the open-source Whisper community and everyone working to break language barriers with AI.&lt;/p&gt;

&lt;p&gt;👉 Follow me for more multilingual AI experiments!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
