<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: srini047</title>
    <description>The latest articles on DEV Community by srini047 (@srini047).</description>
    <link>https://dev.to/srini047</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F698994%2F67e48892-5297-4825-8d83-5b68693ed242.png</url>
      <title>DEV Community: srini047</title>
      <link>https://dev.to/srini047</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/srini047"/>
    <language>en</language>
    <item>
      <title>Skills-i8n: When your Skills become i8n friendly ☸️</title>
      <dc:creator>srini047</dc:creator>
      <pubDate>Sat, 14 Mar 2026 08:55:31 +0000</pubDate>
      <link>https://dev.to/srini047/skills-i8n-when-your-skills-become-multilingual-4l0h</link>
      <guid>https://dev.to/srini047/skills-i8n-when-your-skills-become-multilingual-4l0h</guid>
      <description>&lt;h2&gt;
  
  
  Introduction 🫥
&lt;/h2&gt;

&lt;p&gt;The most recent addition to the current generation of Agentic AI is &lt;a href="https://skills.sh" rel="noopener noreferrer"&gt;Skills&lt;/a&gt;. Rather than a formal definition, think of an analogy: skills give humans an edge over others, and until now agents had nothing equivalent. The space is still nascent, with most teams and organizations only now creating the skills their products require. That makes this the right time to bring the idea of &lt;a href="https://phrase.com/blog/posts/i18n-a-simple-definition/#what-is-internationalization-i18n" rel="noopener noreferrer"&gt;i18n&lt;/a&gt; to Skills. Let's look at both the need and the solution in depth.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why i18n for skills❓
&lt;/h2&gt;

&lt;p&gt;Let's look at what you will miss if skills aren't internationalized:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Agent skills are instructions&lt;/strong&gt;: if your team or users can't read the default language, the skill is useless regardless of how well it's written.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language Unification&lt;/strong&gt; — an agent skill in Japanese, Hindi, or Arabic unlocks entire engineering communities that would otherwise skip it entirely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical details survive translation&lt;/strong&gt; — with a proper glossary, terms like MCP, SKILL.md, and API stay intact while the surrounding instructions become locally natural.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Skills out of sync without i18n infrastructure&lt;/strong&gt; — When the source skill updates, translated copies fall out of sync. A standardised tool makes re-translation a one-command operation, not a manual effort.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent ecosystems are Global 🌍&lt;/strong&gt; — Claude, Cursor, and other agents are deployed worldwide. Skills that can't localise are a ceiling on how far an agent platform can actually reach.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where the &lt;strong&gt;&lt;a href="https://github.com/srini047/skills-i8n/tree/master" rel="noopener noreferrer"&gt;skills-i8n&lt;/a&gt;&lt;/strong&gt; tool comes to the rescue. It fits into the current skills framework without any fancy plugins or heavy dependencies. Get your free API key at &lt;a href="https://lingo.dev/" rel="noopener noreferrer"&gt;lingo.dev&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Features of the tool ✨
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Async concurrency with semaphore limit&lt;/strong&gt; — all skills translate in parallel using &lt;code&gt;asyncio&lt;/code&gt;, bounded by &lt;code&gt;--concurrency&lt;/code&gt; (default 3, max 10). A 10-skill repo doesn't make 10 simultaneous API calls — it makes 3 at a time, preventing rate-limit errors while still being significantly faster than sequential translation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One request, full context&lt;/strong&gt; — name, description, and the entire body travel as a single flat payload. The Lingo.dev engine applies Glossary, Brand Voice, and Instructions holistically across the whole skill, not on separate strings.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero SDK dependency&lt;/strong&gt; — talks directly to &lt;code&gt;api.lingo.dev&lt;/code&gt; via pure &lt;code&gt;httpx&lt;/code&gt;. No third-party wrapper that can silently break or lag behind the real API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repo-aware, incremental by default&lt;/strong&gt; — discovers skills across all known agent layouts (.claude/skills/, skills/*/, flat), skips already-translated files, and copies companion scripts and assets untouched. Re-run it safely after any source skill update.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Engine ID unlocks production-grade localization&lt;/strong&gt; — pass &lt;code&gt;--engine-id eng_xxx&lt;/code&gt; and every translation inherits your org's configured glossary, brand voice, and per-locale LLM selection — all without touching the code.&lt;/li&gt;
&lt;/ul&gt;
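The semaphore-bounded concurrency described above can be sketched in plain &lt;code&gt;asyncio&lt;/code&gt;. This is only a minimal illustration, not the tool's actual code: &lt;code&gt;translate_skill&lt;/code&gt; here is a hypothetical stand-in for the real API call to Lingo.dev.

```python
import asyncio

async def translate_skill(name: str) -> str:
    # Stand-in for the real translation request (illustration only).
    await asyncio.sleep(0.01)
    return f"{name}: translated"

async def translate_all(skills: list[str], concurrency: int = 3) -> list[str]:
    # The semaphore caps how many translations run at once, so a
    # 10-skill repo never fires 10 simultaneous API calls.
    sem = asyncio.Semaphore(concurrency)

    async def bounded(name: str) -> str:
        async with sem:
            return await translate_skill(name)

    # gather() preserves input order in its results.
    return await asyncio.gather(*(bounded(s) for s in skills))

results = asyncio.run(
    translate_all(["code-review", "pdf-processing", "readme-writer"])
)
```

All skills are scheduled up front, but at most three awaits are inside the semaphore at any moment, which is what keeps the tool under the provider's rate limits.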

&lt;h2&gt;
  
  
  Setup 🔌
&lt;/h2&gt;

&lt;p&gt;Follow these steps to get your first translation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install uv&lt;/span&gt;
curl &lt;span class="nt"&gt;-LsSf&lt;/span&gt; https://astral.sh/uv/install.sh | sh

&lt;span class="c"&gt;# Clone repository&lt;/span&gt;
git clone https://github.com/srini047/skills-i18n
&lt;span class="nb"&gt;cd &lt;/span&gt;skills-i18n

&lt;span class="c"&gt;# Install dependencies&lt;/span&gt;
uv &lt;span class="nb"&gt;sync&lt;/span&gt;

&lt;span class="c"&gt;# Set environment variables&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env
&lt;span class="c"&gt;# OR&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;LINGODOTDEV_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;your_key_here
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🎉 You have successfully installed the tool and are ready to proceed with your first translation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workflow 🏗️
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2miumwea8kos4wlqp1e7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2miumwea8kos4wlqp1e7.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tool usage 🛠️
&lt;/h2&gt;

&lt;p&gt;Now that everything is set up, let's see what this tool has to offer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;➜  skills-i18n git:(master) uv run skills-i8n --help

 Usage: skills-i8n [OPTIONS] COMMAND [ARGS]...                                                                                    

 🌐 i8n for AI Agent Skills — powered by Lingo.dev                                                                                

╭─ Options ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --help          Show this message and exit.                                                                                    │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Commands ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ translate     Translate all SKILL.md files in a skills repository.                                                             │
│ scan          Scan a skills repository and list discovered SKILL.md files.                                                     │
│ list-locales  List all locales supported by Lingo.dev.                                                                         │
│ detect        Detect the source language of a SKILL.md file.                                                                   │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Translate 💬
&lt;/h3&gt;

&lt;p&gt;This is the core command of the tool: it translates each &lt;code&gt;SKILL.md&lt;/code&gt; from the detected source language into the target language, preserving the directory structure.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;➜  skills-i18n git:(master) uv run skills-i8n translate ./example_skills de 
  ____  _    _ _ _           _  ___        
 / ___|| | _(_) | |___      (_)( _ ) _ __  
 \___ \| |/ / | | / __|_____| |/ _ \| '_ \ 
  ___) |   &amp;lt;| | | \__ \_____| | (_) | | | |
 |____/|_|\_\_|_|_|___/     |_|\___/|_| |_|

Skill i18n · powered by Lingo.dev

╭──────────────────────────────────────────────── 🌐 skills-i8n Translation Job ─────────────────────────────────────────────────╮
│ Repository: /Users/apple/Documents/skills-i18n/example_skills                                                                  │
│ Target:     de · German                                                                                                        │
│ Source:     en                                                                                                                 │
│ Output:     /Users/apple/Documents/skills-i18n/example_skills/i8n                                                              │
│ Engine:     eng_L7DVg4T8pYoYu8BJ                                                                                               │
│ Threads:    3                                                                                                                  │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Found 3 skill(s) to translate:

  • code-review (code-review/SKILL.md)
  • pdf-processing (pdf-processing/SKILL.md)
  • readme-writer (readme-writer/SKILL.md)

  ⏭ readme-writer ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00

            📊 Translation Results — de (German)            
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Skill          ┃ Status     ┃ Output Path                ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ code-review    │ ⏭  skipped │ de/code-review/SKILL.md    │
│ pdf-processing │ ⏭  skipped │ de/pdf-processing/SKILL.md │
│ readme-writer  │ ⏭  skipped │ de/readme-writer/SKILL.md  │
└────────────────┴────────────┴────────────────────────────┘

✓ 3 succeeded
Output: /Users/apple/Documents/skills-i18n/example_skills/i8n
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xz4zonoglk1kppu7e84.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xz4zonoglk1kppu7e84.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Scan 🔍
&lt;/h3&gt;

&lt;p&gt;Before translating, it's a good idea to run the scan command, which lists the relative path of each &lt;code&gt;SKILL.md&lt;/code&gt; file that would be translated. Its tabular output gives you a quick reference for deciding beforehand whether any modifications are required.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;uv run skills-i8n scan ./example_skills        
  ____  _    _ _ _           _  ___        
 / ___|| | _(_) | |___      (_)( _ ) _ __  
 \___ \| |/ / | | / __|_____| |/ _ \| '_ \ 
  ___) |   &amp;lt;| | | \__ \_____| | (_) | | | |
 |____/|_|\_\_|_|_|___/     |_|\___/|_| |_|

Skill i18n · powered by Lingo.dev

                                                   📁 Skills in example_skills                                                    
┏━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Name           ┃ Description                                                            ┃ License    ┃ Path                    ┃
┡━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ code-review    │ Performs thorough code reviews covering bugs, security                 │ Apache-2.0 │ code-review/SKILL.md    │
│                │ vulnerabilities, performa…                                             │            │                         │
│ pdf-processing │ Extract text and tables from PDF files, fill PDF forms, merge and      │ Apache-2.0 │ pdf-processing/SKILL.md │
│                │ split document…                                                        │            │                         │
│ readme-writer  │ Creates and writes professional README.md files for software projects. │ Apache-2.0 │ readme-writer/SKILL.md  │
│                │ Use when …                                                             │            │                         │
└────────────────┴────────────────────────────────────────────────────────────────────────┴────────────┴─────────────────────────┘

3 skill(s) found.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  List Locales📝
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;lingo.dev&lt;/code&gt; currently supports a wide array of locales, and I have ported most of them, which means some are still pending. Feel free to reach out to me if a target language you are looking for is missing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;➜  skills-i18n git:(master) ✗ uv run skills-i8n list-locales                 
  ____  _    _ _ _           _  ___        
 / ___|| | _(_) | |___      (_)( _ ) _ __  
 \___ \| |/ / | | / __|_____| |/ _ \| '_ \ 
  ___) |   &amp;lt;| | | \__ \_____| | (_) | | | |
 |____/|_|\_\_|_|_|___/     |_|\___/|_| |_|

Skill i18n · powered by Lingo.dev

      🌍 Supported Locales (83+)      
┏━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Code       ┃ Language              ┃
┡━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━┩
│ af         │ Afrikaans             │
│ sq         │ Albanian              │
│ am         │ Amharic               │
│ ar         │ Arabic                │
│            │                       │
│ ...        │ ...                   │
│            │                       │
│ yo         │ Yoruba                │
│ zu         │ Zulu                  │
└────────────┴───────────────────────┘

83 locale(s) shown.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Detect 🕵️
&lt;/h3&gt;

&lt;p&gt;You can also detect the language of a &lt;code&gt;SKILL.md&lt;/code&gt; file using the detect command. This helps you pick the right target language and saves time debugging why a translation didn't succeed 😅&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight console"&gt;&lt;code&gt;&lt;span class="go"&gt;➜  skills-i18n git:(master) ✗ uv run skills-i8n detect ./example_skills/readme-writer/SKILL.md 
  ____  _    _ _ _           _  ___        
 / ___|| | _(_) | |___      (_)( _ ) _ __  
 \___ \| |/ / | | / __|_____| |/ _ \| '_ \ 
  ___) |   &amp;lt;| | | \__ \_____| | (_) | | | |
 |____/|_|\_\_|_|_|___/     |_|\___/|_| |_|

Skill i18n · powered by Lingo.dev

╭──────────────────────────────────────────────────── 🔍 Language Detection ─────────────────────────────────────────────────────╮
│ Skill:           readme-writer                                                                                                 │
│ Detected locale: en — English                                                                                                  │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Traversing the code 🧑‍💻
&lt;/h2&gt;

&lt;p&gt;Setting line-by-line code aside, here is a summary of the overall repository structure and what each part achieves:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── example_skills
│   ├── i8n                         # Directory containing translated SKILL.md files (organized by locale)
│   │   ├── de
│   │   │   ├── code-review
│   │   │   │   └── SKILL.md
│   │   │   ├── pdf-processing
│   │   │   │   └── SKILL.md
│   │   │   └── readme-writer
│   │   │       └── SKILL.md
│   │   └── es
│   │       ├── code-review
│   │       │   └── SKILL.md
│   │       ├── pdf-processing
│   │       │   └── SKILL.md
│   │       └── readme-writer
│   │           └── SKILL.md
│   ├── pdf-processing
│   │   └── SKILL.md
│   └── readme-writer
│       └── SKILL.md
├── LICENSE
├── pyproject.toml                  # Project configuration and dependencies
├── README.md
├── skills_i8n
│   ├── __init__.py
│   ├── cli.py                      # Command-line interface for the tool
│   ├── parser.py                   # Parses SKILL.md files into structured data
│   ├── repo.py                     # Orchestrates skill discovery and parsing
│   └── translator.py               # Translation module with lingo.dev interaction
└── uv.lock                         # Lock file for dependencies
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
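As an illustration of the job &lt;code&gt;parser.py&lt;/code&gt; performs, a &lt;code&gt;SKILL.md&lt;/code&gt; file typically opens with a YAML-style frontmatter block carrying the skill's name and description. Here is a minimal, stdlib-only sketch of extracting those fields; the field names and the sample file are assumptions, and the real parser is likely more robust:

```python
def parse_skill(text: str) -> dict:
    """Split a SKILL.md file into frontmatter fields and a body."""
    meta: dict = {}
    body = text
    if text.startswith("---"):
        # Frontmatter is delimited by a pair of '---' lines.
        _, frontmatter, body = text.split("---", 2)
        for line in frontmatter.strip().splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
    return {"meta": meta, "body": body.strip()}

# Hypothetical SKILL.md content for demonstration.
sample = """---
name: readme-writer
description: Creates professional README.md files.
---
Use this skill when a repo needs a README.
"""
parsed = parse_skill(sample)
```

Separating metadata from the body like this is what lets the translator send name, description, and body as one flat payload while leaving companion assets untouched.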



&lt;h2&gt;
  
  
  Next steps
&lt;/h2&gt;

&lt;p&gt;I don't want to call this a &lt;code&gt;Conclusion&lt;/code&gt;; instead, your next step should be to give the tool a try. To improve the tool, I have a few ideas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Make it part of the RFC standard of Agent Skills.&lt;/li&gt;
&lt;li&gt;Integrate an AI reviewer to validate the output of Lingo.dev's localization engine and report confidence statistics for the rendered translations.&lt;/li&gt;
&lt;li&gt;Integrate as part of the CI/CD pipeline and also make it available as a git hook.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since we both have next steps now, let's sign off and get to work on them. Until then, this is &lt;a href="https://sriniketh.vercel.app" rel="noopener noreferrer"&gt;Sriniketh J | srini047&lt;/a&gt; signing off. I hope you had a fruitful time reading this article.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>nlp</category>
      <category>python</category>
    </item>
    <item>
      <title>GitHub Copilot: Assistant for my current Python workflow</title>
      <dc:creator>srini047</dc:creator>
      <pubDate>Fri, 06 Mar 2026 17:15:37 +0000</pubDate>
      <link>https://dev.to/srini047/github-copilot-assistant-for-my-current-python-workflow-2phm</link>
      <guid>https://dev.to/srini047/github-copilot-assistant-for-my-current-python-workflow-2phm</guid>
      <description>&lt;p&gt;I have split my development workflow into three phases and will explain each of them in detail:&lt;/p&gt;

&lt;h2&gt;
  
  
  Development
&lt;/h2&gt;

&lt;p&gt;This is the primary and most important phase for any developer. This can be further categorized into bugs and features, but for simplicity, let us stick to a common development workflow.&lt;br&gt;
AI needs sufficient and precise context to produce the best results. In our case, we must provide the feature specification or the bug details (ideally a reproduction and collected logs) and feed those details to the assistant. Brownie points if you attach only the relevant functions instead of the entire codebase; the results will be more fruitful.&lt;/p&gt;

&lt;p&gt;For feature implementation in particular, sharing a reference (ideally a similar flow in the codebase) gives your assistant something concrete to follow instead of hallucinating its own implementation.&lt;/p&gt;

&lt;p&gt;Sharing an example prompt that I used for a small feature implementation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Add count with filtering operations to the QdrantDocumentStore

`count_documents_by_filter`: count documents matching a filter
`get_metadata_fields_info`: get metadata field names and their types
`get_metadata_field_min_max`: get min/max values for numeric/date fields
`count_unique_metadata_by_filter`: count unique values per metadata field with filtering
`get_metadata_field_unique_values`: get paginated unique values for a metadata field  &amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt; Detailed explanation about each function

Both sync and async versions. Also, add integration tests for all new operations (sync and async) &amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt; Testing

Check `class WeaviateDocumentStore()` for reference &amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt;&amp;lt; Provide sample reference
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;One interesting thing I have encountered concerns code formatting and static type checking. Whichever model you choose, the output will follow the style the model was trained on. The solution is to provide your &lt;code&gt;pyproject.toml&lt;/code&gt;, which defines the ruff, lint, and static-type-checking options.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Use the following directions to format the code:

[tool.hatch.envs.default.scripts]
[tool.hatch.envs.test.scripts]
[tool.ruff.lint]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Best practice is to use this prompt after code generation, so that you preserve context and also allow the model to focus more on logic rather than cosmetic changes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Documentation
&lt;/h2&gt;

&lt;p&gt;This phase is the easiest and can save a ton of your time if utilized properly. Instead of writing from scratch, you can ask the assistant to generate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docstrings&lt;/li&gt;
&lt;li&gt;API documentation&lt;/li&gt;
&lt;li&gt;Usage examples&lt;/li&gt;
&lt;li&gt;Release notes
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Write a changelog entry for this feature.

Feature: metadata filtering operations in QdrantDocumentStore

Include:
- summary
- new APIs added
- backward compatibility notes
- sample minimal usage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This approach keeps documentation consistent, structured, and updated alongside code changes. Most importantly, you can also generate documentation for an older code base, which is a golden asset, and not just for new or updated code.&lt;/p&gt;
&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;Testing is another area where AI assistants excel and push code to its limits. Instead of manually writing test suites and cases, you can ask the assistant to generate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Unit tests&lt;/li&gt;
&lt;li&gt;Integration tests&lt;/li&gt;
&lt;li&gt;Edge cases&lt;/li&gt;
&lt;li&gt;Mock APIs&lt;/li&gt;
&lt;li&gt;Sync/Async-based testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sample prompt:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Write test cases for the APIs:

count_documents_by_filter
get_metadata_fields_info
get_metadata_field_min_max
count_unique_metadata_by_filter
get_metadata_field_unique_values

- cover both sync and async versions
- include realistic metadata examples
- validate correct filtering behavior
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Using AI for testing ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;better code coverage even before hitting the codecov tools&lt;/li&gt;
&lt;li&gt;faster test case generation&lt;/li&gt;
&lt;li&gt;fewer overlooked edge cases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I followed this same request-response loop with the assistant in an example session while adding support for a small feature.&lt;/p&gt;





&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Always remember: you are the reviewer of the assistant's responses. You can depend on AI, but you shouldn't become dependent on it.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>githubcopilot</category>
      <category>ai</category>
      <category>python</category>
      <category>tooling</category>
    </item>
    <item>
      <title>New year🗓️ New Portfolio🌐 Re🔁Fresh start⏭️</title>
      <dc:creator>srini047</dc:creator>
      <pubDate>Sat, 17 Jan 2026 17:31:40 +0000</pubDate>
      <link>https://dev.to/srini047/re-fresh-17ip</link>
      <guid>https://dev.to/srini047/re-fresh-17ip</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://dev.to/challenges/new-year-new-you-google-ai-2025-12-31"&gt;New Year, New You Portfolio Challenge Presented by Google AI&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  About Me
&lt;/h2&gt;

&lt;p&gt;I'm a passionate software engineer and machine learning enthusiast, currently working as an SWE at Arrcus Networks. I previously worked at Zoho on the audio ML side of things, and as a Software Developer Intern at naas.ai.&lt;/p&gt;

&lt;p&gt;With a strong foundation in data science, networking, and full-stack development, I enjoy building intelligent systems and scalable web applications. I'm a lifelong learner who believes in practical implementation through building projects and participating in hackathons.&lt;/p&gt;

&lt;p&gt;In my free time, I contribute to open-source projects, write technical blogs, and explore deep learning and web development. I'm currently learning German and love playing football whenever I get the chance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Portfolio
&lt;/h2&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__cloud-run"&gt;
  &lt;iframe height="600px" src="https://portfolio-48663427247.asia-south1.run.app"&gt;
  &lt;/iframe&gt;
&lt;/div&gt;




&lt;h2&gt;
  
  
  How I Built It
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Next.js&lt;/strong&gt; — Structured the app with a fast, scalable React framework&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Antigravity&lt;/strong&gt; — Orchestrated backend logic and agent workflows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini 2.5 Flash (AI Studio)&lt;/strong&gt; — Powered real-time AI reasoning and responses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ElevenLabs Agent&lt;/strong&gt; — Enabled natural, expressive voice interactions. LLM using Gemini 2.5 Flash&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Google Cloud Run&lt;/strong&gt; — Deployed and scaled the app serverlessly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tailwind + shadcn&lt;/strong&gt; — Designed a clean, modern UI with reusable components&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Email.js&lt;/strong&gt; - Seamless contact form page integration with just a few steps.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What I'm Most Proud Of
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;First and foremost, before any statistical data point: I wanted a personal change across different aspects of my lifestyle, and hence a makeover of my portfolio.&lt;/li&gt;
&lt;li&gt;Achieved &lt;strong&gt;100/100 Lighthouse Performance&lt;/strong&gt; on multiple factors of Next.js production build&lt;/li&gt;
&lt;li&gt;Recorded &lt;strong&gt;0.3s FCP&lt;/strong&gt; and &lt;strong&gt;0.4s LCP&lt;/strong&gt;, far exceeding Core Web Vitals targets&lt;/li&gt;
&lt;li&gt;Maintained &lt;strong&gt;0 ms Total Blocking Time&lt;/strong&gt; with near-zero main-thread work&lt;/li&gt;
&lt;li&gt;Kept &lt;strong&gt;Cumulative Layout Shift under 0.005&lt;/strong&gt;, ensuring visual stability&lt;/li&gt;
&lt;li&gt;Scored &lt;strong&gt;100 SEO&lt;/strong&gt; and &lt;strong&gt;96 Best Practices&lt;/strong&gt;, meeting production-grade standards&lt;/li&gt;
&lt;li&gt;Improved performance from &lt;strong&gt;83 → 100&lt;/strong&gt; by optimizing images, JS payloads, and caching&lt;/li&gt;
&lt;li&gt;Delivered a fully AI-powered app while maintaining a &lt;strong&gt;sub-second load time&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;React:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo882lfduyyo1j2eyl4ei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo882lfduyyo1j2eyl4ei.png" alt="Lighthouse-v1-React"&gt;&lt;/a&gt;&lt;br&gt;
NextJS:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym5jhmygkt3q9s8d26j0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fym5jhmygkt3q9s8d26j0.png" alt="Lighthouse-v2-Next"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Next?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Improve accessibility: the website is typically viewed for only a few seconds, a minute at most, so making the most of the user's time is important.&lt;/li&gt;
&lt;li&gt;Put more thought into keeping the landing page closely aligned with what I actually do.&lt;/li&gt;
&lt;li&gt;Integrate my demo videos and photos of hackathons as reels/shorts.&lt;/li&gt;
&lt;li&gt;Last but not least, keep doing more so there is always fresh content to add to the site.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>portfolio</category>
      <category>gemini</category>
    </item>
    <item>
      <title>IRIS Dataset Implementation</title>
      <dc:creator>srini047</dc:creator>
      <pubDate>Sat, 11 Sep 2021 13:53:26 +0000</pubDate>
      <link>https://dev.to/hackthisfall/iris-dataset-implementation-2m7i</link>
      <guid>https://dev.to/hackthisfall/iris-dataset-implementation-2m7i</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The IRIS dataset consists of 150 rows and 5 columns: sepal length in cm, sepal width in cm, petal length in cm, petal width in cm, and species. There are 3 species: Iris Setosa, Iris Versicolor, and Iris Virginica.&lt;/p&gt;

&lt;p&gt;The dataset can be accessed through this &lt;a href="https://archive-beta.ics.uci.edu/ml/datasets/Iris" rel="noopener noreferrer"&gt;link&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Another feature of this dataset is that it is perfectly balanced: there are exactly 50 rows for each species. It is also very clean, which is why many beginners like to start their ML journey with it.&lt;/p&gt;
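&lt;p&gt;As a quick sanity check of that balance, here is a short sketch using scikit-learn's bundled copy of the Iris data (a stand-in for the downloaded CSV; its column names differ slightly from the UCI file):&lt;/p&gt;

```python
# Verify the 150-row, 5-column, 50-per-species claim using scikit-learn's
# built-in copy of the Iris dataset (standing in for the downloaded CSV).
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True)
df = iris.frame  # 4 feature columns plus the 'target' column

print(df.shape)                      # (150, 5)
print(df['target'].value_counts())   # 50 rows for each of the 3 species
```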

&lt;p&gt;The 3 different species of the IRIS dataset are pictured below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0f59nqusyim31emistu5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0f59nqusyim31emistu5.PNG" alt="Different IRIS Species" width="800" height="299"&gt;&lt;/a&gt;&lt;br&gt;
Ref: Different IRIS Species&lt;/p&gt;

&lt;p&gt;Now let's get our hands dirty with some code. You will need Google Colab or Visual Studio Code (or any IDE), the dataset (downloaded as a CSV from the link above), and the excitement to learn something new.&lt;/p&gt;
&lt;h3&gt;
  
  
  Framing the Problem
&lt;/h3&gt;

&lt;p&gt;Our ultimate goal is to predict the species a flower belongs to, given the length and width of its sepal and petal.&lt;/p&gt;
&lt;h2&gt;
  
  
  Data Preprocessing and Visualization
&lt;/h2&gt;

&lt;p&gt;Importing the necessary libraries&lt;br&gt;
First, let us import all the libraries required to run the code.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import numpy as np
import pandas as pd

import seaborn as sns

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we import NumPy, Pandas, Matplotlib, Seaborn, and other necessary modules from Sklearn.&lt;/p&gt;

&lt;p&gt;After importing the necessary libraries let's import the IRIS Dataset that we are going to work on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Importing Dataset
&lt;/h2&gt;

&lt;p&gt;To import it, we use the pandas 'read_csv' function, which takes the file path of the dataset.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data  = pd.read_csv('iris_data.csv')
data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After importing, printing the data confirms that there are 150 rows and 5 columns. Digging deeper, we find no null values: every row and column is filled, which saves a lot of cleaning work. After that, I encoded the Species column (converted it into a numeric form that models can use) as follows:&lt;/p&gt;

&lt;ol&gt;
    &lt;li&gt;Iris Setosa&lt;/li&gt;
    &lt;li&gt;Iris Versicolor&lt;/li&gt;
    &lt;li&gt;Iris Virginica&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The code for the same is given below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;onehot = []

for i in data['Species']:
    if (i == 'setosa'):
        onehot.append(1)
    elif (i == 'virginica'):
        onehot.append(3)
    else:
        onehot.append(2)
data['Species'] = onehot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
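&lt;p&gt;The loop above works fine; an arguably more idiomatic alternative is pandas' Series.map, shown here on a tiny stand-in DataFrame (the species strings are assumed to match those in the CSV):&lt;/p&gt;

```python
import pandas as pd

# Map species names to numeric labels in one step with Series.map,
# an alternative to the explicit loop above.
data = pd.DataFrame({'Species': ['setosa', 'versicolor', 'virginica', 'setosa']})
codes = {'setosa': 1, 'versicolor': 2, 'virginica': 3}
data['Species'] = data['Species'].map(codes)

print(data['Species'].tolist())  # [1, 2, 3, 1]
```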



&lt;h2&gt;
  
  
  Visualizations
&lt;/h2&gt;

&lt;p&gt;I have implemented the encoding in a way that is easy to follow, but it could be done in many other ways. Now it's time to visualize the data using Matplotlib and Seaborn.&lt;/p&gt;

&lt;p&gt;Using the 'countplot' function, I plotted the count of each species, which turns out to be 50 each (matching the statement at the beginning). The code is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sns.countplot(data['Species'])
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pvkckotpuphdvjn0iks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1pvkckotpuphdvjn0iks.png" alt="CountPlot Output" width="382" height="262"&gt;&lt;/a&gt;&lt;br&gt;
Ref: CountPlot Output&lt;/p&gt;

&lt;p&gt;Seaborn also offers a FacetGrid plot, which maps a plot onto multiple axes arranged in a grid of rows and columns that correspond to levels of variables in the dataset. The code is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sns.FacetGrid(data, hue ="Species",height = 6).map(plt.scatter, 'Sepal.Length', 'Petal.Length').add_legend()
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntn7yvq6venmj2l6ruzs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntn7yvq6venmj2l6ruzs.png" alt="FacetGrid Output" width="472" height="424"&gt;&lt;/a&gt;&lt;br&gt;
Ref: FacetGrid Output&lt;/p&gt;

&lt;p&gt;There is one more interesting plot called a "pairplot", which draws a scatter plot for every pair of numerical columns. Our dataset has 4 numerical columns, so we get a 4 * 4 grid of 16 panels (the diagonal shows each variable's distribution rather than a scatter plot). Sounds interesting, right? It looks even better once we visualize it.&lt;/p&gt;

&lt;p&gt;Code to implement the same (small code, big difference):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sns.pairplot(data, hue="Species")
plt.show()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9u04slgebg7me29gwg5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy9u04slgebg7me29gwg5.png" alt="Alt Text" width="763" height="709"&gt;&lt;/a&gt;&lt;br&gt;
Ref: PairPlot Output having 16 different Graphs&lt;/p&gt;

&lt;p&gt;It may look busy at first, but the pairplot becomes quite readable once you study it. Now let's move on to model building.&lt;/p&gt;
&lt;h2&gt;
  
  
  Model Building
&lt;/h2&gt;

&lt;p&gt;It is considered one of the most important parts of any ML workflow. In this blog I cover one classification algorithm, Logistic Regression, but the linked code also contains implementations using KNN and a Decision Tree Classifier.&lt;/p&gt;

&lt;p&gt;Before building the model we have to separate the independent and dependent variables. In our case, the independent variables are the length and width of the sepal and petal. We can easily separate them using the "iloc" indexer.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X = data.iloc[ : , : 4]
y = pd.DataFrame(data['Species'])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we have to split the data into training and testing sets. A natural question is what ratio gives optimal results. It is largely trial and error, but a 70:30 or 75:25 split works well in most cases; I used 75:25 here. There is no hard and fast rule, so feel free to try different split sizes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
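&lt;p&gt;For 150 samples a 75:25 split is not perfectly even; scikit-learn rounds the test portion up. A quick check on dummy data of the same size shows 112 training rows and 38 test rows:&lt;/p&gt;

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Dummy data with the same number of rows as the Iris dataset,
# just to inspect how a 75:25 split divides 150 samples.
X = np.arange(150).reshape(150, 1)
y = np.arange(150)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
print(len(X_train), len(X_test))  # 112 38
```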



&lt;p&gt;After successfully splitting the data, we fit the training set to a Logistic Regression model.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;model = LogisticRegression()
model.fit(X_train, y_train)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
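&lt;p&gt;Beyond eyeballing single predictions, one way to gauge the trained model is its accuracy on the held-out test set via model.score. Here is a self-contained sketch using scikit-learn's bundled Iris data (max_iter is raised so the solver converges cleanly):&lt;/p&gt;

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Train on the bundled Iris data and report accuracy on the 25% test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)  # default max_iter can stop short of convergence
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # fraction of test flowers classified correctly
```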



&lt;p&gt;Hurrah! The model has been successfully trained. Now let's actually test it: we provide some sample values as input and let the model predict the species.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X_new = np.array([[5, 2.9, 1, 0.2]])
print("X_new.shape: {}".format(X_new.shape))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the output for the input values looks right, then congratulations: you have successfully built your model. This brings us to the end of the blog; I hope it was useful and gave clear insights into working with the IRIS dataset.&lt;/p&gt;

&lt;p&gt;Here is the &lt;a href="https://github.com/srini047/htf-blog-code" rel="noopener noreferrer"&gt;GitHub link&lt;/a&gt; for the code used in the blog.&lt;/p&gt;

&lt;p&gt;Please feel free to post your views, your comments on how to improve, and any typos you spot in the blog.&lt;/p&gt;

&lt;p&gt;Feel free to connect with me through &lt;a href="https://www.linkedin.com/in/sriniketh-jayasendil-27914a198/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Happy Learning and Blogging!!!&lt;/p&gt;

&lt;h1&gt;
  
  
  HackThisFall 2.0
&lt;/h1&gt;

&lt;p&gt;Hey folks,&lt;/p&gt;

&lt;p&gt;I am thrilled to inform you that I have been accepted as a “Hackathon Evangelist” for Hack This Fall 2.0!🎉&lt;/p&gt;

&lt;p&gt;Super excited to bring a change and contribute to the hacker community in a meaningful way!🚀&lt;/p&gt;

&lt;p&gt;Do &lt;a href="https://hackthisfall.tech/" rel="noopener noreferrer"&gt;Register&lt;/a&gt; for Hack This Fall 2.0 here🔗: &lt;/p&gt;

&lt;p&gt;Register in the hackathon by heading over to the link given above and enter the special code "HTFHE067" to earn some exclusive participation goodies along!!&lt;/p&gt;

&lt;h1&gt;
  
  
  #InnovateForGood #HackathonEvangelist #Hackathon
&lt;/h1&gt;

</description>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>python</category>
      <category>vscode</category>
    </item>
  </channel>
</rss>
