<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Emanuele Strazzullo</title>
    <description>The latest articles on DEV Community by Emanuele Strazzullo (@emanuelestrazzullo).</description>
    <link>https://dev.to/emanuelestrazzullo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1083229%2Fcadf1b7e-8e57-4929-b5f0-01bbe9397dad.png</url>
      <title>DEV Community: Emanuele Strazzullo</title>
      <link>https://dev.to/emanuelestrazzullo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/emanuelestrazzullo"/>
    <language>en</language>
    <item>
      <title>Building a Browser-Based RAG System with WebGPU</title>
      <dc:creator>Emanuele Strazzullo</dc:creator>
      <pubDate>Fri, 07 Nov 2025 13:20:25 +0000</pubDate>
      <link>https://dev.to/emanuelestrazzullo/building-a-browser-based-rag-system-with-webgpu-h2n</link>
      <guid>https://dev.to/emanuelestrazzullo/building-a-browser-based-rag-system-with-webgpu-h2n</guid>
      <description>&lt;p&gt;I built a proof-of-concept that lets you chat with PDF documents using AI models running entirely in your browser via WebGPU. No backend, no API keys, complete privacy.&lt;/p&gt;

&lt;p&gt;🔗 &lt;strong&gt;Demo:&lt;/strong&gt; &lt;a href="https://webpizza-ai-poc.vercel.app/" rel="noopener noreferrer"&gt;https://webpizza-ai-poc.vercel.app/&lt;/a&gt;&lt;br&gt;
📦 &lt;strong&gt;Code:&lt;/strong&gt; &lt;a href="https://github.com/stramanu/webpizza-ai-poc" rel="noopener noreferrer"&gt;https://github.com/stramanu/webpizza-ai-poc&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Why?&lt;/h2&gt;

&lt;p&gt;I've been following the progress of WebGPU and WebLLM, and I was curious: &lt;strong&gt;Can we run a full RAG pipeline in the browser?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;RAG (Retrieval-Augmented Generation) typically requires:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A vector database&lt;/li&gt;
&lt;li&gt;An embedding model&lt;/li&gt;
&lt;li&gt;A language model&lt;/li&gt;
&lt;li&gt;Orchestration logic&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Turns out, modern browsers can handle all of this!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important note:&lt;/strong&gt; This is a proof-of-concept focused on exploring the fundamental principle of client-side RAG, not an example of production-ready code or best practices. The goal was experimentation with WebGPU and LLMs in the browser, so expect rough edges and architectural shortcuts.&lt;/p&gt;
&lt;h2&gt;The Stack&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; Angular 20 (standalone components, zoneless)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM:&lt;/strong&gt; WebLLM v0.2.79 + WeInfer (optimized fork)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embeddings:&lt;/strong&gt; Transformers.js (all-MiniLM-L6-v2)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector Store:&lt;/strong&gt; IndexedDB with cosine similarity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PDF Parser:&lt;/strong&gt; PDF.js&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deployment:&lt;/strong&gt; Vercel&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;How It Works&lt;/h2&gt;
&lt;h3&gt;1. Model Loading&lt;/h3&gt;

&lt;p&gt;WebLLM downloads pre-compiled MLC models (Phi-3, Llama, Mistral). The first load is slow (models weigh 1–4 GB), but after that they're served from the browser cache.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;initialize&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;2. Document Ingestion&lt;/h3&gt;

&lt;p&gt;Upload a PDF → Parse with PDF.js → Chunk into ~500 char pieces → Embed each chunk → Store in IndexedDB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;chunks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;parser&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parseFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;file&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt; &lt;span class="k"&gt;of&lt;/span&gt; &lt;span class="nx"&gt;chunks&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;embedding&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;embedder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;embed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vectorStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addChunk&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;embedding&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
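&lt;p&gt;The ~500-char chunking step itself can be sketched in a few lines (&lt;code&gt;chunkText&lt;/code&gt; is a hypothetical name; the repo's real parser may split on smarter boundaries):&lt;/p&gt;

```typescript
// Hypothetical fixed-size chunker for the ingestion step above.
// Real-world chunkers usually prefer sentence or paragraph boundaries.
function chunkText(text: string, size = 500): string[] {
  // Number of fixed-size windows needed to cover the whole string.
  const count = Math.ceil(text.length / size);
  return Array.from({ length: count }, (_, i) =>
    text.slice(i * size, (i + 1) * size)
  );
}
```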



&lt;h3&gt;3. Query Processing&lt;/h3&gt;

&lt;p&gt;User asks question → Embed query → Similarity search in IndexedDB → Get top-k chunks → Feed to LLM with context.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;queryEmbedding&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;embedder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;embed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;question&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;relevantChunks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vectorStore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;queryEmbedding&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;relevantChunks&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;text&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s1"&gt;n&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;question&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
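&lt;p&gt;The "similarity search" above is plain brute-force cosine similarity over every stored chunk. A self-contained sketch (hypothetical names; the PoC wraps this in async IndexedDB reads):&lt;/p&gt;

```typescript
interface StoredChunk {
  text: string;
  embedding: number[];
}

// Dot product via reduce; both vectors are assumed to have the same length.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

// Cosine similarity: dot product normalized by both vector magnitudes.
function cosineSimilarity(a: number[], b: number[]): number {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}

// Score every chunk against the query, sort descending, keep the best k.
function topK(query: number[], chunks: StoredChunk[], k: number): StoredChunk[] {
  return chunks
    .map((c) => ({ chunk: c, score: cosineSimilarity(query, c.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.chunk);
}
```

This is O(n) per query, which is exactly the "brute force" limitation noted under "What I'd Do Differently".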



&lt;h2&gt;Challenges&lt;/h2&gt;

&lt;h3&gt;1. Cross-Origin Isolation&lt;/h3&gt;

&lt;p&gt;The multithreaded WebAssembly runtime relies on &lt;code&gt;SharedArrayBuffer&lt;/code&gt;, which browsers only expose under cross-origin isolation, i.e. with these headers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Cross-Origin-Embedder-Policy: require-corp
Cross-Origin-Opener-Policy: same-origin
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Vercel makes this easy with &lt;code&gt;vercel.json&lt;/code&gt;, but &lt;code&gt;require-corp&lt;/code&gt; blocks any external resource that isn't served with CORS/CORP headers.&lt;/p&gt;
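&lt;p&gt;For reference, a minimal &lt;code&gt;vercel.json&lt;/code&gt; that applies both headers site-wide could look like this (a sketch following Vercel's headers syntax; the repo's actual config may differ):&lt;/p&gt;

```json
{
  "headers": [
    {
      "source": "/(.*)",
      "headers": [
        { "key": "Cross-Origin-Embedder-Policy", "value": "require-corp" },
        { "key": "Cross-Origin-Opener-Policy", "value": "same-origin" }
      ]
    }
  ]
}
```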

&lt;h3&gt;2. Memory Management&lt;/h3&gt;

&lt;p&gt;Browsers aren't designed to hold 4 GB models in memory. I had to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clear vector store before loading new documents&lt;/li&gt;
&lt;li&gt;Implement proper cleanup for embeddings&lt;/li&gt;
&lt;li&gt;Handle model caching effectively&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;3. WebGPU Compatibility&lt;/h3&gt;

&lt;p&gt;Not all browsers support WebGPU yet. Fallback to WebAssembly works, but it's significantly slower. Added detection logic to guide users to compatible browsers.&lt;/p&gt;
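&lt;p&gt;The detection itself can be as small as a feature check (hypothetical helper; a production check would also probe &lt;code&gt;navigator.gpu.requestAdapter()&lt;/code&gt;, since the mere presence of &lt;code&gt;navigator.gpu&lt;/code&gt; doesn't guarantee a usable adapter):&lt;/p&gt;

```typescript
// Pick an inference backend from a navigator-like object.
// Taking the object as a parameter (instead of reading the global)
// keeps this testable outside the browser.
function pickBackend(nav: { gpu?: unknown }): "webgpu" | "wasm" {
  return nav.gpu ? "webgpu" : "wasm";
}

// In the browser: const backend = pickBackend(navigator);
```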

&lt;h2&gt;What I'd Do Differently&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Implement proper vector indexing&lt;/strong&gt; - Currently brute force cosine similarity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add model quantization options&lt;/strong&gt; - Let users choose speed vs quality&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better chunking strategies&lt;/strong&gt; - Currently just splitting at 500 chars&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Streaming for large documents&lt;/strong&gt; - Don't embed everything at once&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Support for multiple document formats&lt;/strong&gt; - Not just PDFs&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Privacy Win&lt;/h2&gt;

&lt;p&gt;One unexpected benefit: &lt;strong&gt;Complete privacy by default.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your documents never leave your device. No API calls, no server uploads, no tracking. Everything happens in your browser.&lt;/p&gt;

&lt;p&gt;This makes it useful for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sensitive documents (legal, medical, personal)&lt;/li&gt;
&lt;li&gt;Offline environments&lt;/li&gt;
&lt;li&gt;Privacy-conscious users&lt;/li&gt;
&lt;li&gt;Demos without infrastructure costs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Try It Yourself&lt;/h2&gt;

&lt;p&gt;🔗 &lt;strong&gt;Live Demo:&lt;/strong&gt; &lt;a href="https://webpizza-ai-poc.vercel.app/" rel="noopener noreferrer"&gt;https://webpizza-ai-poc.vercel.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Requirements:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chrome/Edge 113+ (WebGPU support)&lt;/li&gt;
&lt;li&gt;4GB+ RAM&lt;/li&gt;
&lt;li&gt;Modern GPU (or patience for CPU fallback)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Quick start:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/stramanu/webpizza-ai-poc
&lt;span class="nb"&gt;cd &lt;/span&gt;webpizza-ai-poc
npm &lt;span class="nb"&gt;install
&lt;/span&gt;npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Closing Thoughts&lt;/h2&gt;

&lt;p&gt;This is a &lt;strong&gt;proof-of-concept&lt;/strong&gt;, not production software. It has bugs, rough edges, and questionable architectural decisions.&lt;/p&gt;

&lt;p&gt;But it proves that browser-based AI is getting real. WebGPU + WebAssembly + modern JS frameworks = surprisingly capable local inference.&lt;/p&gt;

&lt;p&gt;What would you build with this stack?&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Inspired by:&lt;/strong&gt; &lt;a href="https://github.com/datapizza-labs/datapizza-ai" rel="noopener noreferrer"&gt;DataPizza AI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Questions? Issues?&lt;/strong&gt; Drop a comment or open an issue on GitHub!&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>llm</category>
      <category>rag</category>
    </item>
    <item>
      <title>How I built a CLI tool to simplify my daily terminal workflow</title>
      <dc:creator>Emanuele Strazzullo</dc:creator>
      <pubDate>Tue, 21 Oct 2025 15:01:28 +0000</pubDate>
      <link>https://dev.to/emanuelestrazzullo/how-i-built-a-cli-tool-to-simplify-my-daily-terminal-workflow-1m28</link>
      <guid>https://dev.to/emanuelestrazzullo/how-i-built-a-cli-tool-to-simplify-my-daily-terminal-workflow-1m28</guid>
      <description>&lt;p&gt;As a developer, I live in the terminal. Every day I end up typing the same long commands: boot servers, build artifacts, sync repos, SSH into boxes, run one-off scripts. It’s fine… until it isn’t. I’d tweak flags, forget exact arguments, or copy–paste broken snippets.&lt;/p&gt;

&lt;p&gt;I wanted something tiny, fast, and mine: a way to name the commands I use most and run them from anywhere with muscle‑memory simplicity.&lt;/p&gt;

&lt;p&gt;That’s how mcl — My Command Line — was born.&lt;/p&gt;

&lt;h2&gt;🧠 The idea behind mcl&lt;/h2&gt;

&lt;p&gt;The “aha” moment came after repeating the same flows across projects. I didn’t want another framework. I wanted a thin layer over the shell where I could:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Describe commands once&lt;/li&gt;
&lt;li&gt;Reuse them with arguments and variables&lt;/li&gt;
&lt;li&gt;Keep them organized across projects&lt;/li&gt;
&lt;li&gt;See what I have at a glance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With mcl you write small JSON recipes. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"say-hello"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"echo 'Hello!'"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"start-dev"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npm run dev"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you just run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mcl say-hello
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No DSL. No ceremony. Just your shell, with a nicer memory.&lt;/p&gt;

&lt;h2&gt;⚙️ How it works (local vs global, args, vars)&lt;/h2&gt;

&lt;p&gt;mcl reads from two places and merges them:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local project config: &lt;code&gt;./mcl.json&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Global config: &lt;code&gt;~/.mcl/global-mcl.json&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Local overrides global, so you can keep common shortcuts globally (e.g., &lt;code&gt;open-docs&lt;/code&gt;, &lt;code&gt;deploy&lt;/code&gt;) and specialize per project.&lt;/p&gt;
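&lt;p&gt;The merge itself can be sketched in a few lines of Python (illustrative only, not mcl's actual code):&lt;/p&gt;

```python
def merge_configs(global_cfg: dict, local_cfg: dict) -> dict:
    """Merge global and local mcl configs; local keys win on conflict."""
    merged = {"vars": {}, "scripts": {}}
    # Apply global first, local second, so local entries override.
    for cfg in (global_cfg, local_cfg):
        merged["vars"].update(cfg.get("vars", {}))
        merged["scripts"].update(cfg.get("scripts", {}))
    return merged
```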

&lt;p&gt;It also supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Positional placeholders: &lt;code&gt;$1&lt;/code&gt;, &lt;code&gt;$2&lt;/code&gt;, … and optional ones like &lt;code&gt;?$1&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Named vars: &lt;code&gt;$project&lt;/code&gt;, &lt;code&gt;$version&lt;/code&gt; via a &lt;code&gt;vars&lt;/code&gt; object&lt;/li&gt;
&lt;li&gt;Nested flows: scripts can be nested objects, e.g. &lt;code&gt;example.date.utc&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Dry-run mode: see exactly what would run before executing&lt;/li&gt;
&lt;li&gt;Env sharing: &lt;code&gt;--share-vars&lt;/code&gt; exports config vars and args to subprocesses&lt;/li&gt;
&lt;/ul&gt;
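&lt;p&gt;Placeholder substitution along these lines can be sketched with a single regex (a hypothetical &lt;code&gt;substitute&lt;/code&gt; helper; mcl's real resolver handles more cases):&lt;/p&gt;

```python
import re

def substitute(template: str, args: list, variables: dict) -> str:
    """Expand $1/$2 positionals, ?$n optionals, and $name vars in a script."""
    def repl(match):
        optional, token = match.group(1), match.group(2)
        if token.isdigit():
            idx = int(token) - 1
            if idx >= len(args):
                if optional:
                    return ""  # optional positional: expand to nothing
                raise ValueError("missing argument for $" + token)
            return args[idx]
        # Named variable; leave the token untouched if it is unknown.
        return str(variables.get(token, match.group(0)))
    return re.sub(r"(\?)?\$(\w+)", repl, template)
```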

&lt;p&gt;A richer example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"vars"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"project"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mcl"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"0.2.0"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"example"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"hello"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"echo Hello, $1!"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"date"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"utc"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"date -u"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"build"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"win"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GOOS=windows GOARCH=amd64 wails build"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run them like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mcl &lt;span class="nt"&gt;--dry-run&lt;/span&gt; example &lt;span class="nb"&gt;date &lt;/span&gt;utc
mcl build win
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And if you run plain &lt;code&gt;mcl&lt;/code&gt;, it prints a handy list of available scripts (local first, then global).&lt;/p&gt;

&lt;p&gt;Under the hood: it’s Python + Click, with strict type hints, pytest, mypy, and black. The CLI resolves your script path, applies substitutions, and executes steps in order.&lt;/p&gt;

&lt;h2&gt;🚀 What’s next&lt;/h2&gt;

&lt;p&gt;Here’s what I’m exploring next to make mcl smarter, safer, and more shareable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI‑assisted recipe generator: analyze your repo (Dockerfile, package.json, pyproject, Makefile, CI) and propose a ready‑to‑use &lt;code&gt;mcl.json&lt;/code&gt; with common tasks.&lt;/li&gt;
&lt;li&gt;Natural language → command recipes: “build and publish the docker image” becomes a reproducible script (with a dry‑run preview first).&lt;/li&gt;
&lt;li&gt;Smart discovery and autocomplete: fuzzy search across local/global scripts, inline arg hints, and quick previews of what will run.&lt;/li&gt;
&lt;li&gt;Safety checks: secret/unsafe flag linting and an optional “explain this command” powered by an LLM before you execute.&lt;/li&gt;
&lt;li&gt;Team sharing &amp;amp; sync: share curated script packs across repos (and optionally sync via a gist or a small registry).&lt;/li&gt;
&lt;li&gt;Plugin hooks &amp;amp; marketplace: pre/post hooks, custom resolvers, and reusable packs (e.g., Docker, Git, Node, Python).&lt;/li&gt;
&lt;li&gt;Config schema validation (Pydantic) and optional YAML support.&lt;/li&gt;
&lt;li&gt;Multi‑platform test matrix (tox) to keep behavior consistent across OSes and shells.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;📦 Try it out&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/stramanu/mcl-tool" rel="noopener noreferrer"&gt;https://github.com/stramanu/mcl-tool&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;PyPI: &lt;a href="https://pypi.org/project/mcl-tool/" rel="noopener noreferrer"&gt;https://pypi.org/project/mcl-tool/&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Quick start:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# install (recommended)&lt;/span&gt;
pipx &lt;span class="nb"&gt;install &lt;/span&gt;mcl-tool

&lt;span class="c"&gt;# or in a virtual environment&lt;/span&gt;
pip &lt;span class="nb"&gt;install &lt;/span&gt;mcl-tool

&lt;span class="c"&gt;# initialize a local config&lt;/span&gt;
mcl init

&lt;span class="c"&gt;# run your scripts&lt;/span&gt;
mcl &amp;lt;script&amp;gt; &lt;span class="o"&gt;[&lt;/span&gt;args...]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If this sounds useful, a ⭐️ on GitHub helps a ton — and I’d love to hear your feedback or feature ideas. Let’s make the terminal a bit more ergonomic together.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>cli</category>
      <category>opensource</category>
      <category>python</category>
    </item>
  </channel>
</rss>
