<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nishanth Abimanyu</title>
    <description>The latest articles on DEV Community by Nishanth Abimanyu (@nishanth_abimanyu_001).</description>
    <link>https://dev.to/nishanth_abimanyu_001</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3121592%2F7967db36-4762-472d-804c-d8ce24cd4c7f.jpg</url>
      <title>DEV Community: Nishanth Abimanyu</title>
      <link>https://dev.to/nishanth_abimanyu_001</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nishanth_abimanyu_001"/>
    <language>en</language>
    <item>
      <title>I Stopped Rebuilding the Telescope and Built a Lens Instead.....</title>
      <dc:creator>Nishanth Abimanyu</dc:creator>
      <pubDate>Wed, 04 Mar 2026 10:01:38 +0000</pubDate>
      <link>https://dev.to/nishanth_abimanyu_001/i-stopped-rebuilding-the-telescope-and-built-a-lens-instead-4kme</link>
      <guid>https://dev.to/nishanth_abimanyu_001/i-stopped-rebuilding-the-telescope-and-built-a-lens-instead-4kme</guid>
      <description>&lt;p&gt;In my previous work translating the &lt;em&gt;Surya Siddhanta&lt;/em&gt; (an ancient Indian astronomical text), I spent months doing something I eventually realized was a mistake. I was manually calculating planetary positions to verify Sanskrit verses against modern ephemerides. It was a tedious, repetitive process of building bespoke simulations for a single cultural context.&lt;/p&gt;

&lt;p&gt;When I moved on to my &lt;strong&gt;Final Year Project&lt;/strong&gt; focusing on Mayan and Chinese astronomy, I almost fell into the same trap. I started coding a new simulation for the Mayan calendar. Then, I planned to write another one for the Han Dynasty.&lt;/p&gt;

&lt;p&gt;I realized I was rebuilding the "telescope" every time I wanted to look at a different star.&lt;/p&gt;

&lt;p&gt;That’s when I pivoted to build &lt;strong&gt;Sky Culture MCP&lt;/strong&gt;: a computational microservice that abstracts the orbital mechanics, allowing AI agents to focus on cultural interpretation.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Lens
&lt;/h2&gt;

&lt;p&gt;The core philosophy of this project is simple: &lt;strong&gt;Change the lens, not the telescope.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this architecture, the "Telescope" is the physics engine—the complex math required to calculate where a planet was 2,000 years ago. The "Lens" is the specific cultural context (names, dates, and significance).&lt;/p&gt;

&lt;p&gt;I built this using the &lt;strong&gt;Model Context Protocol (MCP)&lt;/strong&gt; to serve as a bridge between high-precision astronomical data and Large Language Models (LLMs) like Claude.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Tech Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Protocol:&lt;/strong&gt; Model Context Protocol (MCP)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ephemeris:&lt;/strong&gt; NASA JPL DE421 (high precision; note that DE421 itself spans roughly 1900 AD to 2050 AD, so older epochs require a long-span ephemeris such as DE422)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Physics Engine:&lt;/strong&gt; Skyfield (Vector astrometry)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time Standard:&lt;/strong&gt; Barycentric Dynamical Time (TDB)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How it Works
&lt;/h2&gt;

&lt;p&gt;Instead of asking an LLM to "hallucinate" where Mars was during a specific Mayan Long Count date, the Agent queries my MCP server. The server handles the vector math and returns precise J2000 coordinates.&lt;/p&gt;
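&lt;p&gt;The "vector math" the server hides is, at its core, a spherical-coordinate conversion. Here is a minimal sketch in plain Python (in the real service Skyfield produces these values with proper light-time corrections; the function name here is mine):&lt;/p&gt;

```python
import math

def vector_to_radec(x, y, z):
    """Convert a J2000 position vector to (RA in hours, Dec in degrees)."""
    ra = math.atan2(y, x) % (2 * math.pi)     # right ascension, 0..2pi radians
    dec = math.atan2(z, math.hypot(x, y))     # declination, -pi/2..pi/2 radians
    return ra * 12 / math.pi, math.degrees(dec)
```

&lt;p&gt;Skyfield's own &lt;code&gt;radec()&lt;/code&gt; returns the same pair (plus distance) after handling the hard parts, which is exactly why the engine is worth abstracting.&lt;/p&gt;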

&lt;p&gt;Here is the primary function signature exposed to the AI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nf"&gt;convert_culture_to_coordinates&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;culture_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;object_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;date_str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It takes a cultural input and returns a scientific output. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input:&lt;/strong&gt; Culture: &lt;code&gt;mayan&lt;/code&gt;, Object: &lt;code&gt;chak_ek&lt;/code&gt;, Date: &lt;code&gt;M:9,16,4,10,8&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Output:&lt;/strong&gt; Julian Day Number + Right Ascension &amp;amp; Declination&lt;/li&gt;
&lt;/ul&gt;
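&lt;p&gt;The Long Count parsing implied by that input can be sketched with the standard Goodman-Martinez-Thompson (GMT) correlation constant, 584283. The function name and the handling of the &lt;code&gt;M:&lt;/code&gt; prefix are my assumptions about the server's internals:&lt;/p&gt;

```python
GMT_CORRELATION = 584283  # Goodman-Martinez-Thompson correlation (Long Count 0.0.0.0.0)

def long_count_to_jdn(date_str):
    """Convert 'M:baktun,katun,tun,uinal,kin' to a Julian Day Number."""
    baktun, katun, tun, uinal, kin = map(int, date_str.removeprefix("M:").split(","))
    days_since_creation = baktun * 144000 + katun * 7200 + tun * 360 + uinal * 20 + kin
    return GMT_CORRELATION + days_since_creation
```

&lt;p&gt;Under this correlation, &lt;code&gt;9.16.4.10.8&lt;/code&gt; resolves to JDN 1997131, a date in the eighth century CE, presumably the value the server then feeds to the ephemeris.&lt;/p&gt;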

&lt;h3&gt;
  
  
  The Data Structure
&lt;/h3&gt;

&lt;p&gt;The "Lenses" are defined in a JSON library. Here is a snippet of how I map native names to modern identifiers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"chinese_han"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Chinese (Han)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"objects"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"yinghuo"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Yinghuo (Fire Star)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"modern_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"mars"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mayan"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Mayan"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"objects"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"chak_ek"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Chak Ek (Great Star)"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"modern_id"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"venus"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
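&lt;p&gt;Swapping lenses is then just a dictionary walk over this JSON. A sketch (the loader and helper names are mine):&lt;/p&gt;

```python
import json

# A trimmed copy of the lens library shown above.
LENSES = json.loads("""
{
  "chinese_han": {"name": "Chinese (Han)",
                  "objects": {"yinghuo": {"name": "Yinghuo (Fire Star)", "modern_id": "mars"}}},
  "mayan":       {"name": "Mayan",
                  "objects": {"chak_ek": {"name": "Chak Ek (Great Star)", "modern_id": "venus"}}}
}
""")

def resolve_object(culture_id, object_name):
    """Map a culture-specific object name to the modern ephemeris identifier."""
    return LENSES[culture_id]["objects"][object_name]["modern_id"]
```

&lt;p&gt;Adding a new culture is a data change, not a code change: one more key in the JSON, and the same telescope serves it.&lt;/p&gt;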



&lt;h2&gt;
  
  
  Integration with Claude Desktop
&lt;/h2&gt;

&lt;p&gt;One of the coolest parts of this project was integrating it directly into my daily workflow. By running the engine as a Docker container, I can add it to my &lt;code&gt;claude_desktop_config.json&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"sky-culture-lite"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"docker"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"args"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"run"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"-i"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"--rm"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"yourname/sky-culture-lite"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, when I'm chatting with Claude about a historical text, it has a tool it can call to get ground-truth astronomical data without me needing to open a separate simulation software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;This project represents a shift in how we use AI for history and science. We shouldn't rely on LLMs to do math (they are bad at it). We should rely on them for &lt;strong&gt;interpretation&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;By offloading the physics to a deterministic engine (Skyfield) and using the MCP to bridge the gap, I've created a system where the AI can "put on a Mayan lens" or a "Han Dynasty lens" to see the sky exactly as they did—leaving the Agent free to focus on the poetry, the history, and the meaning behind the stars.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Thanks for reading! If you're interested in Archaeoastronomy or MCP, drop a comment below.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>This GitHub Repo Is A Literal Money Printer For n8n Developers</title>
      <dc:creator>Nishanth Abimanyu</dc:creator>
      <pubDate>Tue, 30 Dec 2025 09:10:16 +0000</pubDate>
      <link>https://dev.to/nishanth_abimanyu_001/this-github-repo-is-a-literal-money-printer-for-n8n-developers-2ppd</link>
      <guid>https://dev.to/nishanth_abimanyu_001/this-github-repo-is-a-literal-money-printer-for-n8n-developers-2ppd</guid>
      <description>&lt;p&gt;If you are an n8n developer, you know the reality: &lt;strong&gt;Time is money.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every hour you spend debugging a broken node or figuring out an authentication flow is an hour you aren't billing. To truly scale your freelance income or your agency, you need speed. You need to deliver high-quality automation in a fraction of the time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/nishanthabimanyu/Automation-WorkBook.git" rel="noopener noreferrer"&gt;This GitHub repository that is the secret weapon for doing exactly that. It’s called the &lt;strong&gt;Automation WorkBook&lt;/strong&gt;, and it is designed to &lt;strong&gt;10x your workflow development&lt;/strong&gt; by giving you a library of verified, sellable solutions.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How This Repo "10x-es" Your Development
&lt;/h2&gt;

&lt;p&gt;Most developers start every project with a blank canvas. That is the slow lane.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Automation WorkBook&lt;/strong&gt; allows you to skip the first 90% of the work. Instead of building from scratch, you pull pre-built, industry-standard workflows that handle the heavy lifting for you.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Need a complex RAG Chatbot?&lt;/strong&gt; Don't build the vector store logic from zero. Import the "RAG Chatbot for Company Documents" workflow.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Need to scrape data?&lt;/strong&gt; Stop fighting with selectors. Use the "Ultimate Scraper Workflow".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By using these templates, you move straight to &lt;strong&gt;customization and delivery&lt;/strong&gt;. You deliver in hours what takes other developers days.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Money Angle: Sell "Verified" Solutions
&lt;/h2&gt;

&lt;p&gt;The problem with most n8n repositories is that they are full of "junk" code—scraped from community forums, untested, and often broken. You can't sell that to a client.&lt;/p&gt;

&lt;p&gt;This repository is different because it is &lt;strong&gt;Problem-First&lt;/strong&gt;, not Tool-First. It was built by analyzing &lt;strong&gt;Upwork job postings&lt;/strong&gt; to see what clients are actually paying for.&lt;/p&gt;

&lt;p&gt;Here is how you turn this repo into income:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Sell Lead Generation:&lt;/strong&gt; Clients pay huge retainers for fresh leads. You can deploy the &lt;strong&gt;LinkedIn &amp;amp; Maps Scrapers&lt;/strong&gt; from the "Sales &amp;amp; Marketing" folder immediately.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sell AI Support Agents:&lt;/strong&gt; Use the verified "Intelligent Chatbot" workflows to build customer support systems for e-commerce stores.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sell Operations Automation:&lt;/strong&gt; Offer "HR Resume Parsing" or "Invoice Processing" services using the specialized workflows in the Admin and Finance folders.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Proof It Actually Works: The "Personal Notebook"
&lt;/h2&gt;

&lt;p&gt;The biggest confidence booster in this repo is the &lt;strong&gt;Personal Notebook&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Unlike generic repos, the author here documents &lt;strong&gt;Case Studies&lt;/strong&gt; of actual high-value projects. They break down the &lt;strong&gt;Strategic Planning&lt;/strong&gt;, the &lt;strong&gt;Problem Analysis&lt;/strong&gt;, and the architecture.&lt;/p&gt;

&lt;p&gt;This proves that these aren't just random nodes connected together—they are battle-tested solutions that have solved real business problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;If you want to earn money with n8n, you need to stop acting like a coder and start acting like a business consultant.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Automation WorkBook&lt;/strong&gt; gives you the inventory you need to do that. It saves you time, it gives you sellable products, and it ensures the code you deliver is high-quality.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>I Got My Google Account Suspended for This. Now It's an Official VS Code Feature.</title>
      <dc:creator>Nishanth Abimanyu</dc:creator>
      <pubDate>Mon, 17 Nov 2025 14:25:34 +0000</pubDate>
      <link>https://dev.to/nishanth_abimanyu_001/i-got-my-google-account-suspended-for-this-now-its-an-official-vs-code-feature-3b12</link>
      <guid>https://dev.to/nishanth_abimanyu_001/i-got-my-google-account-suspended-for-this-now-its-an-official-vs-code-feature-3b12</guid>
      <description>&lt;p&gt;A few years ago, during my final year project, I got my Google account suspended for months.&lt;/p&gt;

&lt;p&gt;The crime? I was living in the "two-tab hell" every ML developer knows: my Git repo and project in VS Code, and my GPU-powered notebook in a Chrome tab. The workflow was killing me. I'd &lt;code&gt;!git pull&lt;/code&gt; in a cell, manually upload datasets, and &lt;code&gt;scp&lt;/code&gt; my model checkpoints back and forth. It was slow and painful.&lt;/p&gt;

&lt;p&gt;So, I found a hack. A script that used &lt;code&gt;colab-ssh&lt;/code&gt; and &lt;code&gt;ngrok&lt;/code&gt; (or Cloudflare) to create a reverse tunnel from the Colab VM back to my local machine. I could finally plug my local VS Code &lt;em&gt;directly&lt;/em&gt; into the Colab runtime. It was magic... right up until Google's security systems (rightfully) saw this sketchy, unauthenticated backdoor and locked my account for violating the Terms of Service.&lt;/p&gt;

&lt;p&gt;Fast forward to now. I see the official announcement: &lt;strong&gt;Google Colab is Coming to VS Code.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My first thought wasn't "finally!" It was, "Wait... what?"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttglc52y4xlnhw2k3367.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fttglc52y4xlnhw2k3367.png" alt=" " width="800" height="519"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Did they just whitelist the exact &lt;code&gt;ngrok&lt;/code&gt; hack that got me suspended? Or did they build something else? I had to know. As an open-source contributor, I couldn't just &lt;em&gt;use&lt;/em&gt; it; I had to pop the hood and see the engine. I cloned the &lt;code&gt;colab-vscode&lt;/code&gt; repo to find out.&lt;/p&gt;

&lt;p&gt;Here’s the technical decode of what I found.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Extension Is an Adapter
&lt;/h2&gt;

&lt;p&gt;The first thing I found is that the extension is a brilliant &lt;strong&gt;Adapter&lt;/strong&gt;. It doesn't re-implement a notebook interface; it's a "plugin for a plugin."&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;File:&lt;/strong&gt; &lt;code&gt;package.json&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt; It lists &lt;code&gt;"ms-toolsai.jupyter"&lt;/code&gt; in &lt;code&gt;extensionDependencies&lt;/code&gt;. This means the Colab extension is just a &lt;em&gt;data source&lt;/em&gt; for the official Microsoft Jupyter extension.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;File:&lt;/strong&gt; &lt;code&gt;src/jupyter/provider.ts&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;What it does:&lt;/strong&gt; The &lt;code&gt;ColabJupyterServerProvider&lt;/code&gt; class is the core of the adapter. It implements the &lt;code&gt;JupyterServerProvider&lt;/code&gt; interface that the Microsoft extension expects. When you click "Select Kernel," the Jupyter extension asks this class, "Hey, got any servers?" and this class is responsible for replying.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a key insight: they didn't build a new car; they just built a universal adapter that lets you plug Colab's proprietary engine into the "Jupyter" car we all already drive.&lt;/p&gt;
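&lt;p&gt;In manifest terms, that dependency is a single &lt;code&gt;extensionDependencies&lt;/code&gt; entry (a trimmed sketch, not the repo's full &lt;code&gt;package.json&lt;/code&gt;):&lt;/p&gt;

```json
{
  "extensionDependencies": ["ms-toolsai.jupyter"]
}
```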




&lt;h2&gt;
  
  
  The "Front Door": How They Solved the &lt;code&gt;ngrok&lt;/code&gt; Problem
&lt;/h2&gt;

&lt;p&gt;This was the part I cared about. How do you connect without getting banned?&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;My Hack:&lt;/strong&gt; I used &lt;code&gt;ngrok&lt;/code&gt; to create a &lt;strong&gt;reverse proxy&lt;/strong&gt;. This pokes a hole &lt;em&gt;out&lt;/em&gt; of Google's secure VM to a public URL. This is the "backdoor" that got me banned.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Official Way:&lt;/strong&gt; I found the answer in &lt;code&gt;src/auth/flows/loopback.ts&lt;/code&gt;. They built the complete opposite: a secure, &lt;strong&gt;local-only loopback flow&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you click "Sign In," it doesn't run a tunnel script.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; It spins up a tiny, temporary &lt;code&gt;http.createServer()&lt;/code&gt; on &lt;code&gt;127.0.0.1&lt;/code&gt; at a random port.&lt;/li&gt;
&lt;li&gt; It opens your browser to the real Google OAuth page, but it passes a &lt;code&gt;redirect_uri&lt;/code&gt; pointing to &lt;code&gt;http://127.0.0.1:[your_port]&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; When you sign in, Google redirects your browser &lt;em&gt;back to your own machine&lt;/em&gt;.&lt;/li&gt;
&lt;li&gt; The &lt;code&gt;Handler&lt;/code&gt; in &lt;code&gt;loopback.ts&lt;/code&gt; catches this &lt;em&gt;one&lt;/em&gt; request, grabs the authorization &lt;code&gt;code&lt;/code&gt; from the URL, and immediately shuts the server down.&lt;/li&gt;
&lt;/ol&gt;
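&lt;p&gt;The four steps above can be sketched in a few lines. This is a Python stand-in for the TypeScript in &lt;code&gt;loopback.ts&lt;/code&gt;, and every name in it is mine, but the mechanics are the same: one throwaway server, one request, then shutdown:&lt;/p&gt;

```python
import threading
import urllib.parse
from http.server import BaseHTTPRequestHandler, HTTPServer

def wait_for_auth_code():
    """Start a one-shot loopback server; returns (port, result dict, server thread)."""
    result = {}

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Grab the authorization code from the single OAuth redirect.
            query = urllib.parse.urlparse(self.path).query
            result["code"] = urllib.parse.parse_qs(query).get("code", [""])[0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"Signed in. You may close this tab.")

        def log_message(self, *args):
            pass  # keep the demo quiet

    server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 asks the OS for a free port
    port = server.server_address[1]
    thread = threading.Thread(target=server.handle_request)  # serve exactly one request
    thread.start()
    return port, result, thread
```

&lt;p&gt;The OAuth &lt;code&gt;redirect_uri&lt;/code&gt; would point at that &lt;code&gt;127.0.0.1&lt;/code&gt; port; once the single redirect lands, the server is gone and the code is exchanged for tokens.&lt;/p&gt;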

&lt;p&gt;This is why it's secure and allowed. It's not a public tunnel. It's a "key exchange" that proves you are the physical user on that &lt;em&gt;exact&lt;/em&gt; machine. The &lt;code&gt;refreshToken&lt;/code&gt; it gets is then stored securely in VS Code's native &lt;code&gt;SecretStorage&lt;/code&gt; via &lt;code&gt;src/auth/storage.ts&lt;/code&gt;.&lt;/p&gt;




&lt;h2&gt;
  
  
  The "Connection": It's Not SSH, It's an API
&lt;/h2&gt;

&lt;p&gt;So, my hack used SSH. How do they connect?&lt;/p&gt;

&lt;p&gt;The answer is in &lt;code&gt;src/colab/client.ts&lt;/code&gt;. The extension doesn't use SSH at all. It's all HTTPS.&lt;/p&gt;

&lt;p&gt;When you click "New Colab Server," the &lt;code&gt;AssignmentManager&lt;/code&gt; (&lt;code&gt;src/jupyter/assignments.ts&lt;/code&gt;) calls the &lt;code&gt;ColabClient&lt;/code&gt;. This client makes an authenticated &lt;code&gt;POST&lt;/code&gt; request to an internal Google API: &lt;code&gt;/tun/m/assign&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This is the real "front door." Google's backend verifies your &lt;code&gt;accessToken&lt;/code&gt; (your "ID badge" from the auth step) and sends back a JSON object. The &lt;code&gt;src/colab/api.ts&lt;/code&gt; file defines the schema for this, and it's the holy grail: &lt;strong&gt;&lt;code&gt;RuntimeProxyInfo&lt;/code&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This &lt;code&gt;RuntimeProxyInfo&lt;/code&gt; object is the "key to the runtime" I was missing all those years ago. It contains:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; A unique, temporary &lt;code&gt;url&lt;/code&gt; (like &lt;code&gt;https_a-b-c.colab.googleusercontent.com&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; A separate, temporary &lt;code&gt;token&lt;/code&gt; for that &lt;em&gt;specific&lt;/em&gt; runtime.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;When the Jupyter extension tries to run a cell, the &lt;code&gt;colabProxyFetch&lt;/code&gt; function (&lt;code&gt;src/jupyter/assignments.ts&lt;/code&gt;) intercepts the request and injects this new token into a custom HTTP header: &lt;code&gt;X-Colab-Runtime-Proxy-Token&lt;/code&gt;.&lt;/p&gt;
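&lt;p&gt;In Python terms (a stand-in for the TypeScript &lt;code&gt;colabProxyFetch&lt;/code&gt;; the helper name is mine), the interception amounts to stamping one header onto every outgoing request:&lt;/p&gt;

```python
import urllib.request

def proxy_request(runtime_url, runtime_token):
    """Build a request to the runtime proxy with the per-runtime token attached."""
    req = urllib.request.Request(runtime_url)
    # The custom header the Colab proxy checks before forwarding to the kernel.
    req.add_header("X-Colab-Runtime-Proxy-Token", runtime_token)
    return req
```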

&lt;p&gt;We're not hacking into a VM's Jupyter process. We're an authenticated guest talking to a first-party, managed API proxy.&lt;/p&gt;




&lt;h2&gt;
  
  
  The "Aha!" Bonus Finds
&lt;/h2&gt;

&lt;p&gt;I kept digging and found two more gems that show the level of thought that went into this.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The "Keep-Alive" Hack is Gone.
&lt;/h3&gt;

&lt;p&gt;I used to run &lt;code&gt;while True: time.sleep(60)&lt;/code&gt; in a cell. It was dumb. In &lt;code&gt;src/colab/keep-alive.ts&lt;/code&gt;, they have a &lt;code&gt;ServerKeepAliveController&lt;/code&gt;. It runs a background task every 5 minutes that just calls &lt;code&gt;ColabClient.sendKeepAlive&lt;/code&gt;. This is a simple API ping to &lt;code&gt;/tun/m/.../keep-alive/&lt;/code&gt; that resets the idle timer on Google's backend. Clean.&lt;/p&gt;

&lt;p&gt;If you &lt;em&gt;are&lt;/em&gt; idle, it even pops the "Server is idle" notification in VS Code, and if you click "Cancel" (as in, "cancel the disconnect"), it keeps the pings going.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The "LSP Fix" is Brilliant.
&lt;/h3&gt;

&lt;p&gt;This is my favorite part. The Python Language Server (Pylance) has no idea what &lt;code&gt;!pip install&lt;/code&gt; or &lt;code&gt;%matplotlib&lt;/code&gt; means, so it draws red squiggles under them.&lt;/p&gt;

&lt;p&gt;So, did the Colab team write their own Language Server? Nope.&lt;/p&gt;

&lt;p&gt;In &lt;code&gt;src/lsp/middleware.ts&lt;/code&gt;, they register "LSP middleware." This is code that sits between the Python LSP and your editor. The function &lt;code&gt;filterNonIPythonDiagnostics&lt;/code&gt; intercepts &lt;em&gt;all&lt;/em&gt; diagnostics (errors) &lt;em&gt;before&lt;/em&gt; they're rendered.&lt;/p&gt;

&lt;p&gt;It reads the text of the line with the error. If that line starts with &lt;code&gt;!&lt;/code&gt; or &lt;code&gt;%&lt;/code&gt;, it just... throws the error away.&lt;/p&gt;

&lt;p&gt;It's a simple, perfect, and invisible fix.&lt;/p&gt;
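&lt;p&gt;The same trick is easy to reconstruct in Python (the real middleware is TypeScript; the data shapes here are simplified stand-ins for LSP diagnostics):&lt;/p&gt;

```python
def filter_non_ipython_diagnostics(diagnostics, source_lines):
    """Drop diagnostics that point at IPython magic lines ('!' or '%'), keep the rest."""
    kept = []
    for diag in diagnostics:
        text = source_lines[diag["line"]].lstrip()
        if text.startswith(("!", "%")):
            continue  # a shell or magic command: the language server's complaint is bogus
        kept.append(diag)
    return kept
```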




&lt;h2&gt;
  
  
  The Verdict
&lt;/h2&gt;

&lt;p&gt;So, no. Google didn't just &lt;em&gt;allow&lt;/em&gt; the tunnel hack. They built the real, secure, and robust system we were all trying to build ourselves.&lt;/p&gt;

&lt;p&gt;I was banned for trying to pick the lock. The official team built a proper front door with an authenticated doorman and gave us all a key. It's the exact workflow I always wanted, and I don't even have to risk my account for it. Well done, team.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>This is a submission for the Google AI Studio Multimodal Challenge</title>
      <dc:creator>Nishanth Abimanyu</dc:creator>
      <pubDate>Sun, 14 Sep 2025 14:45:43 +0000</pubDate>
      <link>https://dev.to/nishanth_abimanyu_001/this-is-a-submission-for-the-google-ai-studio-multimodal-challenge-mkc</link>
      <guid>https://dev.to/nishanth_abimanyu_001/this-is-a-submission-for-the-google-ai-studio-multimodal-challenge-mkc</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtopvqgwxaz4v5emizg6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtopvqgwxaz4v5emizg6.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Symmetria - AI Rangoli Architect&lt;/strong&gt; represents the evolution of my Smart India Hackathon (SIH) project, now transformed through Google AI Studio's multimodal capabilities into a sophisticated platform that celebrates the rich tradition of Indian Rangoli art. This application serves as both a creative studio and educational portal, seamlessly blending ancient artistic traditions with cutting-edge AI technology to preserve, analyze, and reimagine Rangoli designs through multiple sensory experiences.&lt;/p&gt;

&lt;p&gt;The platform addresses the critical challenge of cultural preservation while making traditional art forms accessible to contemporary audiences through interactive technology. By leveraging Gemini's multimodal capabilities, Symmetria bridges generations and geographies, ensuring that this beautiful art form continues to evolve and inspire.&lt;/p&gt;

&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Live Application&lt;/strong&gt;: &lt;a href="https://ai.studio/apps/drive/117zd5PqCPvVUbytvqudcrI15Q69SOiMa" rel="noopener noreferrer"&gt;Symmetria AI Rangoli Architect on Cloud Run&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Video Demonstration&lt;/strong&gt;: From SIH Project to Multimodal Experience  &lt;iframe src="https://www.youtube.com/embed/7x8opOm4ULo"&gt;
  &lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: Video demonstration included to showcase features that may use free-tier APIs during judging&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How I Used Google AI Studio
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Transforming an SIH Concept into Multimodal Reality
&lt;/h3&gt;

&lt;p&gt;My original Smart India Hackathon project focused on digital preservation of Rangoli patterns. With Google AI Studio, I've transformed it into a comprehensive multimodal experience:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multimodal Integration Strategy:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Gemini 2.5 Pro Vision&lt;/strong&gt;: Enhanced pattern analysis from basic recognition to sophisticated mathematical and cultural understanding&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Live API Capabilities&lt;/strong&gt;: Added real-time voice interaction and dynamic content generation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal Content Generation&lt;/strong&gt;: Created seamless transitions between visual, auditory, and textual experiences&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Technical Enhancements:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Upgraded from static pattern database to AI-generated designs&lt;/li&gt;
&lt;li&gt;Transformed basic analysis into deep mathematical symmetry proofs&lt;/li&gt;
&lt;li&gt;Added cultural intelligence and historical context understanding&lt;/li&gt;
&lt;li&gt;Implemented real-time voice control and interactive storytelling&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Multimodal Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;AI-Powered Creation &amp;amp; Remixing&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Text-to-Rangoli Generation&lt;/strong&gt;: Users describe designs ("peacock motifs with 4-fold symmetry") → AI generates entirely new Rangoli images&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Style Transformation&lt;/strong&gt;: Upload existing designs → remix with new materials, colors, or styles through natural language commands&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cultural Authenticity Preservation&lt;/strong&gt;: AI maintains traditional mathematical principles while enabling creative experimentation&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Deep Multimodal Analysis&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mathematical Symmetry Prover&lt;/strong&gt;: AI performs formal group theory analysis, identifying symmetry groups (D4, etc.) with step-by-step visual proofs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cultural Context Engine&lt;/strong&gt;: Analyzes designs for regional origins, festival associations, and symbolic meanings&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pattern Complexity Assessment&lt;/strong&gt;: Evaluates designs using advanced mathematical metrics rooted in traditional principles&lt;/li&gt;
&lt;/ul&gt;
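&lt;p&gt;As a toy illustration of the symmetry checks described above, here is how 4-fold rotational symmetry can be tested on a gridded design (a simplification written for this post; the app's actual analysis is performed by Gemini):&lt;/p&gt;

```python
def has_fourfold_symmetry(grid):
    """True if a square grid is unchanged by a 90-degree rotation (the C4 part of D4)."""
    rotated = [list(row) for row in zip(*grid[::-1])]  # rotate 90 degrees clockwise
    return rotated == [list(row) for row in grid]
```

&lt;p&gt;A full D4 test would also check the mirror axes; rotation invariance alone identifies the cyclic subgroup C4.&lt;/p&gt;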

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Artistic Interpretation &amp;amp; Storytelling&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Interactive Rangoli Tales&lt;/strong&gt;: Choose-your-own-adventure stories generated from pattern analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual-to-Audio Translation&lt;/strong&gt;: Converts geometric patterns into musical compositions based on symmetry and complexity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Historical Narrative Generation&lt;/strong&gt;: Creates plausible historical backgrounds and cultural stories for each design&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Interactive Learning System&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI Cultural Expert&lt;/strong&gt;: Chat interface for exploring Rangoli history, techniques, and significance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Meditative Creation&lt;/strong&gt;: Animated drawing experiences that teach traditional techniques through peaceful observation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adaptive Learning Pathways&lt;/strong&gt;: Personalized educational journeys based on user interaction patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cultural Significance &amp;amp; Impact
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Preservation Through Innovation
&lt;/h3&gt;

&lt;p&gt;Symmetria addresses the urgent need to preserve intangible cultural heritage by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Digitizing traditional knowledge and techniques&lt;/li&gt;
&lt;li&gt;Making ancient art forms accessible to global audiences&lt;/li&gt;
&lt;li&gt;Ensuring continuity between generations through technology&lt;/li&gt;
&lt;li&gt;Celebrating regional diversity in Rangoli traditions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Educational Value
&lt;/h3&gt;

&lt;p&gt;The platform serves as an educational tool that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Teaches mathematical concepts through visual art&lt;/li&gt;
&lt;li&gt;Preserves cultural knowledge through engaging experiences&lt;/li&gt;
&lt;li&gt;Makes traditional art relevant to digital-native generations&lt;/li&gt;
&lt;li&gt;Provides accessibility through multiple learning modalities&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Technical Achievement
&lt;/h3&gt;

&lt;p&gt;From SIH project to multimodal platform, this evolution demonstrates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Successful integration of traditional knowledge with modern AI&lt;/li&gt;
&lt;li&gt;Effective use of multimodal capabilities for cultural preservation&lt;/li&gt;
&lt;li&gt;Scalable architecture for cultural heritage applications&lt;/li&gt;
&lt;li&gt;Innovative approach to intangible cultural heritage conservation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implementation Highlights
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;User Experience Design:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Sophisticated dark theme with traditional Indian color accents (deep red, black, amber)&lt;/li&gt;
&lt;li&gt;Responsive design adapting from desktop sidebar to mobile bottom navigation&lt;/li&gt;
&lt;li&gt;Custom animations reflecting traditional Rangoli creation rhythms&lt;/li&gt;
&lt;li&gt;Intuitive interface balancing cultural authenticity with modern usability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical Architecture:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud-native deployment on Google Cloud Run&lt;/li&gt;
&lt;li&gt;Multimodal API integration handling image, text, and voice simultaneously&lt;/li&gt;
&lt;li&gt;Real-time processing for immediate visual and auditory feedback&lt;/li&gt;
&lt;li&gt;Scalable design supporting multiple concurrent cultural experiences&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project represents the successful evolution of an SIH concept into a sophisticated multimodal platform, demonstrating how traditional technology projects can be transformed through Google AI Studio's advanced capabilities to serve cultural preservation and educational needs.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This submission builds upon my original Smart India Hackathon project, significantly enhanced through Google AI Studio's multimodal capabilities to create a comprehensive cultural preservation and education platform.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Prototyped by Nishanth, bridging traditional art with modern AI technology.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>googleaichallenge</category>
      <category>ai</category>
      <category>gemini</category>
    </item>
    <item>
      <title>The Simple 4-Step Guide to DHCP Server Configuration for Multiple VLANs</title>
      <dc:creator>Nishanth Abimanyu</dc:creator>
      <pubDate>Tue, 09 Sep 2025 14:29:57 +0000</pubDate>
      <link>https://dev.to/nishanth_abimanyu_001/the-simple-4-step-guide-to-dhcp-server-configuration-for-multiple-vlans-2c4e</link>
      <guid>https://dev.to/nishanth_abimanyu_001/the-simple-4-step-guide-to-dhcp-server-configuration-for-multiple-vlans-2c4e</guid>
      <description>&lt;p&gt;I’ll be honest — it wasn’t easy to put everything into words. I knew the concepts in my head, but when I tried to explain them line by line, the result was confusing…..&lt;/p&gt;

&lt;p&gt;I’d write five lines of configs and then have to backtrack because I couldn’t even follow my own article a week later.&lt;/p&gt;

&lt;p&gt;That’s when I thought…&lt;br&gt;
Why not add flowcharts? If I’m struggling to explain it, chances are you’ll struggle to follow.&lt;/p&gt;

&lt;p&gt;So instead of drowning in paragraphs, I broke it down visually. I created four flowcharts, just enough so you can get a good grasp of how the network fits together without overcomplicating things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How I Set the Stage….&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswf16x19lc3v62yfurf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fswf16x19lc3v62yfurf7.png" alt=" " width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first step is a blank Packet Tracer canvas.&lt;/p&gt;


&lt;p&gt;The setup includes four PCs, one 2960 switch, and one 2911 router, all connected with straight-through cables. The connections are as follows….&lt;/p&gt;


&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xcttelzhc69o0sf1ot8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2xcttelzhc69o0sf1ot8.png" alt=" " width="800" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first green lights are your first green flags. If anything stays amber or red, don’t hope it fixes itself. It won’t. Check the cable type, port status, and speed/duplex settings. Fix it now, not later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;VLAN Configuration on the Switch&lt;/strong&gt;&lt;br&gt;
Leaving all four PCs in the default VLAN is like putting your girlfriend and your ex-girlfriend in the same room and then wondering why they’re angry.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5uk5cann6qvuvnyy4s5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq5uk5cann6qvuvnyy4s5.png" alt=" " width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Broadcast storms aside, you lose segmentation, you lose control, and troubleshooting becomes vibes-based. So I split the network into two crisp lanes…..&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Switch&amp;gt; enable
Switch# configure terminal

! Create VLANs
Switch(config)# vlan 10
Switch(config-vlan)# name SALES
Switch(config-vlan)# exit

Switch(config)# vlan 20
Switch(config-vlan)# name ENGINEERING
Switch(config-vlan)# exit

! Assign access ports to VLANs
Switch(config)# interface range fastethernet 0/1-2
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 10
Switch(config-if-range)# exit

Switch(config)# interface range fastethernet 0/3-4
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 20
Switch(config-if-range)# exit

! Configure trunk port
Switch(config)# interface fastethernet 0/24
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan 10,20
Switch(config-if)# end

! Save configuration
Switch# copy running-config startup-config
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Access ports are the “rooms” (Fa0/1–2 in VLAN 10, Fa0/3–4 in VLAN 20). Fa0/24 is the “corridor” — a trunk that carries tagged traffic from both rooms to the router.&lt;/p&gt;

&lt;p&gt;My flowchart here focuses on the hierarchy: create VLANs → assign access ports → make the trunk → allow only 10,20. Minimal surface area, maximum clarity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick mental check I always do….&lt;/strong&gt;&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Switch# show vlan brief
Switch# show interfaces trunk
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If a PC in Sales can ping a PC in Engineering without the router, I’ve messed up the VLANs.&lt;br&gt;
If the router sees nothing on a subinterface, the trunk isn’t tagging (or the allowed VLANs are wrong).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Router-on-a-Stick — think of it as the translator in the middle&lt;/strong&gt;&lt;br&gt;
The router is where the two worlds meet. Physically it’s one link; logically it’s multiple lanes. That’s why subinterfaces exist. I picture the router like a bouncer with two counters….&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplxml3dzjyus4sdf1fyv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplxml3dzjyus4sdf1fyv.png" alt=" " width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The key command here is encapsulation dot1q, which tags each subinterface with its VLAN ID.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Router&amp;gt; enable
Router# configure terminal

! Configure subinterface for VLAN 10
Router(config)# interface gigabitethernet 0/0/0.10
Router(config-subif)# encapsulation dot1q 10
Router(config-subif)# ip address 192.168.10.1 255.255.255.0
Router(config-subif)# exit

! Configure subinterface for VLAN 20
Router(config)# interface gigabitethernet 0/0/0.20
Router(config-subif)# encapsulation dot1q 20
Router(config-subif)# ip address 192.168.20.1 255.255.255.0
Router(config-subif)# exit

! Enable physical interface
Router(config)# interface gigabitethernet 0/0/0
Router(config-if)# no shutdown
Router(config-if)# end

! Verify configuration
Router# show ip interface brief
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Without it, the router can’t tell which packet belongs to which VLAN. The flowchart here shows the relationship…..&lt;/p&gt;

&lt;p&gt;Physical interface up → two subinterfaces → each with its own tag and gateway IP. Once I see both subinterfaces “up/up” in the status, I know inter-VLAN routing is ready.&lt;/p&gt;

&lt;p&gt;Every clean network I’ve seen follows the rule “one VLAN, one subnet, one gateway.” This lab does exactly that with 192.168.10.0/24 and 192.168.20.0/24. Textbook good.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Make the router do the boring work (DHCP)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Static IPs are cute for a lab of one. For four (and growing), it’s just asking for conflicts. So I make the router the DHCP server for both VLANs. Two pools…&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3swufa07b2n9v6hirin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3swufa07b2n9v6hirin.png" alt=" " width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I always exclude the gateway addresses first so the pools don’t accidentally hand them out.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Router# configure terminal

! Exclude gateway addresses from DHCP pools
Router(config)# ip dhcp excluded-address 192.168.10.1
Router(config)# ip dhcp excluded-address 192.168.20.1

! Create DHCP pool for VLAN 10 (SALES)
Router(config)# ip dhcp pool SALES_POOL
Router(dhcp-config)# network 192.168.10.0 255.255.255.0
Router(dhcp-config)# default-router 192.168.10.1
Router(dhcp-config)# dns-server 8.8.8.8
Router(dhcp-config)# lease 7
Router(dhcp-config)# exit

! Create DHCP pool for VLAN 20 (ENGINEERING)
Router(config)# ip dhcp pool ENGINEERING_POOL
Router(dhcp-config)# network 192.168.20.0 255.255.255.0
Router(dhcp-config)# default-router 192.168.20.1
Router(dhcp-config)# dns-server 8.8.8.8
Router(dhcp-config)# lease 3
Router(dhcp-config)# end

! Verify DHCP configuration
Router# show ip dhcp pool
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And yes, I like giving Engineering a shorter lease — devices churn more, VM labs spin up/down, and shorter leases reduce stale bindings. It’s a tiny choice that makes long-term ops nicer.&lt;/p&gt;

&lt;p&gt;Add a second DNS (like 8.8.4.4). If the first resolver flakes, you won’t lose name resolution and assume the whole network is down.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Helper address” — when you need it, when you don’t&lt;/strong&gt;&lt;br&gt;
This is a common trap. DHCP is broadcast, and broadcasts don’t cross VLANs. So usually you put ip helper-address &amp;lt;dhcp-server-ip&amp;gt; on each L3 interface to relay the request as unicast to the DHCP server.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;Router# configure terminal

! Configure DHCP relay on subinterfaces
! (example: point at a dedicated DHCP server, here 192.168.30.10
!  in a hypothetical IT/Admin VLAN 30, not at the router’s own IPs)
Router(config)# interface gigabitethernet 0/0/0.10
Router(config-subif)# ip helper-address 192.168.30.10
Router(config-subif)# exit

Router(config)# interface gigabitethernet 0/0/0.20
Router(config-subif)# ip helper-address 192.168.30.10
Router(config-subif)# end

! Verify helper addresses
Router# show running-config | include helper-address
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;But here’s the catch….&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzpgf2v970o89r70pmai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxzpgf2v970o89r70pmai.png" alt=" " width="800" height="728"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If the router itself is the DHCP server, it already hears the broadcast on each subinterface. There’s nowhere else to forward it. So helper addresses are optional here (not needed).&lt;/p&gt;

&lt;p&gt;I still keep a flow segment explaining helper logic, because the moment you move DHCP to a dedicated server (say, an IT/Admin VLAN 30), you’ll need it on each user VLAN subinterface pointing to that server’s IP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The four-step DHCP dance (watch it happen)&lt;/strong&gt;&lt;br&gt;
This is my favorite diagram because it gives the protocol a heartbeat….&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7w92xww00eybrhapir0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk7w92xww00eybrhapir0.png" alt=" " width="800" height="702"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to be extra sure name resolution works, don’t stop at ping 8.8.8.8. Try ping google.com. If numbers work but names don’t, it’s a DNS issue, not connectivity.&lt;/p&gt;
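&lt;p&gt;The four steps in the diagram (Discover → Offer → Request → Acknowledge) can be sketched as a toy exchange in Python. This is illustration only; real DHCP speaks binary packets over UDP ports 67/68:&lt;/p&gt;

```python
# Toy walk-through of the DHCP DORA handshake for one client.
# Illustration only: real DHCP uses binary packets on UDP 67/68.

def dora_exchange(free_pool):
    """Simulate Discover -> Offer -> Request -> Acknowledge.
    Returns the leased address and a transcript of the four steps."""
    offer = free_pool.pop(0)  # server picks the first free address
    transcript = [
        "DISCOVER: client broadcasts, looking for any DHCP server",
        f"OFFER: server proposes {offer}",
        f"REQUEST: client formally asks to use {offer}",
        f"ACK: server confirms the lease of {offer}",
    ]
    return offer, transcript

# Addresses match the VLAN 10 pool from this lab.
lease, log = dora_exchange(["192.168.10.2", "192.168.10.3"])
```

&lt;p&gt;The same four messages are what you see when you switch Packet Tracer to Simulation mode and watch a PC request its address.&lt;/p&gt;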

&lt;p&gt;&lt;strong&gt;Client side — the human in the loop&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;On each PC, I head to Desktop → IP Configuration → DHCP. Then I wait 10–20 seconds. Packet Tracer sometimes takes a breath here. If it stalls, I do a quick release/renew from the PC’s command prompt. No drama.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z0oaoezmytq8hqqo0g5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8z0oaoezmytq8hqqo0g5.png" alt=" " width="800" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Expected first four leases:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;PC0 → 192.168.10.2
PC1 → 192.168.10.3
PC2 → 192.168.20.2
PC3 → 192.168.20.3
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If any PC lands in 169.254.x.x (APIPA), it never heard from DHCP. That usually means VLAN/trunk trouble, or the pool doesn’t match the subnet.&lt;/p&gt;
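&lt;p&gt;A quick sanity check on a lease (is it in the expected pool’s subnet, or did the client fall back to APIPA?) can be done with Python’s standard ipaddress module. This is a helper sketch, not part of the lab itself:&lt;/p&gt;

```python
import ipaddress

# Sanity-check a DHCP lease: is it inside the expected pool's subnet,
# or did the client fall back to APIPA (169.254.0.0/16)?
# Helper sketch only; not something you run inside Packet Tracer.

APIPA = ipaddress.ip_network("169.254.0.0/16")

def lease_status(ip, expected_subnet):
    addr = ipaddress.ip_address(ip)
    if addr in APIPA:
        return "APIPA: client never heard from DHCP"
    if addr in ipaddress.ip_network(expected_subnet):
        return "OK"
    return "wrong subnet: VLAN or pool mismatch"

# The Sales PCs should land in 192.168.10.0/24:
# lease_status("192.168.10.2", "192.168.10.0/24") -> "OK"
```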

&lt;p&gt;&lt;strong&gt;The verification ritual (don’t skip this)&lt;/strong&gt;&lt;br&gt;
I treat verification like a checklist so I don’t get emotionally attached to “it should work.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x353b2drbtclvnnbs5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3x353b2drbtclvnnbs5d.png" alt=" " width="800" height="177"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the router — confirm subinterfaces are up, pools exist, and bindings are being issued.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;! Check DHCP pools
Router# show ip dhcp pool

! View current leases
Router# show ip dhcp binding

! Verify interface status
Router# show ip interface brief
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;On the switch — VLAN membership is correct, the trunk is actually trunking, and the allowed VLANs include 10 and 20.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;! Verify VLAN configuration
Switch# show vlan brief

! Check trunk status
Switch# show interfaces trunk
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;On the PCs — ipconfig /all shows the correct IP, mask, gateway, and DNS.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Check IP configuration
ipconfig /all

# Test connectivity
ping 192.168.10.1    # Gateway
ping 192.168.20.2    # Cross-VLAN
ping 8.8.8.8         # Internet DNS
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Green. Green. Green. Only then do I breathe.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For Further Troubleshooting and Reference&lt;/strong&gt;&lt;br&gt;
GitHub: nishanthabimanyu/Cisco-Packet-Tracer-Workbook-&lt;/p&gt;


&lt;p&gt;If anything fails, go back and check the configuration step by step. Common issues include missing ip helper-address commands, incorrect VLAN assignments on switch ports, or firewall rules blocking traffic.&lt;/p&gt;

&lt;p&gt;And that’s it! You have successfully built a multi-VLAN network with DHCP services.&lt;/p&gt;

&lt;p&gt;The entire process took me about 30 minutes, but the learning will last much longer. Remember to save your configuration with copy running-config startup-config on both switch and router!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>gpt3</category>
      <category>startup</category>
    </item>
    <item>
      <title>This IIT-Backed Startup Can Run Llama 2 on CPUs… No Need for GPUs to Run AI Anymore</title>
      <dc:creator>Nishanth Abimanyu</dc:creator>
      <pubDate>Mon, 09 Jun 2025 07:59:52 +0000</pubDate>
      <link>https://dev.to/nishanth_abimanyu_001/this-iit-backed-startup-can-run-llama-2-on-cpus-no-need-gpus-to-run-ai-anymore-3im2</link>
      <guid>https://dev.to/nishanth_abimanyu_001/this-iit-backed-startup-can-run-llama-2-on-cpus-no-need-gpus-to-run-ai-anymore-3im2</guid>
      <description>&lt;p&gt;I was just opening LinkedIn casually — that usual scroll before sleep, and suddenly one post caught my eye.&lt;/p&gt;

&lt;p&gt;Runs LLMs like T5 and Bloom-7B without GPU, just on CPU.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekqzmqvd7nazqgsqdfwu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fekqzmqvd7nazqgsqdfwu.png" alt="Image description" width="800" height="580"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I blinked twice. At first, I thought it was just another startup doing the same “no GPU required” drama.&lt;/p&gt;

&lt;p&gt;We’ve all seen those lines.. “Edge-friendly, lightweight, runs offline...” and then boom — nothing new.&lt;/p&gt;

&lt;p&gt;But this one felt different. The name? Kompact AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Even Is Kompact AI? (And Why It Feels Legit)
&lt;/h2&gt;

&lt;p&gt;From what I gathered, Kompact AI is building something called ICAN — a Common AI-Language Runtime. Basically, a system that supports 10+ programming languages and makes AI models run efficiently on CPUs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here’s where it gets real….&lt;/strong&gt;&lt;br&gt;
They’re not just talking about basic models. They’re saying they can run inference, fine-tuning, and even light training of models like T5 and Bloom-7B — on CPUs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fia0cbdhtnsbbp3hz3vte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fia0cbdhtnsbbp3hz3vte.png" alt="Image description" width="800" height="769"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Exactly Is AI, and Why Does It Need So Much Power?
&lt;/h2&gt;

&lt;p&gt;At its core, AI—particularly deep learning—relies on neural networks that perform massive amounts of matrix computations. These computations are needed for two key processes….&lt;/p&gt;

&lt;h2&gt;
  
  
  Training
&lt;/h2&gt;

&lt;p&gt;This is when a model learns from data. It adjusts itself based on feedback and gets better with time. &lt;br&gt;
It’s computationally heavy and typically requires specialized hardware like GPUs or TPUs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Inference
&lt;/h2&gt;

&lt;p&gt;Once the model is trained, we use it to make predictions. Inference requires less computational power than training, but it’s still demanding, especially for large models or real-time tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  So, What Makes Kompact AI Different from the Rest?
&lt;/h2&gt;

&lt;p&gt;Here’s where Kompact AI stands out: it claims to run heavy AI models like T5 and Bloom-7B on CPUs—no GPUs required. But wait, isn’t that supposed to be impossible?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CPUs&lt;/strong&gt;&lt;br&gt;
These are general-purpose processors designed for a wide range of tasks. They’re great at complex, sequential logic and have fewer cores (usually 4–16) than GPUs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GPUs&lt;/strong&gt;&lt;br&gt;
Designed for parallel tasks, GPUs have thousands of smaller cores optimized for repetitive, matrix-heavy operations, making them ideal for AI.&lt;br&gt;
So how does Kompact AI make running AI models like T5 on a CPU even possible? What kind of tech is happening behind the scenes?&lt;/p&gt;

&lt;h2&gt;
  
  
  What Could Be Going On Behind the Scenes? Let’s Break It Down.
&lt;/h2&gt;

&lt;p&gt;There are three main factors we need to address to run AI effectively on CPUs.....&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Computational Requirements&lt;/strong&gt;: AI models need substantial computing power &lt;br&gt;
for matrix operations. Can CPUs handle these heavy tasks?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hardware Capabilities&lt;/strong&gt;: CPUs have strengths like multi-core processing and large caches, but they lack GPU-level parallelism. How does Kompact AI manage these differences?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Software Optimization&lt;/strong&gt;: The way the software is written makes a huge difference in how well AI models run on CPUs. What kind of optimizations could make it work?&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s dive deeper into these aspects and try to understand what’s going on.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do We Simplify AI to Make It Run on a CPU?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Computational Requirements: AI models, especially large language models (LLMs) like T5 or Bloom-7B, have billions of parameters. Running these models, especially for tasks like training, can be computationally intensive. But, is there a way to make these tasks less demanding?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Inference&lt;/strong&gt;: This is when we use a pre-trained model to make predictions. It’s much less computationally demanding than training.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fine-tuning&lt;/strong&gt;: This is when you adjust an already trained model on new data. It’s more demanding than inference but still much lighter than full-scale training.&lt;br&gt;
Could Kompact AI focus on these lighter tasks—like inference and fine-tuning—so that it avoids the heavy computational cost of full-scale training?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hardware Capabilities&lt;/strong&gt;: CPUs, while not as parallel as GPUs, are still quite powerful in their own right. Could Kompact AI be taking advantage of things like….&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;Multiple cores that can handle tasks simultaneously&lt;/li&gt;
&lt;li&gt;SIMD (Single Instruction, Multiple Data), allowing the CPU to process multiple pieces of data within each core&lt;/li&gt;
&lt;li&gt;Large caches that reduce memory latency&lt;/li&gt;
&lt;/ul&gt;
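&lt;p&gt;The “split work across cores” idea can be sketched with a row-blocked matrix–vector product. This is purely illustrative of the work division, not of Kompact AI’s actual runtime, which would use native threads and SIMD rather than Python:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor

# Row-blocked matrix-vector product split across worker threads.
# Illustrates the work-division idea only; a real CPU runtime would
# use native threads and SIMD instructions, not Python threads.

def matvec_block(rows, v):
    """Multiply a block of matrix rows by the vector v."""
    return [sum(a * b for a, b in zip(row, v)) for row in rows]

def parallel_matvec(M, v, workers=2):
    """Split M's rows into blocks and process them concurrently."""
    chunk = (len(M) + workers - 1) // workers
    blocks = [M[i:i + chunk] for i in range(0, len(M), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(matvec_block, blocks, [v] * len(blocks))
    return [y for part in parts for y in part]

M = [[1, 2], [3, 4], [5, 6], [7, 8]]
v = [1, 1]
# parallel_matvec(M, v) -> [3, 7, 11, 15]
```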

&lt;h2&gt;
  
  
  How does Kompact AI tap into these strengths of CPUs to handle AI workloads efficiently?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Software Optimization&lt;/strong&gt;: Most AI frameworks (like TensorFlow or PyTorch) are designed to take advantage of GPUs. On CPUs, this often leads to underutilization of resources.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Could Kompact AI be doing something radically different with its software optimization to make the most of CPU power?&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How Do We Rebuild the AI Solution for CPUs?
&lt;/h2&gt;

&lt;p&gt;What if Kompact AI's approach works by rethinking how AI models are structured and executed? Could it work like this?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Model Optimization&lt;/strong&gt;: By quantizing, pruning, and distilling large models, could Kompact AI reduce the computational load, making it feasible for CPU-based systems to handle tasks like inference or fine-tuning efficiently?&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Leverage CPU Strengths:&lt;/strong&gt; Could Kompact AI split computations across multiple CPU cores and use SIMD instructions to handle matrix operations without needing GPU-like parallelism?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimized Runtime&lt;/strong&gt;: What if Kompact AI uses a custom runtime environment (maybe something like ICAN) tailored to CPUs, optimizing code execution and minimizing resource usage?&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Does this mean we can make large models like Bloom-7B run smoothly on a CPU after all?&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does It All Come Together to Make AI Work on CPUs?
&lt;/h2&gt;

&lt;p&gt;By combining model optimization, multi-core parallelism, SIMD, and an optimized runtime, how does Kompact AI make running large AI models like Bloom-7B on CPUs actually work?&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Could it be that Kompact AI works like this?&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Quantized and pruned versions of models that reduce their size, making them manageable on a CPU.&lt;/li&gt;
&lt;li&gt;Parallel execution across multiple CPU cores to handle large models by distributing the workload efficiently.&lt;/li&gt;
&lt;li&gt;An ICAN runtime that keeps overhead minimal, claimed to reach up to 3x the performance of standard CPU execution in frameworks like TensorFlow or PyTorch.&lt;/li&gt;
&lt;/ol&gt;
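&lt;p&gt;Point 1, quantization, can be sketched generically: map float weights to 8-bit integers with a single scale factor, then map them back. This is the textbook symmetric-quantization idea, not Kompact AI’s actual method:&lt;/p&gt;

```python
# Minimal symmetric int8 quantization sketch (the generic technique,
# not Kompact AI's actual implementation).

def quantize(weights):
    """Map floats to integers in [-127, 127] with one scale factor.
    Assumes at least one nonzero weight."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map the integers back to approximate float weights."""
    return [x * scale for x in q]

w = [0.5, -1.27, 0.0, 1.0]
q, scale = quantize(w)         # small integers, e.g. [50, -127, 0, 100]
w_back = dequantize(q, scale)  # close to the original weights
```

&lt;p&gt;Storing 8-bit integers instead of 32-bit floats cuts memory traffic by roughly 4x, which matters far more on cache-bound CPUs than on GPUs.&lt;/p&gt;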

&lt;p&gt;Is it possible that Kompact AI can bypass the need for GPUs and still deliver solid performance?&lt;/p&gt;

&lt;h2&gt;
  
  
  So, What’s the Big Picture?
&lt;/h2&gt;

&lt;p&gt;Kompact AI does something groundbreaking: it rethinks how AI models are structured and executed. Could this be the future of AI?&lt;/p&gt;

&lt;p&gt;Through techniques like quantization, pruning, and distillation, along with efficient multi-core parallelism and a custom runtime (ICAN), Kompact AI makes AI models run efficiently on CPUs—no GPUs required. Is this the kind of innovation that could democratize AI, making it more accessible for offline and edge devices, where specialized hardware like GPUs may not be available?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>gpt3</category>
      <category>startup</category>
    </item>
    <item>
      <title>hacking</title>
      <dc:creator>Nishanth Abimanyu</dc:creator>
      <pubDate>Mon, 09 Jun 2025 07:36:21 +0000</pubDate>
      <link>https://dev.to/nishanth_abimanyu_001/hacking-59d0</link>
      <guid>https://dev.to/nishanth_abimanyu_001/hacking-59d0</guid>
      <description></description>
    </item>
  </channel>
</rss>
