<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kernel Cero</title>
    <description>The latest articles on DEV Community by Kernel Cero (@kernelcero).</description>
    <link>https://dev.to/kernelcero</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3845606%2Ff4585803-ebe0-4bd7-b4bd-551c6c9f3369.png</url>
      <title>DEV Community: Kernel Cero</title>
      <link>https://dev.to/kernelcero</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kernelcero"/>
    <language>en</language>
    <item>
      <title>🤖 Building a Private, Local WhatsApp AI Assistant with Node.js &amp; Ollama</title>
      <dc:creator>Kernel Cero</dc:creator>
      <pubDate>Fri, 10 Apr 2026 23:45:07 +0000</pubDate>
      <link>https://dev.to/kernelcero/building-a-smart-whatsapp-assistant-with-nodejs-wppconnect-466c</link>
      <guid>https://dev.to/kernelcero/building-a-smart-whatsapp-assistant-with-nodejs-wppconnect-466c</guid>
      <description>&lt;p&gt;Hello, dev community! 👋 I’ve been working on a personal project lately: a WhatsApp AI bot that actually keeps track of conversations. No more "forgetful" bots, and best of all: it runs entirely on my own hardware! 🧠💻&lt;/p&gt;

&lt;p&gt;🛠️ The Tech Stack&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Runtime: Node.js 🟢

AI Engine: Ollama (Running Llama 3 / Mistral locally) 🦙

WhatsApp Interface: WPPConnect 📱

Database: SQLite for persistent conversation memory 🗄️

OS: Linux 🐧
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;🚀 The Journey&lt;/p&gt;

&lt;p&gt;The goal was to create an assistant that doesn't rely on external APIs like OpenAI. By combining WPPConnect with Ollama, I have full control over the data and the model.&lt;/p&gt;

&lt;p&gt;Here is the project structure:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;user@remote-server:~/whatsapp-bot$ ls
database.db        # Long-term memory (SQLite)
node_modules       # The heavy lifters
package.json       # Project DNA
server.js          # The brain connecting WPPConnect + Ollama
tokens/            # Session persistence (no need to re-scan the QR code)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;🔍 Key Features&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Local Intelligence: Using Ollama means zero latency from external servers and 100% privacy.

True Context: Instead of stateless replies, I use SQLite to feed the previous chat history back into Ollama. It remembers who you are! 🔄

Session Persistence: Thanks to the tokens folder, the bot stays logged in even after a server reboot.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
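The "True Context" feature above boils down to prepending the stored turns to every prompt before it reaches Ollama. A minimal sketch in plain JavaScript (the row shape and role names are illustrative, not the bot's actual SQLite schema):

```javascript
// Build the prompt sent to Ollama from remembered history.
// `history` stands in for rows pulled from SQLite, oldest first.
function buildPrompt(history, userMessage) {
  const transcript = history
    .map((turn) => (turn.role === 'user' ? 'User: ' : 'Assistant: ') + turn.text)
    .join('\n');
  return transcript + '\nUser: ' + userMessage + '\nAssistant:';
}

// Example: the bot "remembers" the user's name from an earlier turn.
const history = [
  { role: 'user', text: 'My name is Ana.' },
  { role: 'assistant', text: 'Nice to meet you, Ana!' },
];
console.log(buildPrompt(history, 'What is my name?'));
```

Passing the returned string as the prompt field of the /api/generate request gives the model the earlier turns; trimming history to the last N rows keeps the context window bounded.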

&lt;p&gt;💡 Quick Snippet (The Connection)&lt;/p&gt;

&lt;p&gt;Here is how I bridge the WhatsApp message to the local Ollama instance:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const wppconnect = require('@wppconnect-team/wppconnect');
const axios = require('axios'); // To talk to Ollama's local API

async function askOllama(prompt) {
    const response = await axios.post('http://localhost:11434/api/generate', {
        model: 'llama3',
        prompt: prompt,
        stream: false
    });
    return response.data.response;
}

wppconnect.create({ session: 'ai-session' })
    .then((client) =&amp;gt; {
        client.onMessage(async (message) =&amp;gt; {
            const aiReply = await askOllama(message.body);
            await client.sendText(message.from, aiReply);
        });
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;🚧 What's next?&lt;/p&gt;

&lt;p&gt;I'm working on "System Prompts" to give the bot a specific personality, and on improving SQLite query speed for massive chat histories.&lt;/p&gt;

&lt;p&gt;Are you running LLMs locally? I’d love to hear how you optimize Ollama performance for real-time chat! 👇&lt;/p&gt;

&lt;p&gt;#nodejs #ollama #ai #wppconnect #javascript #opensource #linux&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>node</category>
      <category>showdev</category>
    </item>
    <item>
      <title>🚀 Why I Ditched venv for uv: My Python Migration Journey on Linux Mint</title>
      <dc:creator>Kernel Cero</dc:creator>
      <pubDate>Sun, 29 Mar 2026 08:04:41 +0000</pubDate>
      <link>https://dev.to/kernelcero/why-i-ditched-venv-for-uv-my-python-migration-journey-on-linux-mint-334n</link>
      <guid>https://dev.to/kernelcero/why-i-ditched-venv-for-uv-my-python-migration-journey-on-linux-mint-334n</guid>
      <description>&lt;p&gt;Hello world, kernelcero here! 💻&lt;/p&gt;

&lt;p&gt;If you've been in the Python ecosystem for a while, you know the "dependency dance": creating a venv, activating it, waiting for pip to resolve conflicts, and ending up with a massive site-packages folder that eats your disk space. 🐢&lt;/p&gt;

&lt;p&gt;I recently decided to migrate my entire workflow to uv, and honestly? There is no going back.&lt;/p&gt;

&lt;p&gt;What is uv? 🤔&lt;/p&gt;

&lt;p&gt;Created by the team at Astral (the folks behind Ruff), uv is an extremely fast Python package installer and resolver written in Rust. It’s designed to replace pip, pip-tools, and virtualenv in one fell swoop.&lt;/p&gt;

&lt;p&gt;🔥 Why the switch was a no-brainer&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Blazing Fast ⚡: It’s 10x to 100x faster than pip. It uses a global cache, so if you've downloaded a library once, it's instantly available for your next project without re-downloading.

Built-in Python Management 🐍: No more messing with deadsnakes PPAs or apt for specific Python versions. Need Python 3.12 on an older Mint install? Just run uv python install 3.12.

The Power of uv.lock 🔒: It creates a deterministic lockfile. This means "it works on my machine" actually translates to "it works on EVERY machine."

Single Binary 🛠️: No dependencies required to install the tool itself. Clean and efficient.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
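The points above map to a handful of commands. A sketch of the workflow, assuming uv is already installed (the package and project names are illustrative):

```shell
# Install a specific interpreter: no PPAs, no apt
uv python install 3.12

# Create a virtual environment pinned to that interpreter
uv venv --python 3.12

# Drop-in replacement for pip, backed by the global cache
uv pip install requests

# Or manage a project with a deterministic uv.lock
uv init my-project
cd my-project
uv add requests
uv lock
```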

&lt;p&gt;The Python community is shifting towards faster, Rust-powered tooling. If you value your time and your disk space, give uv a try. It feels like a superpower for your terminal.&lt;/p&gt;

&lt;p&gt;Are you still rocking the traditional venv + pip combo, or have you made the jump to Rust-based tooling? Let's discuss in the comments! 👇&lt;/p&gt;

&lt;p&gt;#python #linux #rust #productivity #softwaredevelopment&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Taking Morgan Willis’s 10-Minute Agent to the Next Level: Meet Alfred</title>
      <dc:creator>Kernel Cero</dc:creator>
      <pubDate>Sat, 28 Mar 2026 00:21:30 +0000</pubDate>
      <link>https://dev.to/kernelcero/taking-morgan-williss-10-minute-agent-to-the-next-level-meet-alfred-5484</link>
      <guid>https://dev.to/kernelcero/taking-morgan-williss-10-minute-agent-to-the-next-level-meet-alfred-5484</guid>
      <description>&lt;p&gt;I recently came across this fantastic post by &lt;a class="mentioned-user" href="https://dev.to/morganwilliscloud"&gt;@morganwilliscloud&lt;/a&gt;: "AI Agents don’t need complex workflows: Build one in Python in 10 minutes." Her point was clear: we often overcomplicate agentic AI. You don't need massive enterprise frameworks to build something functional. Inspired by that "build-to-understand" philosophy, I decided to create Alfred, my personal digital butler running locally on my Linux Mint machine.&lt;/p&gt;

&lt;p&gt;🧠 The Brains: Local &amp;amp; Private&lt;/p&gt;

&lt;p&gt;Following Morgan’s lead, I skipped the heavy cloud APIs. Instead, I used Ollama with the Qwen 2.5 Coder (7B) model. It’s incredibly snappy at function calling and runs entirely on my hardware.&lt;/p&gt;

&lt;p&gt;🛠️ Giving Alfred "Hands" (The Tools)&lt;/p&gt;

&lt;p&gt;While a basic agent might just do math, I wanted Alfred to actually manage my OS. Using Python’s subprocess and psutil, I gave him four specific skills:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;System Health Check: He monitors CPU, RAM, and Disk usage in real-time.

Multimedia Control: He scans my ~/Videos and ~/Music folders to launch VLC or Audacious on command.

Deep Search: A smart wrapper around the find command to locate any file across the system.

Content Discovery: Using grep to find specific strings inside text files without me having to remember complex flags.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
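To make one of those skills concrete, here is a stdlib-only sketch of the health check (the post uses psutil; os.getloadavg and shutil.disk_usage stand in here so nothing extra is installed, and the report keys are my own):

```python
import os
import shutil

def system_status(path="/"):
    """Report load average and disk usage for Alfred to relay."""
    load1, _load5, _load15 = os.getloadavg()  # Unix-only
    disk = shutil.disk_usage(path)
    return {
        "load_1min": load1,
        "disk_used_pct": round(disk.used / disk.total * 100, 1),
    }

print(system_status())
```

In the real agent this return value is handed back to the model, which phrases it in Alfred's butler voice.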

&lt;p&gt;💻 The Implementation&lt;/p&gt;

&lt;p&gt;I used a lightweight orchestration layer to handle the loop, ensuring Alfred maintains his "impeccable British butler" persona while executing technical tasks.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# The heart of the Butler
agente = Agent(
    model=model,
    tools=[media_play, system_status, fast_search, find_text],
    system_prompt=(
        "You are ALFRED, kernel's digital butler. Your tone is impeccable and polite.\n"
        "RULES:\n"
        "- ALWAYS call a tool if the request is technical.\n"
        "- Be brief and address him as 'Sir' or 'kernel'."
    )
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
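For intuition, a hand-rolled equivalent of the loop such an Agent runs, with the model stubbed out (fake_model and its keyword routing are purely illustrative, not Qwen's actual function calling):

```python
# Minimal tool-dispatch loop: the "model" picks a tool by name,
# the loop runs it and returns the result as Alfred's reply.
def fake_model(prompt):
    # Stand-in for the LLM's function-calling decision.
    if "disk" in prompt.lower():
        return {"tool": "system_status", "args": {}}
    return {"tool": None, "reply": "Very good, Sir."}

def system_status():
    return "CPU and disk look healthy, Sir."

TOOLS = {"system_status": system_status}

def run_agent(user_message):
    decision = fake_model(user_message)
    if decision["tool"] in TOOLS:
        return TOOLS[decision["tool"]](**decision["args"])
    return decision["reply"]

print(run_agent("Alfred, how is my disk doing?"))
```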

&lt;p&gt;🛡️ Security First&lt;/p&gt;

&lt;p&gt;As Morgan emphasizes in her cloud architecture sessions, automation must be secure. Since Alfred has access to my shell, I implemented shlex for input sanitization:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sanitizing user input for safe shell execution
safe_query = shlex.quote(f"*{query.strip()}*")  # quote the whole glob pattern
cmd = f"find {USER_HOME} -iname {safe_query} -type f 2&amp;gt;/dev/null | head -n 10"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;🚀 Why This Matters&lt;/p&gt;

&lt;p&gt;Building this taught me that the "10-minute agent" isn't just a toy—it's a foundation. Alfred now saves me several minutes a day by fetching files and checking system vitals through a simple chat interface.&lt;/p&gt;

&lt;p&gt;A huge thanks to &lt;a class="mentioned-user" href="https://dev.to/morganwilliscloud"&gt;@morganwilliscloud&lt;/a&gt; for stripping away the complexity and showing that the best way to learn AI is to give it the keys to your terminal (carefully!).&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>python</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
