Kernel Cero
🤖 Building a Private, Local WhatsApp AI Assistant with Node.js & Ollama

Hello, dev community! 👋 I’ve been working on a personal project lately: a WhatsApp AI Bot that actually keeps track of conversations. No more "forgetful" bots, and best of all: it runs entirely on my own hardware! 🧠💻
πŸ› οΈ The Tech Stack

Runtime: Node.js 🟒

AI Engine: Ollama (Running Llama 3 / Mistral locally) πŸ¦™

WhatsApp Interface: WPPConnect πŸ“±

Database: SQLite for persistent conversation memory πŸ—„οΈ

OS: Linux 🐧

🚀 The Journey

The goal was to create an assistant that doesn't rely on external APIs like OpenAI. By combining WPPConnect with Ollama, I have full control over the data and the model.

Here is the project structure:
```bash
user@remote-server:~/whatsapp-bot$ ls
database.db   # Long-term memory (SQLite)
node_modules  # The heavy lifters
package.json  # Project DNA
server.js     # The brain connecting WPPConnect + Ollama
tokens/       # Session persistence (no need to re-scan the QR code)
```

πŸ” Key Features

Local Intelligence: Running Ollama locally means no network round-trips to external servers and 100% data privacy.

True Context: Instead of stateless replies, I use SQLite to feed the previous chat history back into Ollama. It remembers who you are! 🔄

Session Persistence: Thanks to the tokens folder, the bot stays logged in even after a server reboot.
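The "True Context" feature depends on flattening the stored history into the text sent to the model. One simple way to do that is shown below; `buildPrompt` and the `{ role, content }` row shape are my own sketch, not the post's actual implementation:

```javascript
// Turn stored history rows ({ role, content }) into a single prompt string
// for Ollama's /api/generate. Hypothetical helper for illustration.
function buildPrompt(history, newMessage) {
  const transcript = history
    .map((row) => `${row.role === 'user' ? 'User' : 'Assistant'}: ${row.content}`)
    .join('\n');
  return `${transcript}\nUser: ${newMessage}\nAssistant:`;
}

const prompt = buildPrompt(
  [
    { role: 'user', content: 'My name is Ana.' },
    { role: 'assistant', content: 'Nice to meet you, Ana!' },
  ],
  'What is my name?'
);
```

Ending the string with `Assistant:` nudges the model to continue the conversation in character rather than summarizing it.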

💡 Quick Snippet (The Connection)

Here is how I bridge the WhatsApp message to the local Ollama instance:
```javascript
const wppconnect = require('@wppconnect-team/wppconnect');
const axios = require('axios'); // To talk to Ollama's local API

// Send a prompt to the local Ollama instance and return the full reply
async function askOllama(prompt) {
  const response = await axios.post('http://localhost:11434/api/generate', {
    model: 'llama3',
    prompt: prompt,
    stream: false // Wait for the complete reply instead of streaming tokens
  });
  return response.data.response;
}

wppconnect.create({ session: 'ai-session' })
  .then((client) => {
    client.onMessage(async (message) => {
      const aiReply = await askOllama(message.body);
      await client.sendText(message.from, aiReply);
    });
  })
  .catch((error) => console.error('Failed to start WhatsApp session:', error));
```
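As written, the handler above answers every incoming event, including group chats and messages the bot itself sent. A small guard in front of `askOllama` avoids that; this is a sketch, and the field names (`fromMe`, `isGroupMsg`, `body`) come from WPPConnect's message object, so double-check them against your installed version:

```javascript
// Decide whether the bot should answer a given WPPConnect message.
// Field names follow WPPConnect's message object (verify for your version).
function shouldReply(message) {
  if (message.fromMe) return false;     // Ignore our own outgoing messages
  if (message.isGroupMsg) return false; // Stay out of group chats
  if (!message.body || !message.body.trim()) return false; // Skip empty / media-only events
  return true;
}

// Inside client.onMessage: if (!shouldReply(message)) return;
const ok = shouldReply({ fromMe: false, isGroupMsg: false, body: 'hello' });
const skip = shouldReply({ fromMe: false, isGroupMsg: true, body: 'hi all' });
```

Without the `fromMe` check, the bot can end up replying to its own replies in a loop on some setups.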

🚧 What's next?

I'm working on "System Prompts" to give the bot a specific personality, and on speeding up the SQLite queries for massive chat histories.
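For the system-prompt idea, Ollama's `/api/generate` endpoint accepts a `system` field alongside `model` and `prompt`, so the personality can be set per request. A minimal sketch of the request body (the example persona text is mine):

```javascript
// Build the request body for Ollama's /api/generate endpoint.
// The 'system' field sets the model's persona for this request.
function buildGenerateRequest(prompt, systemPrompt) {
  return {
    model: 'llama3',
    prompt: prompt,
    system: systemPrompt, // e.g. a short description of the bot's personality
    stream: false,
  };
}

// The bot would POST this to http://localhost:11434/api/generate via axios
const payload = buildGenerateRequest(
  'What can you do?',
  'You are a concise, friendly WhatsApp assistant.'
);
```

Keeping the persona in the `system` field (instead of concatenating it into `prompt`) means the stored chat history stays clean of boilerplate.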

Are you running LLMs locally? I’d love to hear how you optimize Ollama performance for real-time chat! 👇

#nodejs #ollama #ai #wppconnect #javascript #opensource #linux
