DEV Community

Daegwang

Why I Built a Personal AI Assistant and Kept It Small

I like the idea of a personal AI assistant, but I do not like how heavy most of them feel.

When you actually try to use them, there is often too much going on. Huge system prompts, too much token overhead, frameworks that are hard to trust, and too many layers between what you ask for and what actually happens.

That is why I built Atombot.

Atombot is a small personal AI assistant inspired by OpenClaw and nanobot. I was not trying to build a complex agent platform. I wanted something simpler, something I could understand, change, and actually use.


Privacy also matters.

A personal assistant handles my personal data, and I do not want to send that data outside of my machine.

That made local LLM support important to me. Heavier assistant frameworks can technically run on local models, but in my experience they do not perform well. Local models usually cannot handle large context windows, and the fixed overhead from system prompts, instructions, tool definitions, and extra logic eats into what little context they have. That overhead is tolerable for API-based LLMs with huge windows, but it is a much worse fit for local models.
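To make the overhead argument concrete, here is some rough context-budget arithmetic. The token counts below are illustrative assumptions, not measurements of any particular framework:

```python
# Rough context-budget arithmetic (illustrative numbers, not measured ones).
# The same fixed overhead that is negligible for a 128k-token API model
# consumes most of a 4k-token local model's window.

def remaining_context(window: int, system_prompt: int, tool_defs: int, extra: int) -> int:
    """Tokens left for the actual conversation after fixed framework overhead."""
    return window - (system_prompt + tool_defs + extra)

overhead = dict(system_prompt=1500, tool_defs=800, extra=400)  # assumed sizes

local = remaining_context(window=4096, **overhead)   # 1396 tokens left (~34%)
api = remaining_context(window=128_000, **overhead)  # 125300 tokens left (~98%)
```

With those assumed numbers, the framework eats roughly two thirds of a small local window before the conversation even starts, which is why trimming the fixed prompt and tool definitions matters so much more for local setups.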

I wanted something lighter, so local setups would feel realistic instead of frustrating. I focused on the main functionality I actually wanted.

  • local LLM support with simple onboarding
  • a codebase small enough to understand and modify
  • persistent memory
  • reminders and scheduled tasks
  • Telegram support so I can talk to it outside the terminal

Here are two example use cases:

It can explore websites, summarize what it finds, and handle one-time or recurring reminders. These are just two examples, and you can find more here.

If you want to try it, please check it out here.


Note: This post was originally published on my site - link
