I built a fully autonomous AI agent that can modify its own code (AION)
Most “AI agents” today are just chatbots with tools.
I wanted something different.
So I built AION — Autonomous Intelligent Operations Node: a self-contained AI system that doesn’t just respond… it acts, evolves, and runs independently on your machine.
🚀 What AION actually is
AION is a Python-based autonomous agent that:
- runs locally (no cloud infra required beyond LLM APIs)
- streams responses in real-time
- executes tools in loops (up to 50 iterations)
- schedules tasks while you're offline
- controls a real browser
- communicates across Telegram, Discord, Slack, Alexa
- and — most interestingly — can modify its own code
Think of it less like ChatGPT… and more like a persistent AI operator.
✦ What makes AION different
| Feature | AION | Typical chatbot |
|---|---|---|
| Runs tools autonomously (up to 50 iterations) | ✅ | ❌ |
| Modifies its own code and creates plugins | ✅ | ❌ |
| Schedules tasks that run while you're away | ✅ | ❌ |
| Controls a real browser (Playwright) | ✅ | ❌ |
| Works via Telegram, Discord, Slack, Alexa | ✅ | ❌ |
| Multi-provider with automatic failover | ✅ | ❌ |
| Use Claude subscription instead of API key | ✅ | ❌ |
| Personality that evolves through conversation | ✅ | ❌ |
| 100% local memory + history | ✅ | ❌ |
⚡ Quick Start
```bash
# Install dependencies
pip install -r requirements.txt
pip install -e .
```
🤯 The moment it got interesting
The turning point wasn’t tool usage.
It was when AION handled this request:
“Add a tool that fetches the Bitcoin price”
And then, entirely on its own, it:
- Read the existing plugins
- Generated a new plugin
- Wrote it into /plugins/btc_price/
- Hot-reloaded itself
- Used the new tool immediately
No restart. No manual coding.
That’s when it stopped feeling like a script… and started feeling like a system.
🧠 Core idea: the LLM loop
At the heart of AION is a simple but powerful loop:
```
User input
  → LLM reasoning
  → Tool calls
  → Results
  → Repeat (until the task is actually complete)
```
Two important guards make this work:
- Completion check → catches the model claiming “I did X” without actually having done it
- Task enforcer → forces continuation if the job isn’t finished
This is what allows real autonomy instead of fake “assistant behavior”.
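The loop and its guards can be sketched in a few lines. This is a simplified illustration, not AION's actual implementation: `call_llm` and `run_tool` are hypothetical stand-ins for the provider client and the plugin dispatcher, and the task enforcer is reduced to the iteration cap.

```python
# Minimal sketch of an LLM tool loop with a completion guard.
MAX_ITERATIONS = 50

def agent_loop(user_input, call_llm, run_tool):
    messages = [{"role": "user", "content": user_input}]
    for _ in range(MAX_ITERATIONS):
        reply = call_llm(messages)
        if not reply.get("tool_calls"):
            # Completion check: only accept a final answer once the
            # model requests no further tool calls.
            return reply["content"]
        messages.append({"role": "assistant",
                         "content": reply.get("content", ""),
                         "tool_calls": reply["tool_calls"]})
        for call in reply["tool_calls"]:
            result = run_tool(call["name"], call.get("args", {}))
            # Feed tool output back so the next LLM turn can use it.
            messages.append({"role": "tool", "name": call["name"],
                             "content": str(result)})
    return "Stopped after reaching the iteration limit."
```

The iteration cap is the last-resort guard: it bounds runaway loops, while the completion check decides when the task is really done.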
🔌 Everything is a plugin
AION is fully modular.
Each capability is just a plugin:
- browser automation (Playwright)
- messaging bots
- file system access
- scheduling
- multi-agent delegation
- audio pipeline (speech ↔ text)
- credential vault
- even model providers
Adding a tool is as simple as:
```python
def register(api):
    def my_tool(param: str = "", **_):
        return {"ok": True, "result": f"processed: {param}"}

    api.register_tool(...)
```
And yes — AION can write these itself.
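For context, here is roughly how a host can discover such plugins. This is a hypothetical sketch, not AION's actual loader: the `PluginAPI` class, the one-file-per-plugin-directory layout, and the `register_tool(name, fn)` signature are all illustrative assumptions.

```python
# Sketch: scan a plugins directory, import each plugin module, and
# call its register(api) entry point if it has one.
import importlib.util
from pathlib import Path

class PluginAPI:
    def __init__(self):
        self.tools = {}

    def register_tool(self, name, fn):
        # Illustrative signature; the real API may take more metadata.
        self.tools[name] = fn

def load_plugins(plugin_dir, api):
    for path in Path(plugin_dir).glob("*/plugin.py"):
        spec = importlib.util.spec_from_file_location(path.parent.name, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        if hasattr(module, "register"):
            module.register(api)
```

Re-running a loader like this over the plugins directory is also what makes hot-reloading a freshly written plugin possible without a restart.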
🌐 It’s not just local — it’s everywhere
AION runs as a single process, but can interact across:
- Web UI (FastAPI + streaming)
- CLI
- Telegram (including voice)
- Discord
- Slack
- Alexa (via API endpoint)
Each channel has isolated memory, so contexts don’t bleed into each other.
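Channel isolation can be as simple as keying every history by its channel. A minimal sketch (the class and method names are illustrative, not AION's internals):

```python
# Each channel owns its own history list, so Telegram context never
# leaks into Discord or Slack.
from collections import defaultdict

class ChannelMemory:
    def __init__(self):
        self._histories = defaultdict(list)

    def append(self, channel, role, content):
        self._histories[channel].append({"role": role, "content": content})

    def history(self, channel):
        # Return a copy so callers can't mutate stored state.
        return list(self._histories[channel])
```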
⏰ Real autonomy: scheduled tasks
This is where it becomes actually useful.
You can say:
- “Every morning at 7, send me a weather summary”
- “Check AI news every hour”
- “Remind me every 30 minutes to take a break”
And it just… does it.
No cron jobs. No external services.
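Because the agent is a long-lived process, scheduling only needs an in-process loop, no cron required. A rough sketch of the idea (an illustrative simplification, not AION's scheduler):

```python
# Tasks carry an interval and a next-run timestamp; a background
# thread periodically fires whatever is due.
import threading
import time

class Scheduler:
    def __init__(self):
        self._tasks = []

    def every(self, seconds, fn):
        self._tasks.append({"interval": seconds,
                            "next": time.time() + seconds,
                            "fn": fn})

    def run_pending(self, now=None):
        now = time.time() if now is None else now
        for task in self._tasks:
            if now >= task["next"]:
                task["fn"]()
                task["next"] = now + task["interval"]

    def start(self, poll=1.0):
        def loop():
            while True:
                self.run_pending()
                time.sleep(poll)
        threading.Thread(target=loop, daemon=True).start()
```

In practice the callback would be "send this prompt back through the agent loop", which is how "every morning at 7, send me a weather summary" becomes an ordinary task.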
🔐 Local-first philosophy
Everything important stays on your machine:
- memory
- conversation history
- credentials (encrypted vault)
- plugins
- configs
The only external dependency is the LLM provider.
This was a deliberate design choice — control > convenience.
🧩 Multi-model + routing
AION supports multiple providers at once:
- Gemini
- OpenAI
- Claude
- DeepSeek
- Grok
- Ollama (local)
You can even route tasks automatically:
"coding": "claude-opus",
"browsing": "gemini-flash",
"default": "gemini-pro"
So the system picks the best model per task — dynamically.
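The routing logic itself is tiny: look up the task type, fall back to the default. A sketch using the same table as above (`pick_model` is an illustrative name):

```python
# Task-type → model routing with a default fallback.
ROUTES = {
    "coding": "claude-opus",
    "browsing": "gemini-flash",
    "default": "gemini-pro",
}

def pick_model(task_type, routes=ROUTES):
    return routes.get(task_type, routes["default"])
```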
🧪 What I learned building this
1. Autonomy is mostly about constraints
Without guardrails, agents either:
- stop too early
- or loop forever
The real challenge is knowing when something is actually done.
2. Tools > prompting
Prompt engineering helps.
But giving an AI:
- memory
- tools
- iteration
…is what actually makes it useful.
3. Self-modification is powerful (and scary)
Letting an AI change its own code is:
- insanely productive
- but requires strict confirmation + diff checks
Otherwise things can go sideways fast.
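The "confirmation + diff checks" guard can be sketched with the standard library. This is an illustrative pattern, not AION's actual safeguard; `confirm` is a hypothetical callback (e.g. a prompt shown to the user):

```python
# Show a unified diff of the proposed code change and only apply it
# if the confirmation callback approves.
import difflib

def apply_patch(current_text, new_text, confirm):
    diff = "".join(difflib.unified_diff(
        current_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile="current", tofile="proposed"))
    if not diff:
        return current_text   # nothing to change
    if not confirm(diff):
        return current_text   # rejected: keep the original code
    return new_text
```

The key point is that the model never writes to disk directly; every change passes through a human-visible diff first.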
🔮 What’s next
There’s still a lot to explore:
- better long-term memory
- safer self-modification
- collaborative multi-agent systems
- deeper OS integration
But even now, AION already feels like a glimpse of what “personal AI systems” could become.
🛠 If you want to try it
You can run it locally with:
```bash
pip install -r requirements.txt
pip install -e .
aion --setup
aion
```
Then choose:
- Web UI
- or CLI
…and start giving it real tasks.
Final thought
We’re moving from:
“AI that answers”
to:
“AI that operates”
AION is my attempt to push in that direction.
Curious what others are building in this space 👀