This is a submission for the OpenClaw Challenge.
What I Built
I built OpenClaw Task Manager — an AI-powered personal task assistant that runs entirely on AWS Cloud9 with no external APIs and no data leaving your server.
The problem it solves is simple: most task managers are dumb. You type a task, it stores it. That's it. I wanted something smarter — where you type in plain English and an AI understands, summarizes, and organizes it for you automatically.
The stack:
- 🧠 LLaMA 3.2 (via Ollama) — local AI brain
- ⚙️ Flask — lightweight backend agent API
- 💻 Python CLI — simple terminal interface
- ☁️ AWS Cloud9 + EC2 — fully cloud-hosted deployment
- 📁 JSON — local task storage
👉 GitHub: MakendranG/openclaw-task-manager
How I Used OpenClaw
OpenClaw acts as the intelligent middleware between raw user input and structured task storage. Here is the full workflow I set up:
1. User Input (CLI)
The user types a task in plain natural language via app.py:
Enter command: add
Enter task: Remind me to deploy tomorrow at 10 AM
2. Agent Processing (Flask API)
app.py sends the task as a POST request to the OpenClaw agent running on port 3000:
res = requests.post("http://127.0.0.1:3000/task", json={"text": text})
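In practice a thin wrapper around this call makes failures (agent not started yet, port still held by an old process) much easier to diagnose. A minimal sketch, assuming the same localhost agent; `send_task` and `AGENT_URL` are names I'm inventing for illustration, not necessarily what the repo uses:

```python
import requests

AGENT_URL = "http://127.0.0.1:3000"  # matches the agent port used above

def send_task(text: str, base_url: str = AGENT_URL):
    """POST a raw task string to the agent; return parsed JSON, or None on failure."""
    try:
        res = requests.post(f"{base_url}/task", json={"text": text}, timeout=30)
        res.raise_for_status()
        return res.json()
    except requests.RequestException as err:
        # Covers connection refused, timeouts, and non-2xx responses alike.
        print(f"⚠️ Agent unreachable: {err}")
        return None
```

Returning `None` instead of raising keeps the CLI loop alive even when the agent is down.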
3. AI Summarization (LLaMA 3.2 via Ollama)
The agent passes the task to LLaMA 3.2 with a focused prompt:
ask_ollama(f"In one short sentence, rewrite this as a clear task: {text}. Reply with only the task sentence, nothing else.")
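For readers who want to reproduce this step: the agent-side helper can be a plain POST to Ollama's local `/api/generate` endpoint. Here is a hedged sketch of one possible `ask_ollama`, assuming Ollama's default port 11434 and non-streaming mode; `build_prompt` is a helper name of my own, not from the repo:

```python
import requests

OLLAMA_URL = "http://127.0.0.1:11434/api/generate"  # Ollama's default local endpoint

def build_prompt(text: str) -> str:
    """Build the focused summarization prompt shown above."""
    return (f"In one short sentence, rewrite this as a clear task: {text}. "
            "Reply with only the task sentence, nothing else.")

def ask_ollama(prompt: str, model: str = "llama3.2") -> str:
    """Send a non-streaming generate request to the local Ollama server."""
    res = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,  # CPU-only inference can take a while
    )
    res.raise_for_status()
    return res.json()["response"].strip()
```

With `"stream": False`, Ollama returns a single JSON object whose `response` field holds the model's full answer.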
4. Storage
The task is saved to tasks.json with full metadata — ID, original text, AI summary, timestamp, and completion status.
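That storage step needs nothing beyond the standard library. A sketch of what it could look like; `save_task` and the `TASKS_FILE` constant are illustrative names, not necessarily the repo's:

```python
import json
from datetime import datetime
from pathlib import Path

TASKS_FILE = Path("tasks.json")  # the post stores tasks in tasks.json

def save_task(text: str, ai_summary: str, path: Path = TASKS_FILE) -> dict:
    """Append a task record with full metadata and persist the list to disk."""
    tasks = json.loads(path.read_text()) if path.exists() else []
    task = {
        "id": len(tasks) + 1,          # simple sequential ID
        "text": text,                  # original user input
        "ai_summary": ai_summary,      # LLaMA's one-sentence rewrite
        "done": False,
        "created_at": datetime.now().isoformat(),
    }
    tasks.append(task)
    path.write_text(json.dumps(tasks, indent=2))
    return task
```

Reading the whole file and rewriting it on every add is fine at this scale; a heavier store would be overkill for a personal task list.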
3 REST Endpoints Powering the App:
| Method | Endpoint | Description |
|---|---|---|
| POST | /task | Add and AI-process a new task |
| GET | /tasks | Retrieve all stored tasks |
| GET | /health | Confirm the agent is alive |
Demo
Terminal 1 — AI Agent running:
* Serving Flask app 'openclaw_agent'
* Running on http://0.0.0.0:3000
Terminal 2 — Adding a task:
OpenClaw Task Manager
Commands: add, list, exit
Enter command: add
Enter task: Remind me to deploy tomorrow at 10 AM
Status code: 200
✅ Task added: {
"id": 1,
"text": "Remind me to deploy tomorrow at 10 AM",
"ai_summary": "Deploy the application tomorrow at 10 AM.",
"done": false,
"created_at": "2026-04-26T15:44:45.014827"
}
Listing all tasks:
Enter command: list
📋 Current Tasks:
1. ⏳ Remind me to deploy tomorrow at 10 AM
Health check:
curl -s http://127.0.0.1:3000/health
{"status": "ok"}
Direct API test:
curl -s -X POST http://127.0.0.1:3000/task \
-H "Content-Type: application/json" \
-d '{"text": "Finish the OpenClaw blog post"}' | python -m json.tool
{
"status": "added",
"task": {
"ai_summary": "Complete the OpenClaw blog post.",
"created_at": "2026-04-26T16:00:00.000000",
"done": false,
"id": 2,
"text": "Finish the OpenClaw blog post"
}
}
👉 Full source code: github.com/MakendranG/openclaw-task-manager
What I Learned
1. Ollama on EC2 is surprisingly easy
A single curl install script set up Ollama with a systemd service automatically. No GPU required — it runs fine in CPU-only mode on a standard EC2 instance. Slower, but functional.
2. Model size changes everything
I started with llama3.2:1b and got noisy, confused responses. Switching to the full llama3.2 (3B) immediately made the AI summaries clean and accurate. For production use, always test multiple model sizes before settling on one.
3. Port conflicts are a real gotcha on Cloud9
Cloud9 persists your environment between sessions, so old processes keep running on ports even after you close the browser. Always run:
sudo fuser -k 3000/tcp
before restarting your agent.
4. Disk space fills up fast with AI models
LLaMA 3.2 is about 2GB. My EC2 instance started at 10GB and hit 100% disk usage. I had to expand the EBS volume from 10GB to 500GB and run xfs_growfs so the filesystem could actually use the new space. Always size your storage generously before pulling models.
5. The architecture is genuinely hackable
Adding a new OpenClaw skill is just adding a new Flask route. I can already see how to extend this with /reminder, /prioritize, or a full web UI frontend. The pattern is clean and composable.
ClawCon Michigan
I didn't attend ClawCon Michigan in person, but the energy of the IRL OpenClaw community inspired this entire build. The core idea behind OpenClaw — that personal AI should run locally, on your own hardware, under your own control — really resonates with me as a developer.
This project is proof that you don't need cloud AI APIs or subscriptions to build something genuinely useful. A free EC2 instance, Ollama, and an open-source model is all it takes. That's the spirit of OpenClaw, and that's what I wanted to demonstrate with this build. 🦞