*This is a submission for the OpenClaw Writing Challenge.*
Personal AI should belong to you. Not a subscription. Not a third-party server. Not someone else's cloud. Just your machine, your model, your data.
That's exactly what OpenClaw makes possible — and in this guide, I'll walk you through how I set it up from scratch on AWS Cloud9, the challenges I ran into, and what I learned along the way.
## What is OpenClaw?
OpenClaw is an open-source personal AI assistant platform that you self-host. Think of it as your own private AI that runs on your hardware, uses open-source language models, and never sends your data anywhere you don't control.
What makes it powerful:
- 🔒 Privacy-first — everything runs locally
- 🧩 Hackable — build custom skills and integrations
- 🆓 Free — no API costs, no subscriptions
- 🌐 Deployable anywhere — your laptop, a Raspberry Pi, or a cloud VM
## Why AWS Cloud9?
Cloud9 is AWS's browser-based IDE that comes with a full EC2 instance behind it. It's perfect for OpenClaw because:
- You get a real Linux environment instantly
- No local setup required — works from any browser
- Easy to scale storage and compute as needed
- Stays online — disable Cloud9's auto-hibernate setting (it stops the instance after 30 minutes of inactivity by default) and your OpenClaw instance keeps running
## Step 1: Set Up Your EC2 Instance

Before anything else, make sure your EC2 instance has enough storage. AI models are large — the LLaMA 3.2 3B model alone is about 2GB.
Expand your EBS volume to at least 30GB:
- AWS Console → EC2 → Volumes
- Select your volume → Actions → Modify Volume
- Change size to 30GB → Modify
Then resize the filesystem inside Cloud9:
```bash
sudo growpart /dev/nvme0n1 1
sudo xfs_growfs -d /
df -h   # verify the new size
```
## Step 2: Install Ollama

Ollama is the engine that runs open-source language models locally. Installing it on EC2 takes one command:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```
Verify it's running:

```bash
ollama --version
curl http://127.0.0.1:11434   # should print "Ollama is running"
```
## Step 3: Pull a Language Model

I recommend starting with LLaMA 3.2 (3B) — it's fast enough on CPU and produces quality responses:

```bash
ollama pull llama3.2
```
Test it immediately:

```bash
ollama run llama3.2 "What is OpenClaw in one sentence?"
```
> 💡 **Tip:** Avoid `llama3.2:1b` for production use — the 1B model gives inconsistent, low-quality responses. The 3B model is worth the extra 1.5GB.
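Shelling out to `ollama run` works, but Ollama also exposes a local HTTP API on port 11434 (the same port we curl-checked in Step 2). A minimal sketch of calling its `/api/generate` endpoint with only the Python standard library — the `generate` helper name is my own, not part of OpenClaw:

```python
import json
import urllib.request

def generate(prompt, model="llama3.2", host="http://127.0.0.1:11434"):
    """Send a prompt to Ollama's /api/generate endpoint and return the reply text."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        # With "stream": False, Ollama returns a single JSON object
        # whose "response" field holds the full completion.
        return json.load(resp)["response"]
```

Setting `stream` to `false` trades token-by-token output for one easy-to-parse JSON object, which is usually what you want in a script.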
## Step 4: Build the OpenClaw Agent

Create your project folder:

```bash
mkdir ~/environment/openclaw-demo
cd ~/environment/openclaw-demo
python3 -m venv venv
source venv/bin/activate
pip install flask requests
```
Create the agent file `openclaw_agent.py`:
```python
from flask import Flask, request, jsonify
import json, os, subprocess
from datetime import datetime

app = Flask(__name__)
TASKS_FILE = "tasks.json"

def load_tasks():
    if os.path.exists(TASKS_FILE):
        with open(TASKS_FILE) as f:
            return json.load(f)
    return []

def save_tasks(tasks):
    with open(TASKS_FILE, "w") as f:
        json.dump(tasks, f, indent=2)

def ask_ollama(prompt):
    """Shell out to Ollama and return the model's reply as plain text."""
    try:
        result = subprocess.run(
            ["ollama", "run", "llama3.2", prompt],
            capture_output=True, text=True, timeout=60
        )
        return result.stdout.strip()
    except Exception as e:
        return f"Error: {e}"

@app.route("/task", methods=["POST"])
def add_task():
    data = request.get_json()
    if not data or "text" not in data:
        return jsonify({"error": "Missing text field"}), 400

    text = data["text"]
    ai_summary = ask_ollama(
        f"In one short sentence, rewrite this as a clear task: {text}. "
        "Reply with only the task sentence."
    )

    tasks = load_tasks()  # load once; reuse for both the new id and the append
    task = {
        "id": len(tasks) + 1,
        "text": text,
        "ai_summary": ai_summary,
        "done": False,
        "created_at": datetime.now().isoformat()
    }
    tasks.append(task)
    save_tasks(tasks)
    return jsonify({"status": "added", "task": task})

@app.route("/tasks", methods=["GET"])
def get_tasks():
    return jsonify(load_tasks())

@app.route("/health", methods=["GET"])
def health():
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=3000, debug=False)
```
## Step 5: Run the Agent

**Terminal 1:**

```bash
cd ~/environment/openclaw-demo
source venv/bin/activate
python openclaw_agent.py
```

You should see:

```
* Running on http://0.0.0.0:3000
```
**Terminal 2** — test it:

```bash
curl -s -X POST http://127.0.0.1:3000/task \
  -H "Content-Type: application/json" \
  -d '{"text": "Review pull requests before standup"}' | python -m json.tool
```
Expected response:

```json
{
  "status": "added",
  "task": {
    "id": 1,
    "text": "Review pull requests before standup",
    "ai_summary": "Review all open pull requests prior to the standup meeting.",
    "done": false,
    "created_at": "2026-04-26T10:00:00.000000"
  }
}
```
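If you'd rather drive the agent from Python than curl, the same request is a few lines with only the standard library. A sketch, assuming the agent from Step 5 is listening on port 3000 (the `add_task` helper name here is mine, not part of OpenClaw):

```python
import json
import urllib.request

def add_task(text, base_url="http://127.0.0.1:3000"):
    """POST a task to the agent's /task endpoint and return the parsed JSON reply."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/task",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # urllib sends a POST automatically when a request body is attached.
    with urllib.request.urlopen(req, timeout=90) as resp:
        return json.load(resp)
```

The generous timeout matters: on CPU-only instances the Ollama call behind `/task` can take tens of seconds.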
## Step 6: Check the Health Endpoint

```bash
curl -s http://127.0.0.1:3000/health
```

```json
{"status": "ok"}
```
## Common Issues & Fixes

| Problem | Fix |
|---|---|
| Port 3000 already in use | Run `sudo fuser -k 3000/tcp` |
| Disk full error | Expand the EBS volume, then run `xfs_growfs` |
| `venv/bin/activate` not found | Run `python3 -m venv venv` first |
| Empty JSON response | Restart the agent and check that Ollama is running |
| Slow AI responses | Normal on CPU-only EC2 — upgrade the instance type for speed |
## What OpenClaw Gets Right
After building with OpenClaw, here is what stands out:
1. The privacy model is genuine. Nothing leaves your instance. No telemetry, no API calls to external services, no usage data. For anyone building tools that handle personal or sensitive information, this is a game changer.
2. The architecture is composable. Every skill is just an endpoint. Adding a new capability means adding a new route. This is the right level of abstraction — simple enough for beginners, flexible enough for complex workflows.
3. It works on modest hardware. No GPU required. A standard EC2 t3.medium instance runs LLaMA 3.2 perfectly well for personal use. The barrier to entry is genuinely low.
## What's Next
Now that the foundation is running, here is what I plan to build on top of it:
- 📅 `/reminder` endpoint — schedule tasks with time-based alerts
- 🏷️ `/prioritize` endpoint — AI ranks your task list by urgency
- 🌐 Web UI — a simple HTML frontend instead of the CLI
- 📊 `/summary` endpoint — daily digest of pending tasks
The beauty of OpenClaw is that each of these is just another Flask route + an Ollama prompt. The pattern scales naturally.
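To make that concrete, here is a minimal sketch of what one such skill could look like: a hypothetical `/summary` route. The `ask` function is a stub standing in for `ask_ollama`, and the in-memory `TASKS` list stands in for `load_tasks()`; both would be swapped for the real versions in the agent:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Stub standing in for ask_ollama(); swap in the real call when wiring it up.
def ask(prompt):
    return f"(model summary of: {prompt[:40]}...)"

# In the real agent this data would come from load_tasks() / tasks.json.
TASKS = [
    {"id": 1, "text": "Review pull requests", "done": False},
    {"id": 2, "text": "Ship the release notes", "done": True},
]

@app.route("/summary", methods=["GET"])
def summary():
    # Collect pending tasks and fold them into a single prompt.
    pending = [t["text"] for t in TASKS if not t["done"]]
    digest = ask("Summarize these pending tasks in one sentence: " + "; ".join(pending))
    return jsonify({"pending": len(pending), "digest": digest})
```

Each new skill follows this shape: gather some state, build one prompt, return JSON. The route itself stays a few lines long.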
## Final Thoughts
If you've been waiting for a reason to try self-hosted AI, OpenClaw is it. The setup is approachable, the architecture is clean, and the results are genuinely useful. You don't need a GPU, a paid API, or a powerful local machine — just an EC2 instance and curiosity.
Start small. Build one skill. See what clicks. Then build another.
👉 **Full source code:** [github.com/MakendranG/openclaw-task-manager](https://github.com/MakendranG/openclaw-task-manager)
## ClawCon Michigan
I didn't attend ClawCon Michigan in person, but following the community online made it clear why this event exists. The people building with OpenClaw aren't just tinkering — they're making a genuine case that personal AI should be private, local, and owned by the user. That idea is worth celebrating in person, and I hope to join the next one. 🦞