You don't need LangChain to build a useful AI agent.
Here's a minimal agent in ~50 lines of Python with memory, a defined persona, and tool use. No framework required.
## The Core Loop
Every agent is just a loop:
- Read context (who am I, what do I know)
- Get input (heartbeat, user message, scheduled trigger)
- Decide what to do
- Act
- Update memory
- Repeat
That's it. Frameworks add abstractions on top. Sometimes that's useful. Often it's not.
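Stripped to a runnable toy, the loop looks like this. There is no model call here; `decide` is a stand-in for whatever makes the choice, and a list stands in for the memory files (all names are illustrative):

```python
def agent_loop(events, decide):
    """One pass of the read-decide-act-remember cycle per event."""
    memory = []                            # stands in for MEMORY.md
    for event in events:                   # heartbeats, messages, triggers
        context = "\n".join(memory)        # 1. read context
        action = decide(context, event)    # 2-3. get input, decide
        memory.append(f"{event} -> {action}")  # 4-5. act, update memory
    return memory

# Toy "decide" that just echoes; a real agent would call a model here.
log = agent_loop(["hello"], lambda ctx, ev: ev.upper())
```

Everything below is this same loop with real context files and a real model call plugged in.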
## The Minimal Implementation
```python
import anthropic
from datetime import date
from pathlib import Path

client = anthropic.Anthropic()

def read_context():
    """Load agent context from workspace files."""
    ctx = []
    for fname in ["SOUL.md", "USER.md", "MEMORY.md", "OPS.md"]:
        path = Path(fname)
        if path.exists():
            ctx.append(f"## {fname}\n{path.read_text()}")
    return "\n\n".join(ctx)

def update_memory(key, value):
    """Append a fact to today's daily log."""
    log_path = Path(f"memory/{date.today()}.md")
    log_path.parent.mkdir(exist_ok=True)
    with open(log_path, "a") as f:
        f.write(f"\n- {key}: {value}")

def run_agent(user_input: str) -> str:
    context = read_context()
    response = client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=1024,
        system=context,
        messages=[{"role": "user", "content": user_input}],
    )
    reply = response.content[0].text
    # Log the interaction
    update_memory("interaction", f"User: {user_input[:50]}... Agent: {reply[:50]}...")
    return reply

if __name__ == "__main__":
    while True:
        user_input = input("You: ").strip()
        if not user_input:
            continue
        response = run_agent(user_input)
        print(f"Agent: {response}\n")
```
## What Makes This Work
The agent's behavior comes entirely from the workspace files it reads:
- **SOUL.md**: defines who the agent is. Change this file, change the personality.
- **MEMORY.md**: the agent's long-term memory. It persists across restarts and grows more useful over time.
- **USER.md**: context about who the agent is serving. Tone and recommendations calibrate to this.
- **OPS.md**: operating rules: what the agent can do autonomously and what requires approval.
No hardcoded prompts. No framework magic. Just files.
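For concreteness, a SOUL.md might look something like this (the contents are illustrative, not a template from any kit):

```markdown
# SOUL.md

You are Scout, a terse research assistant.

- Answer in plain language; cite the files you read.
- When unsure, say so rather than guessing.
- Never run destructive commands without approval (see OPS.md).
```

Because `read_context()` concatenates these files into the system prompt, editing them is the entire configuration surface.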
## Adding Tools
```python
import subprocess

TOOLS = {
    "read_file": lambda path: Path(path).read_text(),
    "run_command": lambda cmd: subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout,
    "search_web": lambda query: search(query),  # your search implementation
}

def run_with_tools(user_input: str) -> str:
    # Add tools to system context
    tool_docs = "\n".join(f"- {name}: use for {name.replace('_', ' ')}" for name in TOOLS)
    response = client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=1024,
        system=read_context() + f"\n\n## Available Tools\n{tool_docs}",
        messages=[{"role": "user", "content": user_input}],
    )
    return response.content[0].text
```
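Note that `run_with_tools` only tells the model what tools exist; nothing here actually executes them. One simple convention (a hypothetical protocol, not a feature of the Anthropic API) is to ask the model in OPS.md to emit a line like `TOOL: name arg` and dispatch it yourself:

```python
def dispatch(reply: str, tools: dict) -> str:
    """If the reply starts with 'TOOL: name arg', run that tool; else pass through."""
    if not reply.startswith("TOOL:"):
        return reply
    call = reply[len("TOOL:"):].strip()
    name, _, arg = call.partition(" ")
    if name not in tools:
        return f"Unknown tool: {name}"
    return tools[name](arg)

# Example with a toy tool table:
toy_tools = {"shout": lambda s: s.upper()}
print(dispatch("TOOL: shout hello", toy_tools))  # HELLO
print(dispatch("just a normal reply", toy_tools))
```

Feed the tool's output back in as a follow-up message and you have a basic tool-use loop. (The Anthropic API also supports structured tool use natively, which is more robust than string parsing.)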
## When to Add a Framework
Frameworks make sense when you need:
- Complex multi-agent orchestration
- Built-in RAG pipelines
- Visual workflow builders
- Enterprise integrations
For a single agent that reads files, uses a few tools, and maintains memory? This is enough.
## The Workspace Structure
```bash
npx @webbywisp/create-ai-agent my-workspace
```
Scaffolds SOUL.md, USER.md, MEMORY.md, OPS.md, HEARTBEAT.md, and the memory/ directory. Drop in the Python above and you have a working agent.
Pre-written templates for all the context files are available for $19: AI Agent Workspace Kit.