Most AI agents are just chatbot wrappers. Send a message, get a response. That is not an agent.
A real agent can read files, write code, run commands, and fix its own mistakes. Here is how.
## The Tool-Calling Loop
- User sends a message
- LLM responds with a tool call (not text)
- You execute the tool and send the result back
- LLM continues reasoning
- Repeat until done
```javascript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// One tool: run a shell command. The model decides when to call it.
const tools = [{
  name: "run_command",
  description: "Execute a shell command",
  input_schema: {
    type: "object",
    properties: { command: { type: "string" } },
    required: ["command"]
  }
}];
```
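The same JSON Schema shape extends to any capability you want to hand the model. As an illustration, here is a hypothetical `read_file` tool definition (the name and parameters are my own, not part of the SDK):

```javascript
// A second tool definition following the same JSON Schema shape.
const readFileTool = {
  name: "read_file",
  description: "Read a file from disk and return its contents",
  input_schema: {
    type: "object",
    properties: { path: { type: "string" } },
    required: ["path"]
  }
};
// Register it alongside run_command and dispatch on block.name in the loop.
```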
```javascript
import { execSync } from "child_process";

async function agent(task) {
  const messages = [{ role: "user", content: task }];
  while (true) {
    const resp = await client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 4096,
      tools,
      messages
    });

    // No tool call means the model is done -- return its final text.
    if (resp.stop_reason !== "tool_use") {
      const text = resp.content.find((b) => b.type === "text");
      return text ? text.text : "";
    }

    // Record the assistant turn once, then answer every tool call in it.
    messages.push({ role: "assistant", content: resp.content });
    const results = [];
    for (const block of resp.content) {
      if (block.type !== "tool_use") continue;
      let output;
      try {
        output = execSync(block.input.command, { encoding: "utf-8" });
      } catch (err) {
        // Feed the error back so the model can see it and self-correct.
        output = `Error: ${err.message}`;
      }
      results.push({ type: "tool_result", tool_use_id: block.id, content: output });
    }
    messages.push({ role: "user", content: results });
  }
}

const result = await agent("List all JS files and count lines");
console.log(result);
```
This roughly 40-line agent can read files, run commands, chain steps, and self-correct: because errors are returned as tool results instead of crashing the loop, the model sees its own failures and can retry.
## Adding Memory
```javascript
import { readFileSync, writeFileSync } from "fs";

class Memory {
  constructor(path = "./memory.json") {
    this.path = path;
    // Load prior state if the file exists; start fresh otherwise.
    try { this.data = JSON.parse(readFileSync(path, "utf-8")); }
    catch { this.data = { decisions: [] }; }
  }
  remember(text) {
    this.data.decisions.push(text);
    writeFileSync(this.path, JSON.stringify(this.data, null, 2));
  }
}
```
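For the agent to actually use what it remembered, you can fold past decisions into the prompt on each run, e.g. via the `system` parameter of `messages.create`. A minimal sketch (the `asSystemPrompt` helper is my own, not from any library):

```javascript
// Turn remembered decisions into a system-prompt preamble.
// `decisions` is the array the Memory class above persists to memory.json.
function asSystemPrompt(decisions) {
  if (decisions.length === 0) return "You have no prior context.";
  return "Relevant past decisions:\n" +
    decisions.map((d, i) => `${i + 1}. ${d}`).join("\n");
}

// Usage: pass it as `system` alongside tools and messages:
// client.messages.create({ model, system: asSystemPrompt(memory.data.decisions), ... })
```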
## Get the Full Kit
I packaged this into a production-ready framework with 6 tools, streaming, and 3 example agents.
What are you building with AI agents? Drop a comment.