TL;DR: I built an AI agent called Noa using Claude as my engineering partner. I'm not an engineer. The architecture is simple on purpose: two Claude instances (one thinks, one builds), a JSON file for memory, and a single markdown file that acts as the agent's brain. Everything here is optimized for "can I actually debug this at 1am?" rather than elegance.
I built a working AI agent. Not a tutorial project, not a hackathon demo. A system that does an actual job. The agent is called Noa, and I built it to apply for RevenueCat's "Agentic AI Developer & Growth Advocate" role.
And I'm not an engineer.
That part matters. I built Noa from Japan using Claude as my engineering partner, and every architecture decision reflects someone who picked clarity over cleverness.
## The Two-Claude Workflow
Noa uses two separate Claude instances with different jobs. Claude Desktop handles research, planning, and writing task contracts. These are structured specs that describe exactly what needs to be built. Claude Code receives those contracts and implements them.
Why split it? Because dumping research, planning, and implementation into one context window produces worse results. The architect Claude can spend its full context understanding a problem. The builder Claude gets a clean spec and writes better code because of it.
A task contract looks like this:
```markdown
## Task: Implement dev.to Article Publishing
### Objective: Create a skill module that publishes articles via the dev.to API
### Files to Create: src/skills/devto.ts, src/skills/devto.types.ts
### Success Criteria: Can create, update, and publish articles with proper error handling
```
Four lines of structured intent. No ambiguity about what "done" means.
The architect thinks. The builder ships.
## Project Architecture
The folder structure:
```
noa-agent/
├── src/
│   ├── skills/        # API integrations (GitHub, dev.to)
│   ├── pipelines/     # Workflow logic (content, growth, feedback)
│   ├── memory/        # Memory system for state and context
│   └── orchestrator/  # Task planning and scheduling
├── state/             # Persistent state (memory.json, activity-log.md)
├── config/            # Environment config
└── CLAUDE.md          # The agent's brain
```
CLAUDE.md sits at the top because it's the first thing Claude Code reads.
TypeScript was the only serious choice. An agent that publishes content, hits APIs, and manages its own state needs type safety. When createArticle expects a CreateArticleInput and not a random object, entire categories of bugs just vanish. The compiler catches mistakes before they reach production, which matters a lot when the person building this can't always spot a type error in a stack trace.
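As a small sketch of what that typed boundary buys you. The field names here are my guess at the shape; the real `CreateArticleInput` lives in `src/skills/devto.types.ts`:

```typescript
// Illustrative shape, not the project's actual type definition.
interface CreateArticleInput {
  title: string;
  body_markdown: string;
  published?: boolean;
}

// Stub function to show the typed boundary in action.
function describeArticle(input: CreateArticleInput): string {
  return `${input.title} (${input.published ? "published" : "draft"})`;
}

// Valid: compiles and runs.
describeArticle({ title: "Hello", body_markdown: "..." });

// Invalid: the compiler rejects this before it ever runs.
// describeArticle({ titel: "Hello" }); // error: 'titel' does not exist
```

The misspelled field becomes a red squiggle in the editor instead of an `undefined` at runtime, which is exactly the class of mistake a non-engineer won't spot in a stack trace.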
File-based memory over a database because simplicity wins. The agent's entire knowledge lives in one JSON file. No ORM, no migrations, no connection strings. When something goes wrong, I open state/memory.json in a text editor and see exactly what the agent knows. Try that with PostgreSQL.
CLAUDE.md as the brain because everything the agent needs to function sits in one place, in the one file Claude Code is guaranteed to read first.
## Memory System
Noa's entire brain is one JSON file. Open it and you can see everything the agent knows.
The first time something broke during development, I needed to understand what Noa actually had stored. I opened the JSON file and it was all right there. Every idea, every draft status, every logged interaction, just sitting in plain text. If I'd used a database, I'd probably still be figuring out how to write the right query. Being able to CMD+F through your agent's memory is underrated.
The schema from src/memory/schema.ts:
```typescript
export interface NoaMemory {
  lastUpdated: string;
  content: ContentItem[];
  growthExperiments: GrowthExperiment[];
  communityActivity: CommunityLog;
  productFeedback: FeedbackItem[];
  weeklyMetrics: WeeklyMetrics;
  ideaBacklog: Idea[];
}
```
Six categories of memory. Content tracking, growth experiments, community interactions, product feedback, weekly metrics, and an idea backlog. Each one typed and queryable.
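"Queryable" here just means plain array methods over parsed JSON. A hypothetical check on the idea backlog, assuming an `Idea` type with a `status` field (the field and its values are my illustration, not Noa's actual schema):

```typescript
// Illustrative shape; the real Idea type may differ.
interface Idea {
  title: string;
  status: "raw" | "drafting" | "published";
}

// Filtering the backlog is a one-liner once the shape is typed.
// No query language, no SQL, just .filter() over in-memory data.
export function unusedIdeas(backlog: Idea[]): Idea[] {
  return backlog.filter((idea) => idea.status === "raw");
}
```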
Every state transition is visible in a file anyone can open.
The load/save mechanism is deliberately simple:
```typescript
export async function loadMemory(): Promise<NoaMemory> {
  try {
    const raw = await readFile(MEMORY_PATH, "utf-8");
    return JSON.parse(raw) as NoaMemory;
  } catch {
    return createDefaultMemory(); // If anything fails, start fresh
  }
}
```
Read a JSON file. Parse it. If anything fails, start with defaults. The save function just stringifies with pretty-print and writes back. No database driver, no connection pooling.
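The save half isn't shown above, so here's a sketch of what "stringifies with pretty-print and writes back" implies. The `NoaMemory` stand-in and the path constant are illustrative, not the project's actual code:

```typescript
import { writeFile } from "node:fs/promises";

// Illustrative; the real project defines MEMORY_PATH elsewhere.
const MEMORY_PATH = "state/memory.json";

// Minimal stand-in for the full NoaMemory interface.
interface NoaMemory {
  lastUpdated: string;
}

// Pretty-print (2-space indent) keeps the file diffable and CMD+F-able.
export function serializeMemory(memory: NoaMemory): string {
  return JSON.stringify(
    { ...memory, lastUpdated: new Date().toISOString() },
    null,
    2,
  );
}

// Write the whole state back in one shot. No driver, no pooling.
export async function saveMemory(memory: NoaMemory): Promise<void> {
  await writeFile(MEMORY_PATH, serializeMemory(memory), "utf-8");
}
```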
Debuggability isn't something you bolt on later. It's a decision you make on day one.
## API Skills Layer
The skills layer wraps external APIs into typed functions. The dev.to integration is the one that will eventually publish this article:
```typescript
export async function createArticle(input: CreateArticleInput): Promise<DevtoArticle> {
  const res = await devtoFetch('/articles', {
    method: 'POST',
    body: JSON.stringify({ article: { ...input, published: input.published ?? false } }),
  });
  return res.json() as Promise<DevtoArticle>;
}
```
Typed input, typed output, defaults to draft mode. The devtoFetch wrapper handles auth and error formatting so individual functions stay focused.
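The wrapper itself isn't shown in this article, so here's a sketch of what "handles auth and error formatting" could look like. The `api-key` header is dev.to's documented auth mechanism; everything else here is my assumption about the implementation:

```typescript
const DEVTO_API = "https://dev.to/api";

// Pure helper: one consistent error string for every failed call.
export function formatDevtoError(
  method: string,
  path: string,
  status: number,
  body: string,
): string {
  return `dev.to ${method} ${path} failed (${status}): ${body}`;
}

// Centralizes auth and error handling so each skill function
// only worries about its own endpoint and payload.
export async function devtoFetch(path: string, init: RequestInit = {}): Promise<Response> {
  const res = await fetch(`${DEVTO_API}${path}`, {
    ...init,
    headers: {
      "api-key": process.env.DEVTO_API_KEY ?? "",
      "Content-Type": "application/json",
      ...(init.headers as Record<string, string> | undefined),
    },
  });
  if (!res.ok) {
    // Surface status AND body, so failures are debuggable at 1am.
    throw new Error(formatDevtoError(init.method ?? "GET", path, res.status, await res.text()));
  }
  return res;
}
```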
But what the code doesn't show is what I learned building it. Dev.to returns a 404 when you fetch your own unpublished draft through the public article endpoint. You need /articles/me/{id} instead. That's a 30-minute debugging session distilled into one conditional:
```typescript
export async function getArticleById(
  articleId: number,
  published = true,
): Promise<DevtoArticle> {
  const endpoint = published
    ? `/articles/${articleId}`
    : `/articles/me/${articleId}`;
  const res = await devtoFetch(endpoint);
  return res.json() as Promise<DevtoArticle>;
}
```
Also: dev.to rate-limits you if you create two articles with identical titles too quickly. No error message. Just a silent failure. You only find this stuff by building.
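A cheap guard against that silent failure: check your own memory before calling the API at all. This sketch assumes the `content` array tracks titles and creation timestamps; the helper name and time window are mine:

```typescript
// Illustrative shape for items in memory.json's content array.
interface ContentItem {
  title: string;
  createdAt: string; // ISO timestamp
}

// dev.to silently rejects rapid-fire articles with identical titles,
// so refuse to publish a title we've already used recently.
export function isDuplicateTitle(
  content: ContentItem[],
  title: string,
  withinMs = 24 * 60 * 60 * 1000, // look back 24 hours (arbitrary choice)
): boolean {
  const cutoff = Date.now() - withinMs;
  return content.some(
    (item) => item.title === title && Date.parse(item.createdAt) > cutoff,
  );
}
```

Checking local state first turns an invisible API failure into an explicit, loggable decision.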
## CLAUDE.md as Agent Brain
This is the most interesting piece. CLAUDE.md isn't documentation. It's the agent's operating system. Claude Code reads it first when entering a project, so everything in this file shapes every action the agent takes.
Ever tried to explain to an AI what your project is about by prompting it every single time? CLAUDE.md fixes that. It does three things.
Identity. Who Noa is and how Noa communicates. Specific, testable voice rules:
```markdown
**Do this / Not this:**
- "RevenueCat handles the subscription infrastructure so you can stop
  debugging receipt validation at 2am" — not "RevenueCat makes it easy
  to manage subscriptions"
- "Here's how to wire up RevenueCat in a Claude-built app. Takes about
  10 minutes." — not "In this tutorial, we'll walk through..."
```
Operating rules. Seven constraints that keep the agent from doing something stupid:
1. One task at a time. Finish before starting the next.
2. Log every significant action to state/activity-log.md.
3. Never hardcode API keys. Always use environment variables.
4. Before creating content, check memory.json for context.
5. When unsure between two approaches, pick the one that ships faster.
6. Every piece of content must include a working code example.
7. Be transparent about being an AI agent. Never pretend to be human.
Rule 5 is the one that matters most for a non-engineer. When you can't evaluate two technical approaches on their merits, "ship faster and iterate" beats "spend three days researching the 'right' way."
Task router. CLAUDE.md points to specific docs based on task type. Content pipeline docs for writing, growth docs for experiments, community docs for engagement. The agent reads the routing table and goes.
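I haven't reproduced the actual routing table, but the idea is a short lookup section in CLAUDE.md along these lines (the doc paths are illustrative):

```markdown
## Task Router
- Writing an article or tutorial → read docs/content-pipeline.md first
- Running a growth experiment → read docs/growth.md first
- Community reply or engagement → read docs/community.md first
```

Keeping the routing in the brain file means the agent never has to guess which instructions apply to the task in front of it.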
## What a Non-Engineer Perspective Surfaces
Building Noa without an engineering background exposed friction that experienced developers have learned to ignore.
You know that feeling when npm install finishes successfully but nothing actually runs? That gap is filled with assumptions no tutorial covers.
I spent 20 minutes staring at "Cannot find module" before realizing my .env file was sitting one directory too high. No error pointed me there. The app just didn't work, and every search result assumed I already understood how dotenv resolves file paths. Eventually I moved the file, and everything lit up. That kind of thing is maddening when you're learning.
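For anyone hitting the same wall: `dotenv.config()` resolves `.env` relative to the directory you launched `node` from, not relative to your source file. A minimal illustration of that default resolution, simplified from what the library actually does:

```typescript
import { resolve } from "node:path";

// dotenv's default is effectively resolve(process.cwd(), ".env").
// Run `node src/index.js` from the repo root and it works;
// run it from anywhere else and the file is silently "missing".
export function defaultEnvPath(cwd: string): string {
  return resolve(cwd, ".env");
}
```

Once you know the rule, the fix is obvious: keep `.env` at the project root and launch from there, or pass an explicit `path` option to `dotenv.config()`.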
But this perspective is a feature. Every friction point I hit is one that the next wave of AI-assisted builders will hit too. The non-traditional developer building their first app with Claude as a coding partner will run into the same dotenv confusion, the same "why is this undefined?" moments.
Noa's architecture reflects this. File-based memory because I needed to see what was happening. Typed interfaces because I needed the compiler to catch mistakes I couldn't spot.
And the builders who will need RevenueCat in the next three years won't all have CS degrees. Some will be building with agents. Documenting what that process actually looks like, the real friction and not the polished tutorial version, is more useful than pretending the rough edges don't exist.
## What Comes Next
Agentic AI is expanding who builds software. Noa is one example of what that looks like in practice.
Next up: "What Your CLAUDE.md Actually Needs" (the file that shapes every decision your agent makes) and "Why RevenueCat's Bet on Agentic AI Developers Is Right" (the market shift most developer tools companies are ignoring).
Code samples from this series live as gists at github.com/noa-agent.