I started building Timberlogs on January 6, 2026. As I write this, it's March 9 — just over two months later. In that time the repository has accumulated 838 commits, shipped 61 versioned releases, and grown from a blank TypeScript project into a full-stack logging SaaS with a Cloudflare Workers backend, a real-time dashboard, TypeScript and Python SDKs, a CLI, an AI log assistant, and a landing page.
I built most of it with Claude Code — Anthropic's AI coding tool — and I want to talk honestly about what that actually means.
What is Timberlogs?
Timberlogs is a structured logging service for modern applications. The pitch is simple: send logs from any language via a REST API or drop-in SDK, then search, filter, and trace them from a real-time dashboard at app.timberlogs.dev.
It sits in the same space as Datadog Logs, Logtail, and Axiom — but built for developers who want something that just works, without the enterprise pricing or configuration overhead. TypeScript-first, globally distributed via Cloudflare's edge network, up and running in under five minutes.
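To make the "drop-in SDK" pitch concrete, here is roughly what structured logging from application code could look like. This is an illustrative sketch only: the `Logger` class, the field names, and the batching behaviour are my assumptions, not the actual Timberlogs SDK surface.

```typescript
// Hypothetical sketch of a structured-logging client. The class name,
// payload shape, and batching model are illustrative, not the real SDK.
type LogLevel = "debug" | "info" | "warn" | "error";

interface LogEntry {
  level: LogLevel;
  message: string;
  timestamp: string; // ISO 8601
  metadata: Record<string, unknown>;
}

class Logger {
  private queue: LogEntry[] = [];

  constructor(private apiKey: string) {}

  // Build a structured entry and queue it for batched delivery.
  log(
    level: LogLevel,
    message: string,
    metadata: Record<string, unknown> = {}
  ): LogEntry {
    const entry: LogEntry = {
      level,
      message,
      timestamp: new Date().toISOString(),
      metadata,
    };
    this.queue.push(entry);
    return entry;
  }

  // In a real client, a flush() would POST the queued batch to the
  // ingest endpoint; here we just expose the queue size.
  pending(): number {
    return this.queue.length;
  }
}

const logger = new Logger("tl_demo_key");
logger.log("info", "user signed in", { userId: "u_123" });
logger.log("error", "payment failed", { orderId: "o_456" });
console.log(logger.pending()); // → 2
```

The point of a shape like this is that application code stays one line per log call, while the structure (level, timestamp, metadata) is what makes the search-and-filter dashboard possible.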
The technical stack ended up being: Ingest backend on Cloudflare Workers with D1, R2, and KV; a Vite + React dashboard with Clerk authentication and Convex for real-time data; an OpenRouter-powered AI assistant that queries your live logs in natural language; TypeScript and Python SDKs; a CLI; and an Astro landing page. None of that was planned upfront. It emerged from building.
How I Used AI to Build It
I want to be direct about something that tends to get lost in the discourse around AI coding tools: the AI doesn't build products. Engineers build products. The AI writes code faster.
That distinction sounds obvious, but it's worth making clearly because the narrative around tools like Claude Code often collapses into one of two extremes — either "AI can build anything" or "AI-generated code is garbage." Both are wrong, and both miss what's actually happening when an experienced engineer uses these tools seriously.
I'm a software engineer with over a decade of experience. I've shipped production systems, managed complexity, made architectural decisions, and debugged things that would make a junior developer cry. That background is not incidental to how well Claude Code worked for me — it is the reason it worked.
The engineer steers. The AI drives.
When I opened a session with Claude Code, I wasn't handing the wheel to a robot. I was describing a destination and a set of constraints to a very fast, very capable junior engineer who knows every library and can write boilerplate without complaining.
The value isn't that the AI writes better code than me. Sometimes it does, sometimes it doesn't. The value is that I can hold the architecture in my head, make the meaningful decisions, and offload the implementation work — the scaffolding, the edge cases, the repetitive CRUD, the CSS, the test setup — to something that can produce a working draft in seconds.
A concrete example: the AI log assistant inside Timberlogs. I knew what I wanted — a chat interface that could query real log data and respond in natural language. I knew which model provider I wanted to use, what the streaming architecture should look like, how the Convex persistence layer should work, and how the tool calling schema needed to be structured for accurate log retrieval. I made every one of those decisions. Claude Code wrote the implementation. We went from concept to working prototype in a single session.
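For readers unfamiliar with tool calling, a log-retrieval tool in the OpenRouter/OpenAI-style function-calling format looks something like this. The tool name, parameters, and descriptions here are hypothetical; the real Timberlogs assistant schema may be structured quite differently.

```typescript
// Hypothetical tool definition for a log-search tool in the
// OpenAI-style function-calling format used by OpenRouter.
// Name and parameters are illustrative, not the actual schema.
const searchLogsTool = {
  type: "function" as const,
  function: {
    name: "search_logs",
    description:
      "Search the project's recent logs by text, level, and time range.",
    parameters: {
      type: "object",
      properties: {
        query: {
          type: "string",
          description: "Full-text search over log messages",
        },
        level: {
          type: "string",
          enum: ["debug", "info", "warn", "error"],
        },
        since: {
          type: "string",
          description: "ISO 8601 lower bound on timestamp",
        },
        limit: {
          type: "number",
          description: "Maximum number of entries to return",
        },
      },
      required: ["query"],
    },
  },
};

console.log(searchLogsTool.function.name);
```

Getting a schema like this right is exactly the kind of decision the post is describing: the model can only retrieve logs accurately if the parameters map cleanly onto what the backend can actually filter on.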
What I actually did
Every feature in Timberlogs went through the same loop: I decided what to build and why. I thought through the architecture and any non-obvious constraints. I told Claude Code what I needed with enough precision that the output would be useful. I reviewed the result, caught issues, and steered corrections. I made judgment calls about what was good enough and what needed more work.
The commits are mine. The product decisions are mine. The taste — what makes the thing feel right, what gets cut, what gets polished — is mine. Claude Code wrote a lot of the code that implements those decisions.
What the AI is genuinely good at
Boilerplate and scaffolding. Setting up a new Cloudflare Worker with D1 bindings, writing a new Convex mutation, creating a TypeScript SDK with proper error handling and retry logic — things that are straightforward but tedious. The AI does these well and fast.
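The "retry logic" kind of boilerplate mentioned above can be sketched as a generic retry-with-exponential-backoff helper. This is a minimal sketch of the pattern, not the SDK's actual code.

```typescript
// Minimal retry-with-exponential-backoff helper: the kind of
// straightforward-but-tedious code the post describes delegating.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        // Exponential backoff: 100ms, 200ms, 400ms, ...
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}

// Demo: a call that fails twice, then succeeds on the third attempt.
(async () => {
  let calls = 0;
  const result = await withRetry(async () => {
    calls++;
    if (calls < 3) throw new Error("transient failure");
    return "ok";
  });
  console.log(result, calls); // prints "ok 3"
})();
```

Writing this by hand is not hard; it is just one more of a hundred small pieces of plumbing, which is why delegating it is such a clear win.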
Following established patterns. Once I had a codebase with consistent conventions, Claude Code was good at understanding those conventions and extending them. New components looked like existing components. New API endpoints followed the same shape as existing ones.
First drafts. Almost every file started as an AI-generated first draft that I then reviewed, modified, and approved. The quality of those drafts was high enough that many needed minimal changes. Some needed significant rework. None were shipped without review.
Refactoring. Just this week I had Claude Code split a 1,367-line component into seven focused files and custom hooks. It did it correctly, maintained all the existing behaviour, and passed typecheck on the first attempt. That would have taken me an afternoon to do manually.
What the AI is not good at
Long-range architectural thinking. The AI works session by session. It doesn't accumulate a picture of your codebase over months the way you do. Every significant architectural decision — how data flows through the system, what to build on Cloudflare vs. what lives in Convex, how to structure the multi-tenant data model — required me to think it through and give precise direction.
Knowing when to stop. Without guidance, the AI adds things. It wraps everything in abstractions. It adds error handling for scenarios that can't happen. I had to be explicit: keep it simple, don't add what wasn't asked for, trust the framework.
Taste. The AI can produce code that works. It can't tell you whether you should build the feature in the first place, whether the UX makes sense, or whether the abstraction is worth the complexity it introduces. That requires judgment that comes from experience.
What 838 Commits Actually Represents
A number like 838 commits in two months could be read as evidence that AI just generates infinite code. I'd push back on that.
Those commits represent 838 discrete decisions: this feature is ready, this fix is correct, this refactor is safe to ship. Each one went through a git diff that I reviewed. Each one has a message that explains what changed and why. The codebase has gone through 61 versioned releases because I was maintaining a real product with a real release process — not accumulating changes in a branch.
The velocity was high because I wasn't doing the typing. But the judgment was mine throughout.
Who This Workflow Works For
I want to be honest that this workflow is not equally effective for everyone.
If you are an experienced engineer, AI coding tools amplify what you already bring. You can move fast because you know what fast looks like. You can catch mistakes because you understand the domain. You can steer because you know where you're going.
If you are earlier in your career, the risks are different. It is possible to produce code that looks correct, passes typecheck, and ships to production — and still not understand what it does or why. That's a debt that compounds. AI tools used without enough grounding can slow learning rather than accelerate it.
Timberlogs exists because I had enough context to use the tool well. That context came from years of building things the slow way first.
The Honest Summary
Two months. 838 commits. A real, working SaaS product — live at timberlogs.dev.
Claude Code was a significant part of how I got there. So was knowing exactly what I wanted to build, having strong opinions about how it should work, reviewing everything it produced, and making every call that actually mattered.
The tool is genuinely powerful. But what it amplifies is the engineer using it; it is not a substitute for one.
Timberlogs is available at timberlogs.dev. If you're looking for simple, structured logging without the overhead, it's free to get started.