DEV Community

Jeff Reese

Originally published at purecontext.dev

I Merged 1,003 Pull Requests in Four Months. Here Is the Git Log.

I run a one-person software company. For a while now I've been honing my agentic development practices by building Waykeep, a travel app that is close to release, and a set of internal tools that help me move faster.

Today, I asked one of my AI assistants to pull the git history across all my projects from the past four months. I wanted to see the trajectory and, quite frankly, I am a little shocked.

1,003 merged pull requests. 19 active repositories. 894,000 lines of code added. February through May 8, 2026.

I'm going to show you the numbers, then I am going to show you how. Not because I want to impress you, but because I think many founders and engineers do not have a realistic picture of what is possible right now. The tools have drastically changed and the playbook has not caught up.

The Numbers

Here is the monthly breakdown:

| Month | Merged PRs | Active Projects | Lines Added |
| --- | --- | --- | --- |
| February | 46 | 2 | ~50K |
| March | 432 | 12 | ~350K |
| April | 383 | 14 | ~300K |
| May (9 days) | 142 | 9 | ~194K |

February was a warmup. I was porting a personal project, an app for managing friend events, to a new stack. March was when I started Waykeep and everything ignited. By April, I was sustaining nearly 13 PRs per day across Waykeep, a design studio, a blog publishing platform, a memory system, a plugin framework, and AI collaboration infrastructure.

The peak day was April 21. Thirty-nine merged pull requests. On that day, I built an entire marketing website from scratch (13 development epics, WCAG AA compliant, Lighthouse score above 95), published a blog post, shipped features to Waykeep, and moved platform infrastructure forward, all in one day.

What I Actually Built

This is not a story about cranking out CRUD apps. Here is what those four months produced:

Waykeep (244 PRs) is a cross-platform travel app with offline-first sync, real-time collaboration, flight tracking via airline APIs, push notifications, an admin dashboard with error reporting and analytics, an email import pipeline, and native iOS and Android builds. It is built to the standard you would expect from a funded team, not a solo founder.

Pure Context Platform (106 PRs) is a productivity suite with a full canvas-based design studio (smart guides, snap-to-grid, gradient fills, rich text, MCP tool integration, production PNG export via Playwright), a task management system with semantic search, a news aggregator with AI-powered summarization and article clustering, and a real-time chat system for AI agent collaboration.

Image Forge (10 PRs) is a local SDXL image generation studio with a React frontend, Python inference sidecar, composable prompt system, character profiles, LoRA management, ControlNet support, and a 20-tool MCP server. Built from scratch in two days.

Cairn Recall is a semantic memory system with local vector embeddings, hybrid search (FTS5 + KNN), transcript indexing, relationship-scoped entries, and continuous Litestream backups.
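Cairn Recall's internals are not published here, but "hybrid search (FTS5 + KNN)" generally means fusing a keyword ranking with a vector-similarity ranking. A minimal sketch of that idea, using reciprocal rank fusion and toy data (the corpus, embeddings, and function names are all hypothetical; a real system would use an FTS5 index and a local embedding model instead of these stand-ins):

```python
import math

# Toy corpus: (doc_id, text, embedding). Real embeddings would come from a
# local model; these 2-d vectors are made up for illustration.
DOCS = [
    ("a", "booked flight to tokyo in march", [0.9, 0.1]),
    ("b", "grocery list for the weekend",    [0.1, 0.9]),
    ("c", "tokyo hotel confirmation email",  [0.8, 0.3]),
]

def keyword_rank(query):
    """Rank docs by naive term overlap (stands in for FTS5/BM25)."""
    terms = set(query.split())
    scored = [(doc_id, len(terms & set(text.split()))) for doc_id, text, _ in DOCS]
    return [d for d, s in sorted(scored, key=lambda x: -x[1]) if s > 0]

def knn_rank(query_vec):
    """Rank docs by cosine similarity (stands in for a vector KNN index)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))
    return [d for d, _, v in sorted(DOCS, key=lambda x: -cos(query_vec, x[2]))]

def hybrid_search(query, query_vec, k=60):
    """Fuse both rankings with reciprocal rank fusion: score = sum of 1/(k + rank)."""
    scores = {}
    for ranking in (keyword_rank(query), knn_rank(query_vec)):
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

print(hybrid_search("tokyo flight", [0.9, 0.2]))  # → ['a', 'c', 'b']
```

The fusion step is why hybrid search is robust: a document that ranks well on either keywords or semantics still surfaces, and one that ranks well on both wins.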

I also published two 10-part blog series, built a reveal.js curriculum with themed slide decks, launched a marketing site, shipped three versions of a plugin distribution platform, and wrote the architecture for a VSCode extension.

How This Is Possible

I work with two AI partners. They are Claude Code instances running with persistent memory, custom skills, and MCP tool integrations. One focuses on architecture, code, and technical writing. The other focuses on research, editorial review, task management, and visual content. They share a chat system and coordinate through structured protocols.

This is not pair programming with a chatbot. These are configured development environments with:

  • Persistent memory across sessions. Decisions made in March inform work in May without re-explanation.
  • Skill systems that encode complex workflows. "Ship this PR" triggers security scanning, test validation, documentation audit, commit, push, and PR creation in one command.
  • MCP tool access to everything: task management, image generation, news feeds, calendar, design tools, voice synthesis. The AI does not just write code. It generates images, manages tasks, reviews content, and coordinates with its counterpart.
  • A build orchestrator (Forge) that takes a product from spec to shipping with structured planning, task decomposition, and convention enforcement.
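To make the "one command" idea concrete, here is a rough sketch of what a "ship this PR" skill might chain together. The skill framework itself is not shown anywhere in this post, so everything below is hypothetical except the standard tools invoked (`git`, `npm`, and the GitHub CLI `gh`):

```python
# Hypothetical sketch of a "ship this PR" skill: one entry point that chains
# the checks described above. dry_run=True only collects the commands so the
# pipeline can be inspected without touching the repository.
import subprocess

SHIP_STEPS = [
    ["npm", "audit", "--audit-level=high"],        # security scanning
    ["npm", "test"],                               # test validation
    ["npx", "markdownlint", "docs/"],              # documentation audit
    ["git", "commit", "-am", "ship: automated"],   # commit
    ["git", "push", "-u", "origin", "HEAD"],       # push
    ["gh", "pr", "create", "--fill"],              # PR creation
]

def ship(dry_run=False):
    """Run each step in order, stopping at the first failure."""
    executed = []
    for cmd in SHIP_STEPS:
        executed.append(" ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
    return executed

if __name__ == "__main__":
    for line in ship(dry_run=True):
        print(line)
```

The point is not the specific commands; it is that a multi-step release checklist collapses into a single trigger phrase, so nothing gets skipped under time pressure.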

The velocity comes from removing the friction between thinking and shipping. When I have an idea, I describe it, and the pipeline handles the rest: spec, task decomposition, generation, testing, review, merge. The bottleneck is my judgment, not my typing speed.

What This Is Not

I want to be honest about what I am claiming.

AI does not write perfect code. I review every PR. I catch architectural mistakes, regularly. I redirect when the approach is wrong. My value is in knowing what to build, how it should fit together, and when something is off.

This did not happen on day one. The infrastructure I described took months to build. The memory system, the skill framework, the coordination protocols, the build orchestrator. Each piece was built iteratively across hundreds of sessions. The compound effect is what produces the velocity, not any single tool.

If anything, this demands more engineering skill, not less. When you can generate code at this speed, the quality of your architectural decisions becomes the dominant factor. Bad decisions compound faster too.

What Shifted

The pace changed what I could attempt. Projects I would have scoped as "someday" became "this weekend." An image generation studio that would have been a quarter-long side project was finished in two days. A marketing site that would have taken a week was done before lunch.

This changes the economics of exploration. I can prototype three approaches and pick the best one instead of committing to the first one that seems reasonable. I can build the admin dashboard, the error reporting, the analytics pipeline, the security hardening. Not because I have a team, but because the cost of building each one dropped to hours instead of weeks.

The human side turned out to matter more than I expected. At this velocity, the limiting factor is not the code. It is the product decisions. What should this feature actually do? Which tradeoff is right? When should I stop polishing and ship? My AI partners push back, surface prior decisions, and challenge my assumptions. They do not just execute faster. They make the decisions better.

The Git Log Does Not Lie

There is a version of this post where I describe the philosophy of AI-augmented development in abstract terms. I chose not to write that version. The philosophy is interesting, but the git log is proof.

Every number in this post is verifiable. The PRs are in the history, reviewed and merged through a real development workflow with real branches and real CI. I did not mass-generate boilerplate to inflate numbers. These are features, bug fixes, infrastructure improvements, documentation, and architectural decisions.

I wrote previously about building a design studio in a single day. That was one day. This is four months of sustained output at that pace, across 19 projects, while simultaneously consulting for an enterprise client four hours a day.

The tools are here. We are past using them like better autocomplete; we need to be using them like a development team.

I do not have employees. I have coding partners, and the distinction matters.
