DEV Community

Hari Venkata Krishna Kotha

The Full Audit: What a 9-Project Microservices Platform Looks Like When 78% of the Code is AI-Generated

I spent 7 weeks building, then several more weeks auditing, documenting, and hardening DesiCorner - a production-grade Indian restaurant e-commerce platform spanning 9 projects (8 .NET services plus an Angular frontend) and 5 databases. Claude Code wrote 78% of the code. I wrote 9%. Auto-generated tooling (EF Core migrations, Angular CLI scaffolding, package configs) produced the remaining 13%.

I tracked everything. Every commit, every bug, every file. This is the full audit.

What I Built

DesiCorner is an Indian restaurant ordering platform. Not a tutorial project - a full-featured e-commerce system with authenticated and guest checkout, Stripe payments, an admin dashboard with analytics, product reviews with voting, coupon management, and delivery/pickup order types.

The tech stack:

Backend: ASP.NET Core 8 across 8 .NET projects - AuthServer (OpenIddict OAuth 2.0), API Gateway (YARP), ProductAPI, CartAPI, OrderAPI, PaymentAPI (Stripe), a shared Contracts library (41 DTOs across 9 subdomains), and a MessageBus abstraction layer (Redis caching, Azure Service Bus scaffolded).

Frontend: Angular 20 with standalone components, NgRx state management, OAuth 2.0 Authorization Code + PKCE flow, and Stripe Elements for PCI-compliant payment forms.

Infrastructure: 5 separate SQL Server databases (one per service), Redis for distributed caching/sessions/rate limiting, and a branch-per-feature Git workflow with 68 commits across 15 branches and 22 merged PRs.

The architecture:

Architecture

Every project has its own README with Mermaid diagrams documenting the actual API flows verified against source code. Each microservice gets its own database and responsibility boundary.

Authentication uses OAuth 2.0 Authorization Code + PKCE - the Angular SPA never touches a client secret:

OAuth 2.0 Authorization Code + PKCE

No client secret in the browser. No password sent to the token endpoint. The code_verifier proves the token request came from the same client that started the flow.
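That verifier/challenge handshake is small enough to sketch directly. A minimal TypeScript illustration (using Node's crypto module here for runnability; a real SPA would use the Web Crypto API through its OAuth client library):

```typescript
import { randomBytes, createHash } from "crypto";

// Base64url encoding without padding, as RFC 7636 requires
function base64url(buf: Buffer): string {
  return buf
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// Step 1: the client generates a random code_verifier before redirecting
const codeVerifier = base64url(randomBytes(32));

// Step 2: it sends SHA-256(code_verifier) as the code_challenge
// in the authorize request - the secret never leaves the client
const codeChallenge = base64url(
  createHash("sha256").update(codeVerifier).digest()
);

// Step 3: at token exchange, the client sends the raw code_verifier;
// the server hashes it and compares against the stored code_challenge
const serverSide = base64url(
  createHash("sha256").update(codeVerifier).digest()
);
console.log(serverSide === codeChallenge); // true: same client started the flow
```

Because only the hash travels in the front channel, an attacker who intercepts the authorization code still can't redeem it without the original verifier.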

The Numbers

Here's the part that matters. I audited the entire codebase commit-by-commit and produced a file-level attribution of who wrote what:

| Category | Me | Claude |
| --- | --- | --- |
| Project vision and concept | 100% | 0% |
| Architecture decisions | 70% | 30% |
| Technology selection | 100% | 0% |
| Backend model definitions (field choices) | 60% | 40% |
| Backend service/controller code | 10% | 90% |
| Angular scaffold and components | 5% | 95% |
| Configuration values (appsettings) | 100% | 0% |
| Bug identification | 90% | 10% |
| Bug resolution code | 40% | 60% |
| Security management | 100% | 0% |
| Git workflow (branching, PRs) | 100% | 0% |
| Testing and validation | 100% | 0% |
| Product images and assets | 100% | 0% |
| Documentation (READMEs, diagrams) | 30% | 70% |

By raw line count: Claude generated roughly 38,000 lines (78%), I wrote about 4,500 lines (9%), and auto-generated tooling produced roughly 6,000 lines (13%).

The attribution methodology: a single commit landing thousands of well-structured lines strongly suggests AI generation, while small, targeted 2-10 line fixes with debugging context suggest human authorship. The .claude/settings.local.json file first appeared on Dec 5, 2025, confirming Claude Code usage from that date onward; earlier attributions are inferred from these commit patterns.

Look at where the 100%-me rows cluster: vision, technology selection, configuration, security, git workflow, testing. Now look at where Claude dominates: service code, Angular components, documentation generation. The pattern is clear - I was the architect and Claude was the builder.

The Bugs That Proved the Point

Twelve bugs emerged during development. I identified eleven of them. Here are the three that taught me the most.

Bug 1: The JWT Remaster (November 12-13, 2025)

JWT tokens from the AuthServer were being rejected by ProductAPI when routed through the Gateway. Everything looked correct on the surface. It took two days to untangle three separate issues hiding behind the same 401 response.

Here's the token flow - every arrow was a potential failure point:

Token Flow

Audience mismatch. The AuthServer issued tokens with audience desicorner-api, but ProductAPI validated against DesiCorner.ProductAPI. Different strings, same intent, total failure. Fix: align JwtSettings:Audience across all services.

Signing key conflict. ProductAPI was doing manual symmetric key validation, but the AuthServer was using OpenIddict's ephemeral signing keys. They'd never match. Fix: switch ProductAPI from hardcoded key validation to auto-fetching JWKS from the AuthServer's discovery endpoint.

CORS trailing slash. The Gateway's CORS policy name was "Angular" in one place and "desicorner-angular" in another. URLs had inconsistent trailing slashes between services. Fix: standardize naming and URL formats.
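The audience mismatch is the easiest of the three to show in miniature. A hedged TypeScript sketch of the exact-string check a JWT middleware performs (the payload shape and values here are illustrative, not the real config):

```typescript
// The claims a validation middleware inspects; aud may be a string or array
interface JwtPayload {
  aud: string | string[];
  iss: string;
  exp: number;
}

// Audience validation is an exact string comparison - no normalization,
// no "close enough". Different strings with the same intent still fail.
function audienceMatches(payload: JwtPayload, expected: string): boolean {
  const auds = Array.isArray(payload.aud) ? payload.aud : [payload.aud];
  return auds.includes(expected);
}

const token: JwtPayload = {
  aud: "desicorner-api", // what the AuthServer issued
  iss: "https://auth.example",
  exp: 9999999999,
};

console.log(audienceMatches(token, "desicorner-api"));        // true
console.log(audienceMatches(token, "DesiCorner.ProductAPI")); // false -> 401
```

The middleware that fails this check typically returns a bare 401, which is why the audience bug was indistinguishable from the other two at the HTTP level.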

Three bugs, three different root causes, one symptom. I diagnosed all three through token validation logs and systematic elimination. Claude helped implement the JWKS auto-fetch after I identified what needed to change.

This is the kind of debugging where you can't just paste an error message into an AI and get an answer. The error message was the same for all three issues: 401 Unauthorized. The diagnosis required understanding how tokens flow across service boundaries, which configuration values matter at each hop, and the difference between OpenIddict's signing behavior and standard symmetric JWT validation.

Bug 2: Stripe Secret Key Exposure (December 5, 2025)

During the Stripe payment integration, I committed a live Stripe secret key to source control. I caught it within minutes, reverted the commit immediately, and re-committed with placeholder values.

The lesson isn't that I made the mistake - everyone has committed a secret at some point. The lesson is that security awareness during development is a human responsibility. You have to know what a secret key looks like, understand the implications of exposure, and react immediately. Yes, tools like GitGuardian and GitHub's push protection can catch these automatically - but the instinct to check before pushing, and the speed to react when something slips through, still matters.

Bug 3: The Admin Dashboard Cascade (December 18-23, 2025)

Every single admin dashboard API call returned 401 or 403. The first fix attempt on Dec 19 adjusted auth attributes - it didn't fully resolve the issue. The final fix on Dec 23 touched 23 files across 3 services because the root cause was actually four interrelated problems:

  1. Admin role claim wasn't properly included in JWT tokens from the AuthServer
  2. CartAPI was completely missing JWT validation configuration
  3. The Order model was missing an OrderType field, causing analytics queries to fail
  4. Delivery address fields were required but should be optional for pickup orders

I identified all four root causes through systematic debugging. Claude implemented the fixes after I mapped out what was broken and why. This is the kind of multi-service cascade failure where you need to understand how the entire system connects - not just the service throwing the error.
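The fourth root cause comes down to conditional validation: delivery fields should be required only when the order type demands them. A minimal sketch of that rule (field names are illustrative, not the actual Order model):

```typescript
type OrderType = "Delivery" | "Pickup";

interface OrderRequest {
  orderType: OrderType;
  deliveryAddress?: string; // optional at the type level...
  city?: string;
}

// ...and enforced only for the order type that actually needs them
function validateOrder(order: OrderRequest): string[] {
  const errors: string[] = [];
  if (order.orderType === "Delivery") {
    if (!order.deliveryAddress) {
      errors.push("deliveryAddress is required for delivery orders");
    }
    if (!order.city) {
      errors.push("city is required for delivery orders");
    }
  }
  return errors;
}

console.log(validateOrder({ orderType: "Pickup" }).length);   // 0 - no address needed
console.log(validateOrder({ orderType: "Delivery" }).length); // 2 - both fields missing
```

Marking the fields unconditionally required - the original bug - would reject every pickup order; making them unconditionally optional would accept delivery orders with nowhere to deliver.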

The FinTrack Contrast

To test the other end of the spectrum, I also had Claude Code build a completely separate project: a 5,597-line single-file HTML personal finance tracker. I provided product requirements and feature specs. Claude wrote all the code in about a week.

It ran. It looked right. But features had subtle issues I had to catch and send back for correction. The same pattern happened repeatedly on DesiCorner - AI-generated code that works on the surface but needs a human to validate the actual behavior against the intended requirements.

The difference between the two projects: I can defend every architectural decision in DesiCorner. I can explain why YARP instead of Ocelot, why OpenIddict instead of IdentityServer, why separate databases per microservice instead of a shared database. I can walk through every bug and explain how I traced the root cause.

For FinTrack, I can explain what it does and what the requirements were. But I can't defend the code decisions because I didn't make them. That's the difference between being an engineer and being a product manager who uses AI tools.

What I Learned

The skills that carried this project:

Architecture - deciding which services to build, how they communicate, which technologies fit, and where the boundaries should be. Claude could suggest options when asked. But evaluating tradeoffs against my specific requirements and committing to a direction - that was mine.

Debugging distributed systems - tracing failures across service boundaries, reading token validation logs, understanding how configuration values propagate through a microservices system. The JWT Remaster bug would have been trivial in a monolith. In a distributed system with an API Gateway, an AuthServer, and downstream services each with their own JWT validation config, it required understanding the full request lifecycle.

Security awareness - knowing what credentials look like, reacting to exposure, managing secrets across 5+ configuration files, understanding OAuth 2.0 flows well enough to spot misconfiguration.

Validation - not trusting that "it runs" means "it's correct." This applies equally to AI-generated code and to your own code, but the failure mode is different with AI. AI-generated code often fails in ways that look right at first glance.

Domain knowledge - knowing that an Indian restaurant platform needs dietary flags (vegetarian, vegan, gluten-free), spice levels, allergen tracking, and that pickup orders shouldn't require a delivery address. Claude couldn't infer these requirements. I had to specify them:

```csharp
// Product.cs - domain fields I specified, Claude implemented
public bool IsVegetarian { get; set; }
public bool IsVegan { get; set; }
public bool IsSpicy { get; set; }
public int SpiceLevel { get; set; }            // 0-5 heat scale
public string? Allergens { get; set; }         // nuts, dairy, gluten
public int PreparationTime { get; set; } = 15; // minutes, defaults to 15
```

These six fields represent domain knowledge that no AI would infer from "build an Indian restaurant platform." The SpiceLevel scale, the nullable Allergens as a comma-separated string, the PreparationTime default of 15 minutes - every field choice came from understanding the domain.
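Those fields only pay off if the frontend can act on them. A sketch of how the Angular side might consume the comma-separated Allergens string to filter the menu (the Product shape mirrors the model above; the function names are my own, not the repo's):

```typescript
interface Product {
  name: string;
  isVegan: boolean;
  spiceLevel: number;       // 0-5 heat scale
  allergens: string | null; // e.g. "nuts, dairy"
}

// Split the comma-separated allergen string into a normalized list
function allergenList(p: Product): string[] {
  return (p.allergens ?? "")
    .split(",")
    .map((a) => a.trim().toLowerCase())
    .filter(Boolean);
}

// Keep only products safe for a diner's allergen list and heat tolerance
function safeFor(products: Product[], avoid: string[], maxSpice: number): Product[] {
  return products.filter(
    (p) => p.spiceLevel <= maxSpice && !allergenList(p).some((a) => avoid.includes(a))
  );
}

const menu: Product[] = [
  { name: "Vindaloo", isVegan: false, spiceLevel: 5, allergens: null },
  { name: "Korma",    isVegan: false, spiceLevel: 1, allergens: "nuts, dairy" },
  { name: "Dal",      isVegan: true,  spiceLevel: 2, allergens: null },
];

console.log(safeFor(menu, ["nuts"], 3).map((p) => p.name)); // ["Dal"]
```

A comma-separated string is the pragmatic choice here over a normalized Allergen table: the list is short, display-only, and never queried relationally.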

The Full Report

I wrote a 2,000-line development report that documents every commit, every file-level attribution, every bug with its resolution, and the complete contribution breakdown. Full transparency on who wrote what and why.

The repo, including 10 per-project READMEs with Mermaid architecture diagrams:

github.com/HariVenkataKrishnaKotha/DesiCorner

The Takeaway

AI wrote 78% of this project's code. That percentage will probably go higher on my next project. The question isn't whether AI can generate code - it obviously can, at scale, and it's getting better.

The question is whether you can architect a system, debug it when it breaks across service boundaries, catch what the AI missed, and take ownership of decisions that have downstream consequences. Those skills aren't about typing speed. They're about engineering judgment.

The value isn't in the code anymore. It's in everything around the code.


What's been your experience with AI coding tools on non-trivial projects? I'm especially curious about debugging stories - the moments where AI-generated code failed in ways that required real engineering to fix. Drop a comment or find me on LinkedIn.
