DEV Community

Travis Wilson
We Deleted 5,600 Lines of Code with Claude (and Found 1 Bug)

I spent weeks building a provider registry system for my data pipeline platform. Sources, connection templates, database seeds, UUID mappings, capability checks, service arrays.

Then I deleted almost all of it.

194 files changed, 8,798 insertions(+), 14,411 deletions(-)

Net result: ~5,600 lines of code removed. One bug found after the full rewrite.

Here's what happened.


The Over-Engineered System I Built

I needed to manage providers (Google Cloud, AWS, Postgres) and their services (BigQuery, Firestore, DynamoDB). Seemed simple enough.

But I built this:

  • seeds/sources_seed.go → Source entities with services arrays
  • seeds/templates_seed.go → ConnectionTemplate entities with schemas
  • seeds/constants.go → Hardcoded UUIDs
  • frontend/components/connections/index.ts → UUID-to-component mapping
  • frontend/components/connections/*.tsx → Hardcoded selectedServices arrays
  • frontend/lib/services/google-cloud-capabilities.ts → OAuth scopes per service
  • Database (Firestore) → Sources and ConnectionTemplates collections

8+ files defining the same information in different ways.

Want to add a new provider? Touch 8 files. Want to add a service to an existing provider? 6 files. Want to understand how it all fits together? Good luck.

I told myself this was "flexible" and "extensible."


The Realization

Then I actually thought about it:

Nothing was dynamic. OAuth scopes are locked at the OAuth app level. You can't grant different scopes per connection - they're baked into the OAuth consent screen.

There was no use case for partial service access. "This Google Cloud connection has BigQuery but not Firestore" - when would that ever happen? A connection is just credentials. If those credentials can't access BigQuery, the API returns 401. Done.

I was maintaining complexity for flexibility I'd never use.

The "sophisticated" architecture was solving a problem that didn't exist.


The Replacement: 20 Lines

I had no users yet. No backward compatibility concerns. No data to migrate.

So instead of refactoring, I asked: What if I just deleted everything and replaced it with a single config file?

// frontend/lib/providers.ts
export const PROVIDERS = {
  'google-cloud': {
    name: 'Google Cloud Platform',
    auth: 'oauth2',
    services: ['bigquery', 'firestore', 'gcs', 'pubsub'],
  },
  'aws': {
    name: 'Amazon Web Services',
    auth: 'iam',
    services: ['dynamodb'],
  },
  'postgres': {
    name: 'PostgreSQL',
    auth: 'database',
    services: ['postgres'],
  },
} as const

export type ProviderId = keyof typeof PROVIDERS

That's it. ~20 lines replacing database tables, seeds, UUIDs, capability files, and registries.
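
Because the config is a `const` object, the type system does the work the old registries did. The helpers below are a sketch of what that enables; `isProviderId` and `servicesFor` are illustrative names I'm assuming, not functions from the post:

```typescript
// The static config from the post, reproduced so this sketch is self-contained.
export const PROVIDERS = {
  'google-cloud': {
    name: 'Google Cloud Platform',
    auth: 'oauth2',
    services: ['bigquery', 'firestore', 'gcs', 'pubsub'],
  },
  'aws': {
    name: 'Amazon Web Services',
    auth: 'iam',
    services: ['dynamodb'],
  },
  'postgres': {
    name: 'PostgreSQL',
    auth: 'database',
    services: ['postgres'],
  },
} as const

export type ProviderId = keyof typeof PROVIDERS

// Narrow an untrusted string (e.g. from an API request) to a ProviderId.
export function isProviderId(id: string): id is ProviderId {
  return id in PROVIDERS
}

// Look up a provider's services, with a runtime guard for unknown keys.
export function servicesFor(id: string): readonly string[] {
  if (!isProviderId(id)) throw new Error(`Unknown provider: ${id}`)
  return PROVIDERS[id].services
}
```

The type guard replaces what used to be a database lookup: an invalid provider ID fails at the boundary instead of producing a dangling UUID.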

The connection model went from this:

// Before: 6 fields, UUID lookups, redundant data
Connection = {
  id: 'abc123',
  sourceId: 'c7b3d8e9-5f2a-4b1c-9d6e-8a3b5c7d9e1f',  // UUID lookup
  templateId: 'template-google-cloud-oauth2',         // Another UUID
  services: ['bigquery', 'firestore'],                // Redundant
  connectionConfig: {
    selectedServices: ['bigquery', 'firestore'],      // Duplicate
    projectId: 'my-project',
  },
  credentials: {...}
}

To this:

// After: 4 fields, string ID, no redundancy
Connection = {
  id: 'abc123',
  providerId: 'google-cloud',           // Just the key from PROVIDERS
  config: { projectId: 'my-project' },  // Provider-specific config
  credentials: {...}
}
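
Written out as actual TypeScript, the simplified model is just this (a sketch: the post elides the credentials shape, so the `Record` types here are my assumption):

```typescript
// Simplified connection model. ProviderId mirrors keyof typeof PROVIDERS
// from the config file; the config/credentials shapes are assumed.
type ProviderId = 'google-cloud' | 'aws' | 'postgres'

interface Connection {
  id: string
  providerId: ProviderId               // just the key from PROVIDERS
  config: Record<string, unknown>      // provider-specific config
  credentials: Record<string, unknown> // shape elided in the post
}

const conn: Connection = {
  id: 'abc123',
  providerId: 'google-cloud',
  config: { projectId: 'my-project' },
  credentials: { token: 'redacted' },
}
```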

How Claude Made This Possible

This wasn't "let Claude write some code." This was AI-assisted architecture surgery.

1. Mapping the Blast Radius

I gave Claude the codebase context and asked it to find every file that referenced the old system. It identified:

  • All imports of the old types
  • All usages of the UUID constants
  • Frontend components using sourceId or templateId
  • Test files that would need updates
  • The order of operations to avoid breaking intermediate states
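
You can approximate that mapping step yourself. This is a hypothetical reference scan, not what Claude actually ran; the identifier names come from the post, the script does not:

```typescript
import { readdirSync, readFileSync, statSync } from 'node:fs'
import { join } from 'node:path'

// Identifiers from the old system that should no longer appear anywhere.
const OLD_IDENTIFIERS = ['sourceId', 'templateId', 'selectedServices']

// Which of the old identifiers appear in a chunk of source text?
export function matchOldIdentifiers(text: string): string[] {
  return OLD_IDENTIFIERS.filter((id) => text.includes(id))
}

// Walk a source tree and collect files that still reference the old system.
export function findReferences(dir: string): Map<string, string[]> {
  const hits = new Map<string, string[]>()
  for (const entry of readdirSync(dir)) {
    const path = join(dir, entry)
    if (statSync(path).isDirectory()) {
      for (const [file, ids] of findReferences(path)) hits.set(file, ids)
    } else if (/\.(ts|tsx|go)$/.test(entry)) {
      const ids = matchOldIdentifiers(readFileSync(path, 'utf8'))
      if (ids.length > 0) hits.set(path, ids)
    }
  }
  return hits
}
```

An empty map at the end of the refactor is the "done" signal: no file in the tree mentions the old system anymore.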

2. Systematic Execution

194 files is a lot to change by hand. More importantly, it's a lot to change correctly. Claude worked through them methodically:

Backend (Go):
- connections/model.go    → Remove Services, TemplateId; add ProviderId
- connections/service.go  → Update creation/validation logic
- connections/handler.go  → Update API request/response
- connections/dao.go      → Update Firestore queries
- router.go               → Remove /v1/sources/* routes
- Delete entire sources/ package (9 files)
- Delete entire connectiontemplate/ package (10 files)
- Delete seeds/sources_seed.go, templates_seed.go

Frontend (TypeScript):
- lib/providers.ts        → New static config (the 20 lines)
- hooks/use-source.ts     → Deleted
- hooks/use-connection-template.ts → Deleted
- services/source-service.ts → Deleted
- services/google-cloud-capabilities.ts → Deleted
- Update 40+ component files using the old types

3. Updating Tests in Lockstep

This is the key: we didn't delete tests, we updated them.

Every service change came with corresponding test updates. When we deleted the sources/ package, we also deleted its tests. When we simplified the connection model, we updated the connection tests.

The test suite stayed green throughout the refactor.


Why Only 1 Bug?

After changing 194 files, we found exactly one bug in end-to-end testing.

That's not luck. That's what happens when:

1. You have comprehensive test coverage.

Tests catch regressions immediately. When I changed the connection model, tests told me exactly which handlers and services needed updates.

2. You refactor with tests, not after.

Every change included its test updates in the same commit. Tests weren't an afterthought.

3. AI helps you be systematic.

Claude doesn't forget to update a file in some distant corner of the codebase. It doesn't get tired after file 80 and start making mistakes.

4. You have a clear architectural vision.

I wrote a detailed plan document before touching code. The target state was unambiguous: one config file, string-based provider IDs, no services arrays on connections.


What I Learned

Complexity is a choice

I built the complex system. Nobody forced me to create UUID-based lookups and database seeds for static data. I did that because it felt "proper."

Sometimes the proper solution is a 20-line config file.

AI is best for architecture, not just autocomplete

The value wasn't "Claude wrote code faster." The value was:

  • Claude helped identify all the tentacles of the old system
  • Claude maintained context across 194 files
  • Claude was systematic where I would have gotten tired

Well-tested code enables fearless deletion

I could mass-delete code because I trusted my tests. Every deletion was validated. No "I think this is safe" - either tests passed or they didn't.

No users = no excuses

Having no users yet meant I had no excuse to keep complexity around. No backward compatibility. No migration scripts. Just delete and move on.

If you're early stage and carrying technical debt, now is the cheapest time to fix it.


The New Developer Experience

Adding a new provider:

  1. Add entry to PROVIDERS config (1 line)
  2. Create connection form component
  3. Create service config components
  4. Create backend handlers

Adding a service to existing provider:

  1. Add to PROVIDERS[providerId].services array (1 line)
  2. Create service config components
  3. Create backend handlers

No seeds. No migrations. No UUIDs. No template entities.
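
For example, the "1 line" in each case is literally one entry in the config. Here's a hypothetical change adding an S3 service to the AWS provider; `'s3'` is my example, not a service the post adds:

```typescript
export const PROVIDERS = {
  'aws': {
    name: 'Amazon Web Services',
    auth: 'iam',
    services: ['dynamodb', 's3'], // was ['dynamodb']: the one-line change
  },
  // ...other providers unchanged
} as const
```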


Try It Yourself

If you're staring at a system that feels heavier than it needs to be:

  1. Ask what problem it's solving. Is that problem real or hypothetical?
  2. Check if anything is actually dynamic. If the "flexible" parts never flex, they're just complexity.
  3. If you have good tests, trust them. They'll catch your mistakes.
  4. Use AI to map the blast radius. It's better at finding all the references than you are.

Sometimes the answer is mass deletion.


Have you ever deleted a system you built because you realized it was over-engineered? What helped you make that call?
