Last quarter I convinced my team to let me spend two weeks doing nothing but deleting code. No new features. No bug fixes. Just deletion.
The results: build time dropped 35%. Test suite went from 14 minutes to 8. Bug reports in the following month fell by half. And three CVEs in unused dependencies disappeared because the dependencies disappeared.
Here's exactly how I did it, what I deleted, and how I convinced my manager it was worth two weeks of "zero feature output."
The Pitch That Worked
I didn't say "let me clean up the codebase." That's how you get one day, grudgingly.
I said: "We have 3 unpatched CVEs in dependencies used only by dead code. Our build is 35% slower than it needs to be. And every new hire spends their first week confused by systems we don't use anymore. I need two weeks to eliminate all of it."
Security risk + measurable cost + onboarding pain. Three things managers care about. "Clean code" is not one of them.
Days 1–3: The Audit
Before deleting anything, I needed to know what was actually dead. Intuition isn't good enough — I once "knew" a module was unused and deleted it. It was the payment reconciliation system that ran once a month. That was a bad week.
Finding Dead Code (Systematically)
Step 1: Static analysis
```shell
# TypeScript: find unused exports
npx ts-prune | grep -v '(used in module)' > unused-exports.txt

# Check for unused dependencies
npx depcheck > unused-deps.txt

# Find files with zero imports. Crude: it matches on filename, so expect
# false positives for entry points and dynamically imported modules —
# treat "ORPHAN" as a lead to investigate, not a verdict.
find src -name '*.ts' | while read -r f; do
  basename=$(basename "$f" .ts)
  if ! grep -r "from.*$basename" src --include='*.ts' -q && \
     ! grep -r "import.*$basename" src --include='*.ts' -q; then
    echo "ORPHAN: $f"
  fi
done
```
Step 2: Runtime verification
Static analysis misses dynamically imported modules and config-referenced code. So I added logging:
```typescript
// Temporary: track module usage in production
const MODULE_USAGE = new Map<string, number>();

export function trackModuleUsage(moduleName: string) {
  MODULE_USAGE.set(moduleName, (MODULE_USAGE.get(moduleName) || 0) + 1);
}

// Added this to every suspect module's entry point,
// ran it for one full week in production, and
// exported the results via an admin endpoint.
```
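The admin endpoint itself is framework-specific and isn't shown; here is a minimal sketch of the export side. The `usageReport` name and the sorting are my additions, and the tracker is repeated so the snippet stands alone:

```typescript
// Same tracker as above, repeated so this snippet is self-contained.
const MODULE_USAGE = new Map<string, number>();

export function trackModuleUsage(moduleName: string): void {
  MODULE_USAGE.set(moduleName, (MODULE_USAGE.get(moduleName) || 0) + 1);
}

// What the admin endpoint serves: a plain JSON-serializable object,
// sorted by hit count. Modules that never appear after a week are
// the zero-hit suspects you're looking for.
export function usageReport(): Record<string, number> {
  return Object.fromEntries(
    [...MODULE_USAGE.entries()].sort((a, b) => b[1] - a[1]),
  );
}
```

Any HTTP framework can serve `JSON.stringify(usageReport())` behind an admin-only route.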
After one week: 14 modules showed zero hits. Three of them were entire feature areas — an old admin dashboard, a deprecated CSV export system, and an authentication method we'd migrated away from two years ago.
Step 3: Git archeology
```shell
# When was this file last meaningfully changed?
git log --oneline -5 -- src/legacy-admin/
# Answer: 18 months ago. And that change was updating
# a dependency version, not actual feature work.
```
If nobody's touched it in a year, and runtime telemetry shows zero usage, it's dead.
Days 4–8: The Cuts
Cut 1: The Deprecated Auth System (8,200 lines)
Two years ago we migrated from session-based auth to JWT. The old system was still in the codebase — fully functional, fully tested, completely unused.
Why it hadn't been deleted: Nobody wanted to be the one to break auth. The old code was a security blanket. "What if we need to roll back?"
Why I deleted it: We'd been on JWT for two years. The session-based system referenced an old Redis cluster we were paying $340/month for. And it had two unpatched CVEs because nobody was maintaining it.
The delete was one PR: 8,200 lines removed, 12 test files removed, 3 dependencies removed, Redis cluster decommissioned.
Merged on Tuesday. Nothing broke. $340/month saved.
Cut 2: The "Smart" Abstraction Layer (4,100 lines)
Someone in 2022 built a "universal data access layer" that could theoretically work with PostgreSQL, MongoDB, and DynamoDB. In practice, we only used PostgreSQL. The abstraction added:
- A `DataProvider` interface with 47 methods
- A `QueryBuilder` that was harder to use than raw SQL
- A `ConnectionManager` that wrapped pg's `Pool` with... nothing useful
- 2,100 lines of tests testing the abstraction, not the actual queries
The pattern I see constantly: an abstraction built for a future that never came. Three database engines, but we only ever used one.
I replaced the entire layer with direct pg calls wrapped in a thin repository pattern. Same functionality. 900 lines instead of 4,100.
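For the curious, a thin repository in that style might look like the following sketch. The table, the `UserRow` shape, and the injected query runner are illustrative assumptions, not the actual code; injecting the runner just keeps the example testable without a live PostgreSQL instance (in production it would be `pool.query.bind(pool)` from pg):

```typescript
// A query runner is anything with pg's query(text, params) shape.
type QueryRunner = (sql: string, params: unknown[]) => Promise<{ rows: any[] }>;

interface UserRow {
  id: number;
  name: string;
}

// One repository per aggregate, exposing only the queries we actually run.
// No 47-method interface, no query builder: just SQL.
class UserRepository {
  constructor(private readonly query: QueryRunner) {}

  async findById(id: number): Promise<UserRow | null> {
    const result = await this.query(
      'SELECT id, name FROM users WHERE id = $1',
      [id],
    );
    return (result.rows[0] as UserRow) ?? null;
  }
}
```

The design choice is the point: when the day comes that you really do need a second database engine, you add the abstraction then, with real requirements in hand.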
Cut 3: Feature-Flagged Code That Was Permanently Off (6,800 lines)
We had 23 feature flags. Seven of them had been false in production for over a year. The code behind those flags was accumulating maintenance cost without providing any value.
```typescript
// This was in production for 14 months:
if (featureFlags.isEnabled('new-checkout-flow')) {
  // 1,200 lines of a checkout flow we decided not to ship.
  // Still being compiled, still being tested, still causing
  // merge conflicts when someone touched the checkout module.
}
```
I deleted the code behind all seven permanently-off flags. Kept the flag system, removed the dead branches.
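Finding those candidates can be mechanical if your flag system records when each flag last changed. A hedged sketch, assuming a `FlagRecord` shape that your own flag store may or may not expose:

```typescript
interface FlagRecord {
  name: string;
  enabled: boolean;
  lastToggled: Date; // when the flag's value last changed
}

// Flags that have been off longer than maxAgeDays are deletion candidates:
// delete the guarded branch, keep the flag system itself.
function staleOffFlags(
  flags: FlagRecord[],
  maxAgeDays: number,
  now: Date = new Date(),
): string[] {
  const cutoffMs = maxAgeDays * 24 * 60 * 60 * 1000;
  return flags
    .filter(f => !f.enabled && now.getTime() - f.lastToggled.getTime() > cutoffMs)
    .map(f => f.name);
}
```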
Cut 4: Commented-Out Code and TODO Graveyards (2,400 lines)
Every codebase has them. Code that someone commented out "just in case." TODOs from 2023. Debug logging that was never removed.
```typescript
// TODO: optimize this query (added 2023-03-14)
// TODO: handle edge case when user has no email (added 2022-11-08)
// TODO: refactor this entire module (added 2022-06-01)

// function oldPricingCalculation(user) {
//   // 200 lines of commented-out code from the old pricing model
//   // "keeping this in case we need to reference it"
//   // (we never did)
// }
```
Git. Has. History. Delete it.
Cut 5: Tests That Tested Nothing (3,500 lines)
The most controversial cut. Some tests looked like they were providing value but weren't:
```typescript
describe('UserService', () => {
  it('should create a user', () => {
    const user = new User({ name: 'Test' });
    expect(user).toBeDefined(); // tests the constructor, not your code
    expect(user.name).toBe('Test'); // tests object assignment, not logic
  });

  it('should handle errors', async () => {
    // This test mocks everything, then verifies the mocks were called.
    // It tests that your test setup works, not that your code works.
    const mockDb = { save: jest.fn().mockResolvedValue({ id: 1 }) };
    const service = new UserService(mockDb);
    await service.create({ name: 'Test' });
    expect(mockDb.save).toHaveBeenCalledWith({ name: 'Test' });
    // Congrats, you tested that jest.fn() works.
  });
});
```
I replaced many of these with integration tests that actually catch bugs — tests that hit a real database, test real error scenarios, and verify actual behavior. Fewer tests, but each one earns its keep.
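For contrast, here is the kind of unit test that does earn its keep: it asserts on logic the service owns (translating a database failure into a domain error) rather than on mock plumbing. Every name below is illustrative, not from the actual codebase:

```typescript
// Logic worth testing: mapping pg's unique-violation code to a domain error.
class DuplicateEmailError extends Error {
  constructor(email: string) {
    super(`email already registered: ${email}`);
  }
}

interface UserDb {
  save(user: { email: string }): Promise<{ id: number }>;
}

class UserService {
  constructor(private readonly db: UserDb) {}

  async create(user: { email: string }): Promise<{ id: number }> {
    try {
      return await this.db.save(user);
    } catch (e: any) {
      // 23505 is PostgreSQL's unique_violation error code.
      if (e?.code === '23505') throw new DuplicateEmailError(user.email);
      throw e;
    }
  }
}
```

A test for this stubs the database to fail the way PostgreSQL actually fails, then asserts the service's translation: behavior a caller genuinely depends on.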
Days 9–10: The Verification
After all deletions:
- Full test suite: all passing (the tests I removed weren't catching anything anyway)
- Build time: 14 min → 8.5 min (35% faster)
- Dependencies: 127 → 98 (29 removed)
- Bundle size: down 22%
- Known CVEs: 3 → 0 (all in deleted dependencies)
I deployed on a Friday. (Yes, on purpose. If something broke, I wanted to know immediately while I still remembered every change.)
Nothing broke.
How to Start Your Own Deletion Sprint
You don't need two weeks. Start with an afternoon:
1. Run `depcheck` or equivalent. Delete unused dependencies. This is the safest, highest-impact first step.
2. Search for feature flags. Any flag that's been off for 60+ days: delete the code behind it.
3. Find the oldest files. `git log --diff-filter=M --format='%ai' -- <file> | head -1` shows the last modification date; if the last meaningful edit was 12+ months ago, investigate.
4. Remove commented-out code. All of it. Today. Git has the history.
5. Audit your test suite. If a test has never failed in its entire existence and doesn't test complex logic, it might be testing nothing.
The Mindset
Every line of code is a liability. Not an asset — a liability.
It must be read by the next person. It must be maintained when dependencies update. It must be compiled. It must be tested. It can contain bugs. It increases the surface area for security vulnerabilities.
The best code is no code. The second best code is code you just deleted.
What's the biggest deletion win you've ever had? Stories welcome in the comments.