We are currently living through the greatest inflation of software in history.
With the AI tools we have available in 2026, a Junior Developer can...
I decided to refactor my code: 5,327 lines in main.py, additional scripts of 344 and 665 lines, and shaders of 7, 358, 226, and 378 lines. I feel like I haven't slept in a week. Seriously, I just tried using __init__.py and I don't even understand its purpose, other than burning my patience...?! Refactoring is an art. And apparently, judging by the way I structure things, it's safe to say I'm a bad architect. No. As Paolo Veronese said, "I'm an artist, I see it that way." And I'll say, "I'm a vibe coder, and I don't need architectural skills, I just need it to work." Maybe that's bad, but at least it gives me the impetus to create something bigger (I hope).
@embernoglow Vibe coding builds prototypes. Deletion builds cathedrals.
The arc:
You're living the post exactly and great to see you finally did prune your codebase. BTW your sdf repo is just gold :) Love to see more open source contributions like this.
Thank you!
5,327 lines in main.py? That is not a script, that is a novel. Respect.
I love the term "Vibe Coder". Honestly, getting it to work is the hard part. Architecture is just what we do later when we are tired of scrolling up and down 5,000 lines to find a variable.
Don't let __init__.py break you. Python packaging confuses people with 10 years of experience. Go get some sleep.
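For anyone else stuck on it: __init__.py just marks a directory as an importable package, and it can curate the package's public surface by re-exporting names. A minimal sketch that builds a throwaway package on disk to show the mechanism (all module and function names here are invented):

```python
# Sketch of what __init__.py does: we create a tiny package in a
# temp dir (names are made up) and show that __init__.py turns a
# plain directory into a package that re-exports its API.
import sys
import tempfile
from pathlib import Path

pkg_root = Path(tempfile.mkdtemp())
pkg = pkg_root / "mypackage"
pkg.mkdir()

# A submodule holding the real code.
(pkg / "renderer.py").write_text("def render():\n    return 'rendered'\n")

# __init__.py: marks the directory as a package and defines its surface,
# so callers write `from mypackage import render`, not the full path.
(pkg / "__init__.py").write_text(
    "from mypackage.renderer import render\n__all__ = ['render']\n"
)

sys.path.insert(0, str(pkg_root))
import mypackage

print(mypackage.render())  # -> rendered
```

That is really all it is: a marker file plus an optional place to flatten your import paths.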
Thanks!
This is not a 2026 skill; writing as little code as possible has always been the goal of programming.
Libraries and frameworks should be a considered solution in the application, not a starting point.
With AI the libraries and frameworks could become a thing of the past because it can produce custom code faster than setting up a framework.
In the AI experiments I'm doing I'm using a router for a front controller, the base of most web frameworks, to check how far I can push AI.
A router can come with one or more ways to identify the routes; config, a builder pattern, attributes. But an application will have a single way of router identification to make it predictable.
So you don't need the other options, which means you can remove abstractions that are made to make the other options possible.
Because the base isn't a framework anymore, the application doesn't force you to use the opinion of the framework if it has one.
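To make the router point concrete: once you commit to a single way of identifying routes, a front controller can be almost nothing. A hedged sketch, not any particular framework's API (the route table, decorator, and handlers are all invented for illustration):

```python
# Minimal front controller: one fixed way to register routes (a dict),
# so none of the abstractions needed to also support config files,
# builder patterns, or attribute scanning.
routes = {}

def route(path):
    """Register a handler for an exact path."""
    def register(handler):
        routes[path] = handler
        return handler
    return register

@route("/hello")
def hello():
    return "Hello, world"

def dispatch(path):
    """Look up and invoke the handler, or fall back to a 404."""
    handler = routes.get(path)
    if handler is None:
        return "404 Not Found"
    return handler()

print(dispatch("/hello"))    # -> Hello, world
print(dispatch("/missing"))  # -> 404 Not Found
```

Everything a framework would add on top of this exists to support the registration styles you just decided not to use.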
The question is not whether you can delete lines; line count is the worst metric you can use to measure code quality.
The question is how much code are you willing to maintain.
You nailed it with your last sentence. How much code are we willing to maintain is the only metric that really matters.
Your point about AI replacing frameworks is incredibly interesting. We used to accept framework bloat because we needed the development speed. Now that AI can write a custom router just as fast, we do not need to import all that extra baggage anymore.
The challenge now is just making sure the AI does not invent its own bloated abstractions when we ask it to build those custom solutions. Keeping it minimal is definitely a timeless skill.
You can instruct AI to write the simplest code, and the output always has to be reviewed.
The death of frameworks with AI is that they are opinionated, and everyone has their own opinion.
Like you mentioned we accepted the framework opinion, because it brought us speed.
Now we can start with the libraries and custom code, and let AI create the glue code in the same time that is needed to setup a framework.
That is why I took the router component as a base to experiment with AI. Does it reliably alter the router? Does it do weird stuff, like using FFI code for the route matching? Does it create the base for a maintainable application? Does it write tests and documentation that require only minor changes? And so on.
I'm late starting with the experiments but the tools we have today are much easier and diverse, and we also understand the pros and cons of AI use better. Most of the time it pays off not to dive in from the start.
The maintenance cost point is the one that doesn't get enough attention. We benchmarked AI-generated code recently and found that 65-75% of functions had security vulnerabilities — so it's not just that AI creates more code to maintain, it creates more risk to maintain. Every line you don't delete is a line you're implicitly agreeing to secure, and most teams aren't doing that audit. The "code janitor" framing is spot on. @the_nortern_dev
That 65-75% stat is terrifying, but honestly not surprising.
I absolutely love how you phrased this: "Every line you don't delete is a line you're implicitly agreeing to secure." That should be printed on every CTO's wall.
It reframes the "Code Janitor" from someone who just cleans up messes to someone who is actively reducing the attack surface. In 2026, deleting code is arguably the most effective security patch you can apply.
Thanks for adding that data point to the discussion.
Great minds think alike! 😄 I actually published an article on the EXACT same topic yesterday — same title and everything!
Your take is more philosophical and really captures the 'inflation of software' idea beautifully. That line about junior developers generating more code in an afternoon than seniors used to write in a month? Chef's kiss. 👨🍳
My version took a more beginner-friendly, practical approach with before/after code examples and a 'Code Delete Challenge' for readers. Seems like we both recognized this is THE skill for 2026!
Curious — what's the one piece of code YOU'VE deleted that made the biggest difference in your career? Would love to hear your story!
It is funny how these ideas surface across the industry at the exact same time. It just proves how pressing this issue actually is right now. I will definitely check out your practical approach, having concrete code examples is always a great addition.
To answer your question: the biggest difference for me was ripping out a massive, over-engineered state management library in a recent project and replacing it with a simple, standard data flow.
Deleting that entire abstraction layer improved the actual performance of the app more than any new feature I had added that year. It was the moment it really clicked for me that complex architecture is often just a symptom of not fully understanding the core problem.
Great minds indeed. 🙂
Absolutely well said! When I started my career, I used to think that the more complex the architecture, the better developer I am. But the reality is that the best code is the one that's easiest to read and understand.
You're absolutely right about removing abstraction layers improving performance. I've also observed that many times we unknowingly slow down our applications by over-engineering.
"The simplest way to solve a problem is always the best" - this is the lesson I've learned too. Thanks for sharing the code examples, and yes, truly - great minds think alike!
Unlearning the desire to build complex architecture is definitely a rite of passage for every developer. It sounds like we have both learned that lesson the hard way over the years.
Thanks again for a great discussion. I look forward to crossing paths in the comments again.
Absolutely agree—sometimes the simplest solutions are the hardest lessons to learn. I really enjoyed the exchange as well. Looking forward to more great discussions in the comments!
Honestly, I don’t really like the vibe of "vibe coding". It’s great for scaffolding and getting something off the ground fast. But once you start leaning on it too heavily, it almost feels like the AI senses the dependency and starts churning out slop.
That is a perfect description. "Vibe coding" feels like borrowing time from the future at a predatory interest rate.
I have noticed that exact degradation too. Once the context gets complex, the AI starts guessing. If you do not have the deep knowledge to audit and delete that "slop" immediately, you end up with a codebase that is technically working but impossible to maintain.
i ran the same refactor task through five different models recently and most of them added code i didn't ask for. one restructured the whole file. another added a bunch of stuff that looked reasonable but wasn't in the prompt. the "code inflation" problem isn't just humans writing too much, it's AI tools silently expanding scope on every prompt.
the deletion skill applies differently now too. it used to be "delete the feature nobody uses." now it's "delete the three helper functions the AI added that you don't need but look reasonable enough that you almost kept them." the second one is harder to catch because the code actually works.
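a contrived example of that second kind, since it's the one that slips through review: each helper below looks reasonable on its own, but all three are one-line wrappers the call site can absorb.

```python
# What an AI diff often adds: plausible-looking one-line helpers
# (this example is invented, not from any real model output).
def is_empty(items):
    return len(items) == 0

def get_first(items):
    return items[0]

def safe_upper(s):
    return s.upper() if s else s

# What should survive review: the call site, with the helpers inlined.
def first_name_upper(names):
    if not names:            # was: is_empty(names)
        return None
    return names[0].upper()  # was: safe_upper(get_first(names))

print(first_name_upper(["ada", "grace"]))  # -> ADA
```

both versions work and both pass tests, which is exactly why the deletion takes discipline.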
This is a fantastic observation. The silent scope expansion is exactly what makes reviewing AI-generated code so exhausting right now.
When the code is broken, it is easy to reject. But when it works and looks perfectly reasonable, it takes a lot of discipline to delete those three extra helper functions instead of just leaving them in.
You nailed the difference perfectly. We have gone from deleting unused features to actively defending the codebase against helpful but completely unnecessary additions.
This hits hard. Been building a data pipeline for the past few months and the biggest productivity unlock wasn't adding anything — it was ripping out a layer of abstraction I added "for flexibility" that nobody ever used.
The AI code generation angle makes this even more acute. When a junior can generate 500 lines in 20 minutes, the senior's job shifts from writing to triage. You're not the author anymore, you're the editor. And good editors kill darlings.
The 2% feature / 50% support tickets stat is brutal and real. I've seen the same pattern with API endpoints — the rarely-used ones are almost always the ones that cause incidents because nobody maintained them.
That point about rarely used API endpoints causing the most incidents is a hard truth. We always focus on our main flows, but the real technical debt is usually hiding in those ghost features we added just for flexibility.
Ripping out that unused abstraction layer probably did more for the stability of your pipeline than any new feature could. Good editors really do kill their darlings.
'ghost features' is exactly right. and the cost of a ghost feature isn't just the code — it's the cognitive overhead every time you touch something nearby. you have to remember it exists, consider whether your change might affect it, and verify it still works. that tax compounds across every future change.
the abstraction layer I ripped out was the same: it had never been used for its intended purpose, but it sat in the mental model of anyone reading the code, silently demanding to be accounted for. removing it didn't just reduce LOC, it removed a question that every future reader had to answer.
This is an excellent consideration to bring forward this year. The ability to simplify and create lean code that does more with less signifies a deeper understanding of the codebase and should be rewarded as such.
Spot on, Julien. I think the challenge for engineering leaders in 2026 is exactly that: figuring out how to reward the "negative space" in a project.
It’s easy to measure a feature launch, but it's much harder to measure the value of a bug that never happened because the code was simplified. We need to move away from "lines of code" as a metric and start looking at "reduced complexity" as the true sign of seniority.
Have you seen any teams successfully implementing metrics or cultures that actually incentivize this kind of subtraction?
In one of my previous teams, it would be taken as a positive to simplify/remove code whenever possible. However implemented as an actual metric no, it would be interesting to explore that further.
Making it a formal KPI is definitely risky. Goodhart's law kicks in fast and people might start deleting safety checks just to hit a number.
We just highlighted negative lines of code during sprint reviews. It was enough to signal that cleanup mattered without creating perverse incentives.
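One lightweight way to surface that number without making it a formal KPI: tally the net line delta from `git log --numstat` output. A sketch of the parser; the sample data at the bottom is invented, but you can pipe real log output into the same function.

```python
def net_loc(numstat_output):
    """Sum (added - deleted) lines across `git log --numstat` text.

    Each data line looks like: "<added>\t<deleted>\t<path>".
    Binary files report "-" for both counts and are skipped;
    commit headers and blank lines have no tabs and are skipped too.
    """
    net = 0
    for line in numstat_output.splitlines():
        parts = line.split("\t")
        if len(parts) != 3:
            continue  # header or blank line
        added, deleted, _path = parts
        if added == "-" or deleted == "-":
            continue  # binary file
        net += int(added) - int(deleted)
    return net

# Invented sample: a small addition plus a big cleanup commit.
sample = "12\t0\tapp/router.py\n5\t305\tapp/legacy_state.py\n"
print(net_loc(sample))  # -> -288
```

A negative number on the sprint slide says more than any cleanup ticket.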
Good point! Goodhart's law indeed!
Reduced complexity is our team's main idea. That's why I even chose to drop TS in favor of JSDoc: it keeps type safety and TS compatibility while avoiding an extra build step. But my main direction is dependency minimalism, in any language. That mindset is handy even in Rust, not just JS.
You're right, overengineering is a slippery slope that eventually leads to technical debt.
I understand what you mean about deleting code being a valuable skill, but I still wouldn't call it the most valuable one. Sometimes you need to add code sometimes tons of it, not to show off, but because it improves the product, performance, functionality, or security.
For example, last year I reviewed a friend's portfolio website. It was simple, clean, and had nothing "unnecessary" in it. But I quickly noticed it was vulnerable to all sorts of attacks under the surface, like XSS for example. I left detailed comments on the repository and he ended up spending two additional days adding safeguards and writing quite a lot of additional code lines to properly secure it.
My point is this: the most valuable skill isn't deleting code, it's still writing code and I think it will always be like that, but in the most sensible, thoughtful, and responsible way possible.
You make a very fair point, and that is a great example. Security is definitely one of those areas where adding lines of code is a non-negotiable asset.
I think we are actually talking about the same core skill: judgment. My focus on deletion is mostly a reaction to the massive wave of low-value, AI-generated bloat we are seeing right now. But you are absolutely right that being a responsible developer means knowing when to add the necessary safeguards, even if it makes the codebase bigger.
The goal isn't just less code, it is the right code. Thank you for bringing that perspective.
Absolutely, couldn't agree more. Great article!
Thank you Giorgi!
This resonates a lot with me.
As someone building AI-powered and full-stack products, I’ve realized that shipping fast is easy in 2026 — especially with AI — but maintaining clarity is the real engineering skill. Every abstraction, every dependency, every “future-proof” config adds cognitive load to the system.
In one of my recent projects, removing an unnecessary state library and simplifying the data flow improved performance more than any “new feature” I added. The code became easier to reason about — and that’s real productivity.
Writing code is creation.
Deleting code is judgment.
And judgment is what separates developers from engineers.
"Writing code is creation. Deleting code is judgment." That is a brilliant way to frame it. I might have to quote you on that in the future.
State management is the absolute perfect example. We always reach for the heavy libraries on day one, convinced the app will be massive. Ripping that global state out later and realizing standard data flow works fine is the best feeling.
Keystrokes are cheap now. Judgment is the actual bottleneck.
That really means a lot — feel free to quote it 😄
You’re spot on: premature architecture is just overengineering in disguise. AI made keystrokes cheap, but judgment, restraint, and knowing when not to abstract — that’s the real senior skill.
Consider the quote officially stolen.
Fighting the urge to build for scale on day one is definitely the hardest habit to break. Thanks for a great back-and-forth. See you in the next thread.
Awesome observations. Look at some of the AI augmented PRs and diffs. Who will ever correctly review them? (Please don’t answer AI).
From what I can tell, AI loves to add “helper” functions everywhere, often 1 line! Complex functions to validate data patterns never seen on Earth. As you said, features nobody asked for or even understood.
I promise my answer is not AI. 😂
You are completely right about the helper functions. The models are trained to be incredibly paranoid, which results in them writing defensive code for edge cases that will never happen in reality. Those one-line validation functions for impossible data patterns are the perfect example of software inflation.
Reviewing a massive AI-generated PR full of those things is a nightmare. The only correct way to review a 500-line diff full of unasked-for features is to simply reject it and ask the model for the bare minimum.
‘xactly
"The most valuable skill in 2026 isn't writing code. It is deleting it."
Haaahahaa :DDD ... Always has been!
(BTW. Just couldn't agree more.)
Haha, you are completely right. It has always been the best skill to have. Now it is just becoming impossible to ignore. Glad you enjoyed the article.
The delete key is the most underrated tool in any engineer's toolkit.
I've noticed the developers I trust most have a lower "lines of code per feature" ratio than junior engineers. Not because they're slower — they're faster — but because they understand that every line you add is a liability. You have to read it, test it, maintain it, and explain it to the next person.
The counterintuitive thing: AI is making this worse in the short term. People are generating 500 lines of code where 50 would do, because generating is fast. The skill is knowing what to delete after generation. That's still hard.
It is the best indicator of true seniority.
When AI can generate 500 lines in seconds, the illusion of productivity is very strong. It is easy to confuse volume with actual value. But as you mentioned, someone still has to read, test, and maintain every single line.
Editing that raw output down to the 50 lines that actually matter is the real engineering work today. Great observation.
Spot on for 2026!
Pruning AI-generated cruft exposes the real system. (I once refactored a 10k-line LLM service to 2k—same perf, zero tech debt)
The new interview: "Delete 50% of this codebase. Explain why" :)
Codebase value concentrates as code shrinks! 🚀
I bet the maintainability of that service went through the roof.
I absolutely love that interview concept. We spend so much time testing candidates on their ability to write new algorithms, but almost zero time testing their ability to read and simplify existing ones.
If a candidate can look at a module and confidently say "we can delete this half because it is just legacy bloat," that is an instant hire for me.
True! I am starting to think, from now on I am gonna keep this as an evaluation parameter for candidates while hiring.
This is exactly the mental shift that separates seniors from juniors in 2026. Everyone can generate code now — the differentiator is knowing which generated code to keep and which to throw away.
I've seen teams drown in AI-generated boilerplate because they treated every suggestion as gospel. The real skill is asking "does this actually solve my problem, or did the model just throw its most common pattern at me?"
The "Code Janitor" framing is perfect. Deletion isn't just cleanup — it's curation. In a world of infinite generation, the engineer who can say "no, we don't need this" is more valuable than the one who can write it.
Your digital hoarding analogy hit hard. My "read later" list and my codebase have a lot in common — both are full of things I thought I'd need someday but never touched.
Great piece!
The parallel between a "read later" list and a codebase is spot on. We are just hoarding text in different formats.
You hit the nail on the head regarding AI boilerplate. The models are designed to be helpful, which usually means they over-deliver. They will hand you a massive factory pattern when all you needed was a simple function.
Being the person who can look at 50 lines of perfectly generated code and just say "no thanks" is absolutely the new senior skill.
I think the key is controlling that thin boundary between the layers where you make architectural decisions and where you hand things off to generative AI.
It’s really a system of abstraction levels. At the business level, you could say “build me a program that makes a million dollars” and delegate everything below. Or you can consciously design each layer yourself and use AI as a tool that translates your thinking into working, understandable components that you can combine into a product.
Right now, I’ve chosen to use AI to generate mostly isolated modules — things I could have written myself and fully understand. Then I treat them as reusable building blocks.
I was already working this way before AI, but now it feels like the optimal approach. Fixing, debugging, or vibe-checking a module is limited to its local context, so cognitive load doesn’t explode, and I don’t feel tempted to outsource all decisions to the model.
Owning the boundaries is exactly the right approach.
If you let the model architect the entire system, you are basically just a passenger. Generating isolated modules that you actually understand is the absolute sweet spot right now. It keeps the cognitive load entirely local.
The moment you ask the AI to start wiring all those modules together is when the technical debt really starts compounding. Treating them as reusable, human-verified building blocks is the only way to stay sane.
Agree, but I’d argue the real leverage is earlier in the process: writing code where there’s less code to delete in the first place :)
AI absolutely makes it easier to generate glue code and even bypass frameworks. But generated code still has a lifecycle. So the question is: where does the complexity live?
I've seen many systems push repetitive problems into well-designed libraries; that keeps the codebase smaller and more disposable, even if AI is writing the "glue".
I see this a lot in document pipelines: teams start with AI-generated parsers, but over time the maintenance surface grows. Moving that into a dedicated SDK shrinks the codebase and makes refactors safer.
So maybe:
- AI for glue and experimentation
- libraries for concentrated complexity
- your code for actual differentiation
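That split can be shown in miniature: let a battle-tested library hold the concentrated complexity (here, CSV parsing via the stdlib), keep the glue thin and disposable, and reserve your own code for the differentiating logic. Field names and sample data below are invented.

```python
import csv
import io

# Library layer: concentrated complexity (quoting, escaping, edge
# cases of CSV) lives in the stdlib, not in hand-rolled parsing.
def load_rows(text):
    return list(csv.DictReader(io.StringIO(text)))

# Differentiation layer: the only part worth owning — domain logic.
def high_value_customers(rows, threshold=100):
    return [r["name"] for r in rows if int(r["spend"]) >= threshold]

# Glue layer: thin, obvious, cheap to delete and regenerate.
data = "name,spend\nada,250\ngrace,80\n"
print(high_value_customers(load_rows(data)))  # -> ['ada']
```

If the glue goes stale, you throw it away; the library and the domain logic both survive.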
That is a really strong way to frame it. Pushing the concentrated complexity down into dedicated libraries is exactly how you survive the AI glue code.
Your breakdown at the end is a perfect mental model for modern architecture. If we only use our own code for actual differentiation, the maintenance surface stays incredibly small.
It is much easier to delete and rewrite a thin layer of AI generated glue than it is to untangle a custom parser that has grown out of control. Thank you for adding this perspective.
The line “Code is not an asset. It is a liability.” really stands out. In data engineering and ML systems, unused features and defensive abstractions often create more instability than value. With AI lowering the cost of generation, the leverage shifts to curation. Senior engineers don’t just build — they reduce surface area.
Reducing surface area is such a great way to put it. You are completely right about ML and data engineering systems. When you have complex pipelines, every unused abstraction or defensive feature is just another place for things to silently fail.
As the cost of generating code drops to zero, curation is definitely where the real engineering happens. Thank you for reading and sharing that perspective.
YES. I wrote about this same topic this week because it's so underappreciated.
The hardest part isn't finding code to delete — it's convincing the team. Nobody wants to be responsible for deleting something that "might be needed someday." I've found that framing it as risk reduction works better than "code cleanup":
"This unused auth module has 3 unpatched CVEs in its dependencies. Deleting it eliminates the attack surface and saves us from maintaining code nobody uses."
Security risk + maintenance cost > "it's messy." Managers respond to the first, not the second.
My biggest deletion win: 8,200 lines of a deprecated auth system. Saved $340/month in Redis costs for a cluster that only the dead code was using.
8,200 lines and killing a useless Redis cluster is the absolute dream.
You are completely right about the framing. "Code cleanup" sounds like a low-priority chore to management, but "attack surface reduction" sounds like an urgent necessity.
Tying deletion directly to infrastructure costs and CVEs is a brilliant way to get buy-in from the people holding the budget.
I have been in this field for eight years, and I still invest the same effort in development as before.
AI helps with code and repetitive tasks, but the ideas, planning, logic, design, debugging, and execution are still mine. Tools can generate code, but experience shapes the structure and quality of an application.
AI is helpful, but it cannot replace real experience...
This is exactly how it should be. The total effort does not go away, it just shifts from typing to thinking.
We should absolutely use the AI that is available for help with the repetitive parts, but the architecture and the logic have to come from a human. Eight years of experience is exactly what lets you look at generated code and know immediately if it actually fits the design or not.
Tools generate lines. Experience generates systems.
Hi, this article reminds me of an employee who drove me crazy. He liked to experiment and find the best solution for a problem, and I appreciate that about him, but he usually kept the code he discarded commented out on the side. When I had to do some maintenance and load his files and had to find the code buried in comments, that was exasperating.
I asked him countless times why he did such a thing, and he usually responded that he was afraid he might lose that code and that it might be helpful in the near future.
That is a classic example of digital hoarding. I completely understand your frustration.
The fear of losing code is so common, but that is exactly what version control is for. Git remembers everything so our files do not have to. Leaving graveyard code commented out just transfers the cognitive load to the next person who has to read the file.
It sounds like he really needed to learn to trust the commit history.
I’ve been sitting here thinking about your point on 'digital hoarding,' and it honestly hit home. It’s so easy to fall into the trap of thinking that a busy GitHub contribution graph equals progress. But you’re right, when we can generate a thousand lines of code with a single prompt, 'more' usually just means 'more to fix later.'
I love the shift in perspective from being an 'Architect' to being a 'Janitor.' There’s a certain kind of maturity in realizing that a clean, empty room is more functional than one filled with 'just in case' furniture. Deleting that 2% feature that caused 50% of the headaches is a massive win for sanity, not just for the codebase.
Thanks for sharing these thoughts. It’s a great reminder that our value isn't in how much we can build, but in how much we can simplify. I’m definitely going to look at my next PR through the lens of 'what can I remove' instead of 'what can I add.'
"Just in case furniture" is a brilliant way to describe it.
You brought up a great point about the GitHub contribution graph. We are so conditioned to chase those green squares as a metric of productivity. Unlearning that habit and realizing that a massive pull request is often a warning sign, rather than a badge of honor, is a very difficult but necessary shift.
Going into your next PR looking for things to remove is exactly the right mindset. A deleted line of code never causes a production bug.
Really insightful take! In a world where AI can churn out endless lines of code, the real skill isn’t producing more — it’s knowing what can safely be removed to simplify, improve maintainability, and reduce long‑term cost. The idea that deletion can be as valuable as creation really flips the traditional productivity mindset. 💡
Flipping the traditional productivity mindset is exactly the goal here. When lines of code are practically free to generate, we have to stop measuring our worth by how much we write. Thank you for reading and summarizing the core message so well.
This resonates a lot! Especially the idea that code is a liability.
I’m not a traditional developer, but building products with AI has made this painfully clear. The easiest part today is creating. The hard part is living with what you created.
I’ve started noticing the same pattern in my own projects: features built “just in case”, abstractions that felt smart at the time, and complexity that quietly accumulates.
Deleting feels scary... but it’s usually the moment a product becomes clearer.
Curious: do you think AI will push more teams toward smaller, simpler codebases, or the opposite?
That is a great observation. It is fascinating that this pain point is just as obvious even if you do not come from a traditional developer background.
To answer your question: in the short term, AI is definitely pushing teams toward massive, bloated codebases because generation is so cheap and fast. We are already seeing this happen. However, in the long term, I believe the successful teams will be forced to swing back toward smaller, strictly constrained codebases. The teams that just keep accumulating AI-generated complexity will eventually drown in their own maintenance costs.
Deleting is scary at first, but the clarity you get is always worth it.
I’m very aligned with the “subtraction” instinct: deleting dead code, shrinking surface area, and resisting speculative abstraction usually pays off fast. But I disagree with the framing that “code is not an asset” and that the value of writing it “approaches zero” because AI can generate it. Code can absolutely be a compounding asset when it encodes durable decisions (domain models, stable contracts, security boundaries, billing/compliance workflows), and AI makes it easier to produce text but not automatically easier to produce correct systems under real constraints (edge cases, failure modes, migrations, security, operational resilience). Also, simplification isn’t always deletion. Sometimes you simplify by adding LOC in the form of explicit invariants, tests, observability, and clearer boundaries that reduce risk and cognitive load even if the diff is “bigger.” I’d boil the senior skill down to judgment, not janitorialism: knowing what deserves to exist, what must be hardened, what should be removed, and what can be avoided entirely.
Thank you for the thoughtful pushback. You make a very compelling point.
I completely agree with your distinction. When I refer to code as a liability, I am mostly talking about commodity code, like boilerplate and standard logic that AI can generate in seconds.
You are absolutely right that code encoding durable decisions, like domain models and security boundaries, remains a compounding asset. Your point that simplification sometimes requires adding lines of code, such as explicit tests and better observability, is a crucial nuance.
"Judgment, not janitorialism" is a brilliant way to summarize the overarching skill. Janitorial work is just the most visible symptom of exercising that judgment right now.
Great perspective. I really appreciate you adding this nuance.
The "digital hoarding" analogy really hit home. I spent a month building an elaborate plugin system for a personal project, convinced I would need it "someday." Eventually I ripped the whole thing out and replaced it with a 40-line script that did exactly what I needed. The relief was immediate — not just in the codebase, but mentally.
Your point about code being a liability rather than an asset is especially true now that AI can regenerate boilerplate in seconds. The real skill is knowing what NOT to build in the first place. I have started asking myself "can I solve this with configuration instead of code?" before writing anything, and it is surprising how often the answer is yes.
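The "configuration instead of code" question can be made concrete. A minimal sketch, with entirely hypothetical names (notification routing is just an illustration, not from the post): branching logic collapses into a data table, and adding a new case becomes editing data rather than writing another `elif`.

```python
# Hypothetical example: notification routing that might otherwise grow
# one if/elif branch per event type. Here the mapping lives in data.
NOTIFICATION_ROUTES = {
    "signup": {"channel": "email", "template": "welcome"},
    "payment_failed": {"channel": "sms", "template": "billing_alert"},
    "weekly_digest": {"channel": "email", "template": "digest"},
}

def route_notification(event: str) -> dict:
    """Look up how to deliver an event; unknown events fall back to a
    generic email rather than raising."""
    return NOTIFICATION_ROUTES.get(event, {"channel": "email", "template": "generic"})
```

Supporting a new event type is now a one-line data change, and the whole routing policy is visible in one place.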
Replacing a month of architecture with a 40 line script is painful but also the best feeling ever.
You nailed the psychological part. Code isn't just bytes, it is mental load. Every abstraction is just one more thing you have to remember later.
I love the configuration vs code rule. I am definitely stealing that idea.
This hit close to home. I built 80+ automation scripts in two days for a side project pipeline. Felt incredibly productive... until I realized half of them overlap or do things I could consolidate into 10 well-designed ones.
The irony: I automated the creation of automation scripts. Peak code hoarding.
Now I'm going through the painful but necessary process of deleting the ones that were 'just in case.' Your framing of code as liability rather than asset is exactly the mental shift that makes deletion feel like progress instead of loss.
One thing I'd add: the hardest code to delete is the code that works but isn't needed. Broken code is easy to kill. Working code that serves no real purpose? That's the hoarder's trap.
Automating the creation of automation scripts is the ultimate developer trap. I love that you shared that.
Your last point is the absolute truth. Broken code is just a bug, so deleting it is easy. But perfectly working code feels like an asset, even when it is just dead weight.
It takes actual discipline to throw away something that functions perfectly just because it solves a problem you no longer have.
This resonates deeply. The shift from "code quantity" to "code curation" is the real Senior leap.
But here's the harder truth: deletion requires more architectural understanding than creation. AI can generate that "future-proof" abstraction in seconds. Only experience knows it's premature optimization.
The crisis isn't just maintenance cost—it's cognitive load. Every "might need later" feature is a decision tree your brain keeps open. That's why your mental model cleared after deletion.
One addition: robustness matters more now. C#/.NET's strong typing catches AI hallucinations at compile time. Weakly-typed generated code? Production roulette at 3 AM.
The Janitor role is undervalued because we don't measure prevented complexity. Maybe that's the metric shift we need.
"Production roulette at 3 AM" is the best description of weakly-typed AI code I have heard. That is exactly why strict typing (like TypeScript for me) is completely non-negotiable now. The AI is simply too confident when it guesses.
You also nailed the part about the Janitor role. We have dashboards for lines added and PRs merged, but zero metrics for "complexity prevented."
It is a massive blind spot for the industry right now. Deleting code really does require more architectural vision than writing it.
If you want to be a Senior Engineer in this new era, stop asking "What can I add?" and start asking "What can I remove?"
Not if you have bad leadership that lacks technical skills and only cares about how many lines of code you have written versus the optimal way of getting things done. Then it becomes a "performance" issue based on bogus metrics.
If leadership is still measuring lines of code in 2026, you don't have a performance problem. You have a resume problem.
Counting lines is like measuring an airplane's quality by its weight.
Honestly, if I saw that metric today, I would just start interviewing immediately.
"The Era of the Code Janitor" — I'm stealing this phrase. 👏
Your point about features used by 2% of users causing 50% of support tickets is painfully accurate. I had a similar realization last month when auditing our background job system: a "simple" retry mechanism we added "just in case" was actually causing cascading failures during downstream outages.
We ended up deleting 3 custom retry implementations and replacing them with a single library that actually handles backoff correctly. The system became more reliable with less code.
It's counterintuitive in the AI era, but you're right — the real skill is knowing what not to build (or what to delete).
Would love to hear more about your cleanup process. How do you decide what stays vs what goes?
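On the retry replacement described above: the "handles backoff correctly" part usually means exponential growth plus jitter plus a cap. A minimal sketch of that delay schedule (the function name and defaults are mine, not from any specific library):

```python
import random

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0, rng=None):
    """Exponential backoff with 'full jitter': each retry waits a random
    amount between 0 and min(cap, base * 2**attempt). The jitter spreads
    retries out, which is what prevents the synchronized retry storms
    that cascade during a downstream outage."""
    rng = rng or random.Random()
    delays = []
    for attempt in range(attempts):
        window = min(cap, base * (2 ** attempt))
        delays.append(rng.uniform(0.0, window))
    return delays
```

In practice a maintained library (tenacity in Python, for example) gives you this plus max-attempt and exception handling for free, which is exactly why the three custom implementations could be deleted.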
Please steal the phrase. Your story about the custom retry mechanism is incredibly relatable. We have all built something to prevent a failure, only to realize our safety net actually caused a bigger outage.
To answer your question about the cleanup process: I usually look at the ratio of maintenance cost to actual user value. If a piece of code requires constant babysitting but only serves a tiny edge case, it goes on the chopping block.
I also actively look for custom solutions we wrote in the past that can now be replaced by a well-maintained library, exactly like you did. If an open-source team maintains it better than I do, my code gets deleted.
You keep iterating and refactoring until the desired architecture emerges. It took about five or six rounds of refactoring in my project, Almadar, to reach the right level of abstraction. I started with a simple, monolithic project and eventually split it into a monorepo with separate packages (using TypeScript, React, and Express). It took a while to stabilize, but it was completely worth it.
In the past, this much refactoring would have been overkill. In the AI age, however, it is essential. You need an architecture that constrains the AI, enabling precise, surgical edits without introducing its typical bloat. Managing that bloat was my biggest challenge, but after many sleepless nights, I finally have it down to a science. Thanks for the article.
Refactoring to constrain the AI is an incredibly smart way to frame it.
We used to build architecture primarily to help human developers navigate the codebase. Now, we have to build it to put strict boundaries around the models so they cannot pollute the entire system at once.
Your journey to a monorepo makes perfect sense in this context. By splitting the project into isolated packages, you force the AI to only look at one specific context at a time. That is exactly how you prevent the massive bloat it naturally wants to create.
Making surgical edits inside strict boundaries is the only way to scale right now. It sounds like those sleepless nights definitely paid off.
"The Era of the Code Janitor" — honestly this needs to be a conference talk.
Building automated pipelines, I noticed something similar: AI-generated code is verbose by default. It hedges, adds fallbacks, covers edge cases you didn't ask for. The raw output is often 3x the size it needs to be.
The highest-leverage skill now isn't knowing how to generate code, it's knowing what to delete afterwards — which requires understanding intent and context that the AI doesn't have.
One pattern I've found useful: after any AI-assisted build, do a dedicated "deletion pass" before shipping. Ask not "does this work?" but "what can I remove without changing behavior?" Usually cuts 20-30% of the code.
Paradoxically, this raises the bar for the developer reviewing AI output. You can't delete what you don't understand.
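A deletion pass like this can look roughly as follows. Both functions are hypothetical stand-ins: the first mimics typical hedged AI output, the second is what survives once you know the callers only ever pass lists of numbers.

```python
# Hypothetical AI-generated output: defensive fallbacks for inputs
# that never actually occur in this codebase.
def total_verbose(items):
    if items is None:
        return 0
    if not isinstance(items, list):
        items = list(items)
    result = 0
    for item in items:
        try:
            result += float(item)
        except (TypeError, ValueError):
            continue  # silently skip anything non-numeric
    return result

# After the deletion pass: same observable behavior for the inputs
# the callers actually produce, a fraction of the surface area.
def total(items):
    return sum(items)
```

The test of "what can I remove without changing behavior?" is literal here: for every input the system actually produces, both versions agree.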
"You cannot delete what you do not understand" is the absolute perfect summary of this entire shift in our industry.
The idea of a dedicated "deletion pass" before shipping is a fantastic habit. As you mentioned, AI models are essentially crowd-pleasers. They throw in every possible fallback and edge-case handler just to be safe. Stripping that raw output back down to reality requires actual domain knowledge that the model simply does not have.
I am definitely adding a mandatory deletion pass to my own workflow. That is exactly the kind of janitor work that separates a senior engineer from someone who just accepts the first generated output.
So well said — deleting code is real senior-level engineering.
Thank you Neeta!
Hey!
Could you tell me where I need to delete some code in my supply chain scanner?
Thanks a lot!
Send me an email (you'll find it in my profile) and I can take a look!
This is a beautiful narrative. "The job is no longer to build the mountain but to carve the sculptor out of the rock."
Glad that metaphor landed with you. It is a hard mental shift when we are so used to measuring value by volume, but it feels inevitable now.
I like this - the KISS principle, "less is more" :-)
Exactly. The irony is that it usually takes more time to build something simple than something complex. Complexity is the path of least resistance.
Yes - keeping things simple is (often) HARD ...
That's very true!
That's right. Minimalism is the key
It's true!
No one needs a drill. Everyone wants a hole.
Curious, what's the one piece of code YOU'VE deleted that made the biggest difference in your career? Would love to hear your story!!
That is a great question. I actually just mentioned this in another comment thread. The biggest difference for me was ripping out a massive, over-engineered state management library in a recent project and replacing it with a simple, standard data flow.
Deleting that entire abstraction layer improved the actual performance of the app more than any new feature I had added that year. It really taught me that complex architecture is often just a symptom of not fully understanding the core problem.
What about the art of deciding what to build and what not to?
Deciding what not to build is essentially deleting code before it even exists. It is the most effective way to keep a system lean, but it is also the hardest because it requires saying no to stakeholders and ideas that feel good in the moment.
If we can master the art of subtraction at the requirements level, we save ourselves from the janitorial work later on.
interesting
I agree that code volume is a real problem in 2026. AI tools generate faster than anyone can review, and the instinct to subtract is healthy. Codebases do bloat, and most of the bloat comes from code that nobody remembers writing. The observation is correct.
But I don't think the core issue is too much code. I think it's code that doesn't know why it exists.
I've been building open-source tools for personal data preservation for the past two years. What started as a single conversation archiver turned into a lot of interlocking projects: conversation, bookmark, ebook, photo, and email managers, a medical record consolidator, a universal data format, a dead man's switch, client-side encryption for HTML files, and eventually a conversable persona assembled from the combined archive.
All of them use the same stack: SQLite for structured queries, JSONL for interchange, Markdown for human reading. All are self-describing (every database comes with a README explaining its format). Most expose their data via MCP so Claude can query them directly. Many export to self-contained HTML files you can host on a static site or open from a USB drive. I didn't plan an ecosystem. I built the next thing I needed, over and over, and at some point they converged into a coherent stack.
Here's what I noticed: I have never once needed to delete one of these tools. Not because I'm especially disciplined about code quality. Because each one exists for a specific reason that connects to the others. When you build from a clear constraint, unnecessary code doesn't get written in the first place. The purpose does the filtering before any code exists.
That's why I think the most valuable skill isn't subtraction. It's knowing what you're building toward. Deletion is retrospective correction for a problem that clear intent would have prevented.
I wrote a longer version of this with the full project list: Code Without Purpose
This is a really good point and I like the framing.
Subtraction is the late stage cleanup. Clear intent is the early stage filter. If the reason is obvious and stays attached to the code, you avoid a lot of the bloat in the first place.
I also love the “self describing” idea, a README next to every dataset, simple formats, and exports that survive the tool. That feels like the same philosophy, just applied to data instead of code.
I will check out Code Without Purpose. One question, though: as the ecosystem grows, how do you keep the original "why" from getting fuzzy? Do you keep a short decision log somewhere, or is the dataset README enough?
The phrase 'Deletion is retrospective correction for a problem that clear intent would have prevented' is incredibly spot on.
It actually mirrors exactly how I try to think about Sigilla. The entire app is built around one strict constraint: unread links decay and archive themselves. Because the intent is so narrow—curing digital hoarding—there is simply no room for feature bloat or code that does not know why it exists. The purpose filters out the noise before I even open my editor.
I also really respect your focus on data preservation. Building for true ownership with Markdown and SQLite is exactly the way to go.
I am going to read your full post now. Thanks for adding this angle to the discussion.
This hit close to home. Last month I inherited a Node.js service that had grown to ~15k lines over two years. Half of it was "flexible" config parsing that nobody actually used beyond the defaults. Ripped it out, hardcoded the three configs we actually run in production, and suddenly the whole team could reason about the service again.
One thing I'd add though - the hardest part isn't the deleting itself, it's convincing your team it's safe. I've found that good test coverage is what gives you the confidence to delete aggressively. Without tests, every deletion feels like defusing a bomb blindfolded. Do you have a process for validating that removals don't break things, or is it mostly gut feel + monitoring?
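On the test-coverage question: one approach is a characterization test that pins the observable behavior *before* the deletion lands. Everything below (service names, values) is made up for illustration; the pattern is what matters.

```python
# The "flexible" config parser collapses to the three configs actually
# run in production (hypothetical names and values).
PROD_CONFIGS = {
    "api":    {"workers": 4, "timeout": 30},
    "worker": {"workers": 8, "timeout": 120},
    "cron":   {"workers": 1, "timeout": 600},
}

def get_config(service: str) -> dict:
    """The entire old parsing layer becomes a dict lookup."""
    return PROD_CONFIGS[service]

def test_configs_match_production():
    # Values copied from the OLD parser's output before deleting it.
    # If these pass, the removal changed nothing observable.
    assert get_config("api") == {"workers": 4, "timeout": 30}
    assert get_config("cron")["timeout"] == 600
```

With the old behavior captured as assertions, the deletion stops being gut feel: either the pinned outputs still hold, or CI tells you before production does.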
This is the most senior dev take I've read all year.
Junior devs think code is an asset. Senior devs know code is a liability.
Every line you write is:
Something that can break
Something someone has to maintain
Something that adds complexity
The best code I ever wrote? The code I didn't write.
10x developers aren't the ones writing 10x more code. They're the ones deleting 10x more code.
🛐🔥
Asset vs liability is the perfect framing.
We spend years learning how to write code, but nobody teaches us when not to write it.
My favorite PRs are always the ones with more red lines than green.
This has always been true. The AI crowd says we no longer need to review code, only update our workflow. I think the job has shifted from writing code to reviewing it. The next step is definitely the development of autonomous agents that won't need any code review. Coding will then be fully abstracted. Fun times.
I’m not writing code as much as I used to — I’m writing prompts...and I like it
But that doesn’t remove the need to understand architecture, read the code, and do proper code reviews. If anything, it makes those skills more important
Working with generative AI feels like having a very capable junior developer: fast and productive, but still needing clear, well-scoped tasks and supervision.
The quality of the result depends heavily on how precisely the task is formulated.
I’m trying to design applications as isolated modules, so changes in one area don’t break stable parts of the system.
With AI-generated code, maintaining clear boundaries and responsibility separation becomes even more critical.
The maintenance cost point took me a while to internalize. Spent months treating AI code generation as a pure win - more output, same time. Then we did a sprint just reviewing our 3-month-old AI-assisted code and found half of it was solving problems we'd already solved elsewhere, just in a new file with a different name.
The forcing function that helped: require every feature branch to end with a 'what does this add to the maintenance burden?' line in the PR description. Not 'what does this add?' - specifically the maintenance burden. Makes the janitor work visible in a way that just reviewing code doesn't.
The bundle size / mental model connection you made is real too. They're both proxies for the same thing: how much stuff is in there that doesn't earn its place.
It reminds me of the old anecdote about the author of QuickDraw:
folklore.org/Negative_2000_Lines_O...
Hey! I'm a little late to the party. But I agree! Noise reduction is crucial these days.