The future of engineering was supposed to look like this:
- AI handles the boring stuff.
- Humans focus on architecture, hard problems, and product.
What it often looks like instead:
- juniors (and some seniors) blasting Claude or other LLMs with vague prompts,
- copy‑pasting whatever comes out into massive PRs,
- you spending your week diff‑diving, untangling “AI slop,” and explaining basic trade‑offs in code review.
The robots didn’t take your job.
They just turned you into an underpaid bot babysitter.
AI didn’t automate away my work. It automated the creation of bad work and left me responsible for all of the cleanup.
Meet the “Vibe Coder”: Shipping PRs by Feel, Powered by LLMs
Every team now has at least one vibe coder.
You know the profile:
- never reads the existing codebase deeply,
- never asks about architecture,
- but can generate 1,000+ lines of “looks impressive” code in an afternoon.
Their workflow:
- Paste a ticket into Claude / another LLM.
- Get a wall of code, tests, and maybe even docs.
- Lightly tweak variable names.
- Open a PR with a confident description: "Implementing X feature with full test coverage."
From the outside, it looks like high output:
- lots of commits,
- lots of lines changed,
- lots of green checkmarks when basic tests pass.
On the inside, you see:
- duplicated logic the model invented because it didn’t see existing utilities,
- inconsistent patterns across modules,
- subtle bugs around edge cases, error handling, security.
Vibe coding is when the code ‘feels’ right because the diff is big and the AI sounded confident, but nobody actually sat down to think.
AI Sludge as a New Form of Tech Debt
People talk about “AI slop” like it’s just ugly code.
It’s bigger than that.
We’re seeing patterns like:
- Code duplication explodes – analyses show AI‑assisted code can double duplication and dramatically reduce refactoring.
- Maintainability drops quietly – LLM‑generated code may pass tests and even have fewer obvious bugs, but hides structural issues and unnecessary complexity.
- Surface‑level quality looks fine – style, docs, and naming look better… right up until you try to change anything non‑trivial.
Open source maintainers are already burning out from a flood of low‑effort AI PRs that:
- don’t solve real issues,
- add generic utilities that already exist,
- introduce subtle bugs for marginal gains.
Inside companies, the same dynamic is happening:
- more code, more diffs, more “AI‑accelerated” changes,
- without any corresponding increase in architectural thinking.
AI is building a new class of tech debt: everything looks modern and well‑typed until you try to evolve it, and then the whole thing fights back.
The Senior’s New Job: AI Janitor, Therapist, and Politician
If you’re mid/senior, you’re probably seeing your role drift:
- less time building systems,
- more time reviewing, rejecting, and explaining AI‑generated changes.
Your real job description becomes:
- AI janitor – identify duplicated logic, unnecessary abstractions, and hallucinated APIs; refactor or rewrite them.
- Code therapist – explain to enthusiastic AI‑power users why their 1,200‑line PR is not a “productivity win.”
- Team politician – push back on leadership who only see output metrics and not the long‑term maintenance cost.
Meanwhile, company narratives sound like:
- “AI has doubled our feature velocity.”
- “Look how many lines of code we’ve shipped this quarter.”
Nobody is tracking:
- how many critical bugs came from rushed AI‑generated changes,
- how much time seniors spend auditing vs building,
- how much harder future projects become because the codebase is now stitched together by vibes.
We’re measuring AI success in story points and PR counts, and measuring human burnout in private 1:1s and late‑night Slack messages.
AI as a Corporate Politics Weapon
There is another uncomfortable layer to this:
AI tooling doesn’t just change code.
It changes office politics.
If you know how to:
- generate polished architecture diagrams, long docs, and “vision” slides with AI,
- push big, flashy PRs that look ambitious,
- speak confidently about “leveraging LLM productivity,”
you can look like a star to non‑technical leadership.
Even if:
- your changes are hard to maintain,
- your “AI‑driven refactor” quietly destabilizes core flows,
- senior engineers repeatedly warn about long‑term risk.
The people who:
- optimize for perception,
- “ship” with AI,
- and keep talking about “innovation”
get rewarded.
The people who:
- read the diff carefully,
- ask annoying questions about edge cases,
- block PRs that don’t meet standards,
get labeled as:
- “slowing things down,”
- “resistant to AI,”
- “too negative.”
AI didn’t end office politics. It just gave the loudest people a way to generate more ‘evidence’ of their impact without doing more thinking.
Not All AI Code Is Bad — That’s What Makes This Hard
Here’s the twist:
- Some studies show LLM‑generated code can have fewer obvious bugs and be easier to patch than human code for certain tasks.
- Some leads argue they prefer “boring LLM output” over wildly inconsistent human code.
They’re not entirely wrong.
AI can be:
- great at boilerplate,
- consistent in style,
- surprisingly solid on well‑scoped problems.
The problem is not “AI code = bad, human code = good.”
The problem is:
- unskilled prompting + zero context + no review discipline + wrong incentives = sludge,
- and we’re throwing that into complex systems that absolutely require someone to think several steps ahead.
AI can absolutely write decent code. But decent code, dropped thoughtlessly into a complex system, is still a bad change.
How to Stop Being a Bot Babysitter (Without Becoming a Dinosaur)
If you’re a senior developer, you don’t fix this by rejecting AI.
You fix it by setting constraints and changing the default workflow.
1. Make AI usage explicit, not sneaky
- Require PR descriptions to state when AI was heavily used.
- Normalize saying “AI assisted” instead of pretending everything came from pure genius.
This isn’t about shame. It’s about review expectations.
2. Raise the bar on review for AI-heavy PRs
- Smaller, focused PRs over giant “AI dump” diffs.
- Require tests that demonstrate understanding, not just generated snapshots.
- Explicit rationale for architectural decisions, not just “the model chose this.”
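The "smaller, focused PRs" rule is easy to automate as a CI gate. Below is a minimal sketch in Python: it parses `git diff --numstat` output against a base branch and fails when the total churn exceeds a threshold. The `origin/main` base and the 400-line limit are placeholder assumptions; tune both to your repo.

```python
import subprocess
import sys

MAX_CHANGED_LINES = 400  # assumption: pick a limit that fits your team's norms


def count_numstat(numstat: str) -> int:
    """Sum added + removed lines from `git diff --numstat` output."""
    total = 0
    for line in numstat.splitlines():
        if not line.strip():
            continue
        added, removed, _path = line.split("\t", 2)
        if added != "-":  # binary files report "-" instead of counts
            total += int(added) + int(removed)
    return total


def changed_lines(base: str = "origin/main") -> int:
    """Count lines changed on the current branch relative to `base`."""
    out = subprocess.run(
        ["git", "diff", "--numstat", base],
        capture_output=True, text=True, check=True,
    ).stdout
    return count_numstat(out)
```

Wire it into CI so an oversized diff exits non-zero (e.g. `sys.exit(f"{n} lines changed, limit is {MAX_CHANGED_LINES}; split this PR.")`). It won't judge quality, but it forces the "AI dump" diff to be broken into reviewable pieces before a human ever opens it.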
3. Build a checklist for AI‑generated code
Teams and maintainers are already doing this:
- check for duplication (search for repeated blocks across the repo),
- check for existing utilities the AI ignored,
- check for security issues and edge cases,
- check integration points instead of just local correctness.
Automate what you can:
- linting, duplication detection, basic structural checks.
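Even the duplication check from that list can be scripted in a few lines. Here is a rough sketch, not a substitute for a real tool like `jscpd` or PMD's CPD: it hashes sliding windows of normalized lines across a repo and reports any window that appears in more than one place. The six-line window size is an arbitrary assumption.

```python
import hashlib
from pathlib import Path

WINDOW = 6  # assumption: minimum block size worth flagging; tune per codebase


def block_hashes(path: Path, window: int = WINDOW):
    """Yield (hash, path, index) for each sliding window of non-blank lines."""
    lines = [l.strip() for l in path.read_text(errors="ignore").splitlines()]
    lines = [l for l in lines if l]  # ignore blank lines and indentation
    for i in range(len(lines) - window + 1):
        chunk = "\n".join(lines[i : i + window])
        yield hashlib.sha1(chunk.encode()).hexdigest(), path, i + 1


def find_duplicates(root: Path, glob: str = "*.py"):
    """Map each repeated block hash to every location where it appears."""
    seen = {}
    for f in root.rglob(glob):
        for h, path, line in block_hashes(f):
            seen.setdefault(h, []).append((path, line))
    return {h: locs for h, locs in seen.items() if len(locs) > 1}
```

Run it over a PR's touched files in CI and fail (or at least comment) when a fresh AI-generated block duplicates something that already exists elsewhere in the repo.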
4. Train people to use AI like an engineer, not a copier
- start from the existing architecture, not from a blank prompt,
- use AI to propose variations, not final answers,
- require devs to explain any AI‑generated code in their own words during review.
If someone can’t explain it, they don’t merge it.
The goal isn’t to stop AI writing code. It’s to stop people from outsourcing their thinking and calling it productivity.
If You’re a Senior: Redefine Your Job Before Someone Else Does
AI is not going away.
Companies will keep buying tools and pushing for “higher productivity per engineer.”
You have a choice:
- be the burned‑out reviewer who cleans up after everyone,
- or be the person who designs how AI fits into the engineering process.
That looks like:
- owning guidelines for AI‑assisted coding,
- building or choosing tooling that catches obvious AI‑slop,
- mentoring your team on using models as partners, not crutches,
- educating leadership on the difference between velocity and sustainable delivery.
Long term, the people who win are not:
- anti‑AI gatekeepers,
- AI‑only “vibe coders.”
They’re the ones who can:
- think clearly,
- architect systems,
- leverage AI responsibly,
- and protect the codebase from looking productive while decaying inside.
AI will not replace you. But someone who understands both engineering and AI—and refuses to be a quiet janitor—absolutely can.
I fix the Angular apps that generalists break.
I’m Karol Modelski, senior Angular developer and frontend architect rescuing legacy B2B SaaS frontends.
If your Angular app is slowing your team down, start with a 3‑minute teardown of your current setup: https://www.karol-modelski.scale-sail.io/


