A confession, a small identity crisis, and the weird guilt nobody warned us about when coding suddenly became… easier.

I had this moment the other day: I typed half a function name, hit enter, and my editor spit out a full implementation like some overconfident dungeon NPC doing my fetch quest for me. And instead of feeling impressed, my first thought was:
“Wait… is this cheating?”
It wasn’t fear of losing my job. It wasn’t even the classic “AI will replace developers” spiral you see on every tech timeline.
It was something much dumber:
I felt like a student copying homework from the kid who actually studied.
Except the kid is an LLM that doesn’t sleep and somehow knows obscure Postgres syntax I didn’t even know existed.
And that feeling, that tiny punch of guilt, kept popping up.
Like when AI explained a bug in my own code more clearly than I could. Or when it generated a whole Express middleware and I just sat there nodding like I totally understood what just happened.
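For flavor, the kind of thing it hands you looks roughly like this: a complete request-timing middleware, fully formed. (A hypothetical reconstruction, not the actual output; requestTimer is my name for it, not the model’s.)

```typescript
import { Request, Response, NextFunction } from "express";

// Hypothetical reconstruction of the genre: log method, path, status,
// and elapsed time once the response actually finishes.
export function requestTimer(req: Request, res: Response, next: NextFunction) {
  const start = Date.now();
  res.on("finish", () => {
    const ms = Date.now() - start;
    console.log(`${req.method} ${req.originalUrl} -> ${res.statusCode} (${ms}ms)`);
  });
  next();
}

// Wire it up once and every route gets timed:
// app.use(requestTimer);
```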
Somewhere between Copilot, Cursor, and Claude, coding shifted from “craft” to “co-op mode,” and none of us got patch notes for the identity changes that came with it.
Even memes know what’s up.
You’ve seen the one:
“Me: writes 3 letters. AI: I took the liberty of building your entire SaaS startup.”
But here’s the twist:
The more I tried to avoid leaning on AI, to “prove” I could still do things manually, the worse my code (and my mood) got. Turns out guilt is terrible developer tooling.
TL;DR:
This article digs into why AI makes some of us feel like impostors, why that guilt is outdated, how AI actually exposes your true engineering skill, and how to use these tools without feeling like you’re breaking the dev honor code.
When AI feels like submitting someone else’s homework
There’s this weird moment every AI-assisted dev hits: the instant your cursor blinks, you type a vague fragment like valida, and your editor confidently slaps down a perfectly structured validation pipeline with edge cases you didn’t even think of.
And instead of the dopamine hit you should feel, your stomach does a tiny flop, like you’re handing in an assignment someone else wrote.
It’s irrational, but it’s real.
For years, our self-worth as engineers was tied to grinding through complexity. Struggling was proof of effort. Debugging was a rite of passage. If you spent hours deciphering an AWS error message written in hieroglyphics, you earned a badge of honor.
When AI removes that struggle? It almost feels like you skipped the queue.
I had a moment where AI generated a regex so cursed yet so correct that I just stared at it like it summoned an ancient god. My lead later commented, “Nice pattern matching!” and I panicked because I couldn’t explain a single character of it.
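For context, it was the same genre as this (a made-up stand-in, not the actual pattern): a stack of lookaheads enforcing a password policy in a single line.

```typescript
// Made-up stand-in for the cursed-but-correct genre: at least one
// lowercase letter, one uppercase, one digit, one symbol, 12+ characters.
const strongPassword = /^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z0-9]).{12,}$/;

console.log(strongPassword.test("correct-Horse-battery-9")); // true
console.log(strongPassword.test("password"));                // false
```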
Is that cheating?
Or is that… efficiency?
Let’s be honest: this isn’t the first time developers have “cheated.” We’ve always relied on shortcuts:
- Rails scaffolding basically spit out half a project.
- StackOverflow answers were copy-pasted like sacred scripture.
- Packages on npm do 90% of what we used to build by hand.
- AWS CDK generates infrastructure I absolutely could not write raw in CloudFormation without emotional damage.
We didn’t call any of that cheating. We called it “best practice.”
The truth is, AI didn’t invent shortcuts; it just removed the friction. And friction is what our egos were secretly clinging to.
What makes the guilt sting is that AI feels too good. Too fast. Too senior. It produces code that reads like it came from someone with 10 more years of experience than me. And sometimes? That feels like I’m turning in homework written by a future version of myself.
But maybe that’s the point.
But cheating at what, exactly?
At some point I had to sit myself down and ask the question nobody really wants to face head-on:
If using AI is “cheating,” then what game do I think I’m playing?
Because when you zoom out, the idea starts to fall apart. Coding has never been a pure test of raw manual ability. It’s always been layers on layers on layers of abstraction until the original metal is so far away it might as well be a legend.
I mean, be honest: when was the last time you wrote assembly?
Or manually malloc’d memory for your Next.js side project?
Exactly.
We didn’t accuse ourselves of cheating when we stopped fiddling with BIOS settings and started using Docker. We didn’t feel morally compromised when we swapped hand-managed servers for Kubernetes, or when React stopped us from manually patching the DOM like medieval peasants.
We celebrated that stuff.
AI is just the next abstraction layer.
A very fast, slightly overeager layer that sometimes hallucinates npm commands and gives you TypeScript types forged in dreams, but still: an abstraction.
The fear isn’t about losing “skill.” It’s about losing control.
And control has been an engineering myth for years.
Think about debugging.
I’ve had nights where AI suggested the exact fix: a missing await, a subtle off-by-one, a race condition buried like treasure in an old PR. But I still had to confirm it, reason about it, and make the call.
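The missing-await flavor, for instance, looks something like this (a minimal sketch; db is a hypothetical stand-in for a real client):

```typescript
type User = { id: number; name: string };

// Hypothetical stand-in for a real database client.
const db = {
  async insert(user: User): Promise<User> {
    if (!user.name) throw new Error("name is required");
    return user;
  },
};

// Bug: without `await`, a rejection skips this try/catch entirely
// and blows up at the caller (or goes unhandled) instead.
async function saveUserBuggy(user: User) {
  try {
    return db.insert(user); // missing `await`
  } catch {
    return null; // never reached for async failures
  }
}

// Fix: awaiting inside the try block keeps the error catchable here.
async function saveUser(user: User) {
  try {
    return await db.insert(user);
  } catch {
    return null;
  }
}
```

The model can spot the missing keyword; deciding whether swallowing the error and returning null is what callers actually want is still on you.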
That’s engineering.
Not typing. Not memorizing API calls.
Judgment.
Chess players didn’t get worse when engines became insanely strong. They got smarter. They learned patterns they couldn’t have discovered alone. The same thing is happening here.
So what exactly are we cheating at?
Typing?
Guessing?
Remembering obscure API shapes?
Those were never the real skills.
Real engineering lives in decisions, not keystrokes.
The real danger: becoming a code reviewer NPC
Here’s the part nobody puts in their LinkedIn thought-leadership posts:
AI can turn you into a code-reviewing NPC if you’re not careful.
You know the role: the character who doesn’t actually play the game, who just stands by the quest board squinting at tasks other people created.
That’s what it feels like when AI starts generating entire modules and your only job is to thumbs-up or thumbs-down hundreds of lines you definitely did not write, but now legally have to maintain.
At first it feels powerful.
Then it feels exhausting.
Reviewing code is a completely different muscle than writing it.
One is creative flow.
The other is reading documentation for a feature you don’t even use.
I remember asking Claude to write a CLI tool for some repetitive devops nonsense I didn’t feel like dealing with. It delivered a 200-line TypeScript file with flags, parsing, error handling, colorful output. The whole thing looked like someone forked the AWS CLI and shrunk it.
My brain instantly switched into “loot drop inspection mode,” scanning for traps, weird spells, cursed items, or anything that would explode in production.
That’s when it hit me:
AI didn’t make the problem easier. It just shifted the hard part.
Because verifying code you didn’t write is often harder than writing it.
Your brain is pattern-matching instead of creating.
You’re constantly thinking, “Wait, why does this function accept three arguments?”
And, “Why is this error message oddly polite?”
And, “Hold on, does this even run?”
So if we’re not careful, we become approval engines.
We stop thinking deeply and start rubber-stamping outputs that “look right.”
But the solution isn’t avoiding AI.
It’s using it intentionally:
- Ask AI to explain its reasoning before generating code.
- Request a step-by-step plan and review the plan, not the code blob.
- Rewrite complex pieces yourself after reading the AI version.
- Use AI for debugging or exploration, not full-blown autopilot.
AI should feel like pairing with a senior engineer, not babysitting one.

My turning point: AI didn’t replace my skills, it revealed them
The guilt didn’t disappear because I read some inspirational thread on X or watched a productivity guru proclaim that “AI frees us to focus on higher-level thinking.”
No.
It clicked for me during a refactor that went horribly sideways.
I inherited a codebase that looked like someone built a monolith in a dream and woke up halfway through. Random utils, mutated state everywhere, functions with names like processThing() that processed absolutely nothing.
I asked AI for help out of desperation, not laziness.
Claude didn’t just spit out code.
It explained the architecture back to me clearly, patiently, and with the kind of calm that only a non-human can muster. It highlighted circular dependencies I had missed. It pointed out an async race condition hiding behind a misleading variable name. It showed me a cleaner pattern that made the whole module click.
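That race condition was roughly this shape (reconstructed from memory, with invented names):

```typescript
type User = { id: number; name: string };

// Stand-in for a network call.
async function fetchUser(id: number): Promise<User> {
  return { id, name: `user-${id}` };
}

// The misleading name: `cachedUser` sounds like a value, but it is one
// shared in-flight promise that ignores the id entirely.
let cachedUser: Promise<User> | null = null;

async function getUser(id: number): Promise<User> {
  if (!cachedUser) cachedUser = fetchUser(id);
  // Concurrent callers race: whoever fires first wins, so a caller
  // asking for id 2 can quietly receive user 1.
  return cachedUser;
}
```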
And that’s when it hit me:
AI is only as good as the questions I ask.
If I misframed the problem, the output was garbage.
If I gave sloppy instructions, the solution was sloppy.
If I didn’t understand the domain, it hallucinated entire systems that didn’t exist.
But when I understood the shape of the problem, when I had a mental model, AI became a force multiplier.
Not a crutch.
Not a cheat.
A teammate.
It’s like co-op mode in a tough RPG:
If you know the mechanics, your partner helps clear rooms faster.
If you don’t, they drag you into boss fights you weren’t remotely prepared for.
I realized something uncomfortable but freeing:
AI doesn’t hide your weaknesses; it exposes them.
It amplifies strong thinking and punishes shallow understanding.
It rewards clarity, context, and domain knowledge.
It forces you to articulate decisions instead of winging them.
That’s not cheating.
That’s growth.
A healthier mindset
AI guilt isn’t about code; it’s about identity.
Developers were raised on this idea that “real engineers” brute-force everything with raw brainpower, Vim wizardry, and the ability to mentally simulate a whole system like a CPU with feelings.
But real dev work has always been about tools.
We used frameworks, generators, docs, libraries, and StackOverflow threads older than some interns. Nobody called that cheating.
Once I stopped treating typing as a personality trait, everything clicked.
AI wasn’t stealing my craft; it was removing the boring parts of it.
And with that extra mental space, I could finally focus on the things that actually matter: structure, clarity, tradeoffs, tests, and not repeating the same boilerplate for the hundredth time.
The healthier mindset is simple:
- Value comes from decisions, not keystrokes
- Leverage beats purity
- And “I understand why this works” > “I typed it manually”
We didn’t lose anything.
We just leveled up.
So… what now?
AI speeds up everything except understanding, and that’s the part only humans can do.
Ask any senior engineer what their real job is. It’s not typing. It’s making calls.
Tradeoffs. Architecture patterns. Debugging instincts. Knowing when an answer is technically correct but practically awful.
AI can draft code, but it can’t feel the consequences of a bad decision six months later when you’re on-call staring at logs that look like static.
So use AI like a co-op teammate:
quick prototypes, faster debugging, more experiments.
You’re still steering. You’re still responsible. You’re still the one who knows what “good” looks like.
The game didn’t get easier; it just got bigger.
Conclusion
If AI writing half your code feels like cheating, remember this:
our entire industry has been “cheating” for decades.
Frameworks, libraries, CLIs, generators: all shortcuts.
The real job was never about typing.
It was about understanding problems, designing good systems, making wise tradeoffs, and making sure your future self doesn’t curse your name.
AI doesn’t replace that.
It amplifies it.
The only risky move now is pretending this shift is going to reverse itself.
The devs who learn to wield AI with intention will build faster, think clearer, and spend less time lost in boilerplate.
If that’s cheating, fine: call me guilty.
But I’m not giving back the superpowers.
Helpful resources
- GitHub Copilot docs: https://docs.github.com/en/copilot
- Claude for coding: https://claude.ai
- Cursor AI editor: https://cursor.sh
- OpenAI Dev Docs: https://platform.openai.com/docs
- Refactoring Guru (design patterns visualized): https://refactoring.guru/design-patterns