The Hidden Risk of Letting ChatGPT Touch Your Code

Michal Harcej on October 08, 2025

I wrote this after spending an entire day fixing the chaos that ChatGPT caused in my live system. This is a real developer’s cautionary tale. ...
 
Ingo Steinke, web developer

"AI does not have the ability to run the code it generates yet," that's how Anthropic puts it with an overly optimistic "yet". JetBrains AI won't even read console output from npm or eslint even when it's in the console integrated in their IDE.

I keep wondering how so many developers confidently claim that AI writes their code, including complete applications. Complete? Concise? Correct? Working? Maintainable? Maybe my quality standards are just too high. I doubt that I'm really "too dumb or lazy to prompt" when I see other people's lazy prompts (and time-consuming iterations).

After an unusually productive session with Claude, I asked it why, this time, most of the code was not full of errors and it hadn't been going in circles, cycling from one unlikely suggestion back to another one already proved wrong. AI excels when "the exact pattern appears in thousands of tutorials, official docs, and codebases," according to a meta-reflection by Claude Sonnet 4.5, revising its initial claim attributing the success to "70% your questioning approach, 20% problem domain, 10% model quality" - when neither my questioning approach nor the model had changed. Quoting its "Common Ground Advantage": "React Context + TypeScript - this exact pattern appears in thousands of tutorials, official docs, and codebases. Clear failure modes - TypeScript errors are unambiguous, so incorrect patterns get filtered out in training."
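
To make that "common ground" concrete, this is the kind of React Context + TypeScript pattern Claude was pointing at. A minimal sketch with illustrative names (`ThemeProvider`, `useTheme`), not code from that session:

```tsx
// The well-trodden pattern: a typed context with a guarded accessor hook.
import React, { createContext, useContext, useState } from "react";

type Theme = "light" | "dark";

interface ThemeContextValue {
  theme: Theme;
  toggleTheme: () => void;
}

// undefined as the default forces consumers through the guarded hook below.
const ThemeContext = createContext<ThemeContextValue | undefined>(undefined);

export function ThemeProvider({ children }: { children: React.ReactNode }) {
  const [theme, setTheme] = useState<Theme>("light");
  const toggleTheme = () => setTheme((t) => (t === "light" ? "dark" : "light"));
  return (
    <ThemeContext.Provider value={{ theme, toggleTheme }}>
      {children}
    </ThemeContext.Provider>
  );
}

export function useTheme(): ThemeContextValue {
  const ctx = useContext(ThemeContext);
  // Unambiguous failure mode: forget the provider and this throws immediately.
  if (!ctx) throw new Error("useTheme must be used inside <ThemeProvider>");
  return ctx;
}
```

The failure modes here are exactly the unambiguous kind Claude described: TypeScript rejects a wrong value shape at compile time, and a missing provider throws on the first render, so broken variants rarely survive into tutorials or training data.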

AI predicts text that looks like code.

That's it.

When we start being original and creative and leaving the common ground comfort zone, that's when AI, in its current, LLM-based form, becomes less helpful, wasting our time with unhelpful guesses, misleading approaches and made-up "alternative facts" that don't really exist.

 
Michal Harcej (NanoMagic)

Totally agree with you, Ingo — that’s been my experience too.
AI tools perform best when the pattern already exists a thousand times in public code, docs, or tutorials. Once you step into original architecture or uncommon setups, the “predictive” nature of LLMs starts to show — it stops reasoning and starts guessing.

I’ve hit that wall plenty of times. Early IDE integrations felt like magic until I realized most of that “help” was just pattern-matching, not understanding.
The trick is exactly what you said — treat AI as an assistant for well-trodden ground, but keep full control once you move into the creative or system-specific parts.

 
Prahlad Yeri

I follow a basic programming rule to avoid this exact scenario: never include ChatGPT-generated code in your projects before thoroughly reading and understanding it yourself.

In the broader AI journey, present-day LLMs aren’t even baby steps - they’re more like glorified content filters with multiple layers. They save you the trouble of digging through Google or Stack Overflow for a solution, but don’t expect them to do much beyond that.

Given their current capabilities, the plain assistant is the most suitable use case. Copilots and autonomous agents are trying to bite off much more than they can realistically chew.

 
Michal Harcej

Absolutely agree 👍

 
Samuel Ferreira da Costa

I learned this in the worst way possible. Really, we are devs; GPT isn't.

 
Michal Harcej

Share your story, Samuel. May everyone learn from your experience.

 
Samuel Ferreira da Costa

I got a client with a short deadline on a project, so I decided to use Codex to review and create critical functions for the project, hoping to meet the deadline. The result: the project became extremely polluted, with non-functional, unclean code, far from good practices, and unfortunately I lost the client and a good opportunity. I am a good developer, but in that moment I forgot this: easy come, easy go.

 
Aleksei

Good read and something very relatable. I have stepped on that landmine quite a few times, especially during the early stages of AI integrations in IDEs; instead of improved velocity, I ended up with tons of headaches.

AI is just a tool: you have to learn how to use it, and the more you use it, the better you become at it. I learned that the hard way. Hopefully this post will help at least a few people grasp the dangers and work with AI more thoughtfully, without assuming it is some all-knowing uber-intelligence.

 
Michal Harcej (NanoMagic)

Exactly — couldn’t agree more.
It really comes down to how we use the tool, not whether the tool itself is “good” or “bad.” Early on, I treated AI like a junior dev who just needed the right prompt — but it’s more like a super-autocomplete that occasionally hallucinates confidence.

Once you shift from “trusting” it to collaborating with it — verifying, testing, and using it to expand perspective instead of outsource thinking — it becomes genuinely valuable.
The hard lessons you mentioned are the same ones that end up teaching the best practices.
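
To make the "verifying, testing" part concrete: one lightweight habit is to pin down any AI-suggested helper with a test before it ships. A minimal sketch, assuming a Vitest setup; the `slugify` helper here is hypothetical, standing in for whatever the model produced:

```typescript
import { describe, expect, it } from "vitest";

// Hypothetical AI-suggested helper: read it first, then pin its behavior.
function slugify(input: string): string {
  return input
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse any non-alphanumeric run into one dash
    .replace(/^-+|-+$/g, ""); // strip leading/trailing dashes
}

describe("slugify (AI-suggested, human-verified)", () => {
  it("handles the happy path", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("covers edge cases the suggestion may have ignored", () => {
    expect(slugify("  --Already--Slugged--  ")).toBe("already-slugged");
    expect(slugify("")).toBe("");
  });
});
```

If the suggestion was wrong, a test like this fails on your machine instead of in a live system - the cheap version of the lesson in the article above.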

 
Oscar

"It predicts text that looks like code."

That's what I've been trying to tell people for... years now? You hit the nail on the head with that one.

 
Nandan

It's like someone giving counselling to all the devs out there using these AI tools 😂

 
Michal Harcej (NanoMagic)

Haha, true! 😄
Feels a bit like group therapy for developers who’ve been burned by “helpful” AI suggestions.
But honestly, it’s the kind of counseling we all need — a reminder that these tools can boost us or break us depending on how we use them.

 
Milica Maksimovic

Thankfully we can now use AI to fix AI - dev.to/qatech/vibe-coding-meets-ai...

 
Michal Harcej (NanoMagic)

That’s an interesting angle — using AI to fix AI feels a bit like teaching the mirror to notice its own reflection. 😄
In some ways it works — AI can definitely help detect inconsistencies, optimize patterns, or catch what humans might overlook.
But the real progress comes when there’s still a human in the loop, guiding the context and sanity-checking the output. Otherwise, it’s just one model confidently correcting another’s imagination.