I recently saw a meme about terrible legacy code on platform X and it gave me an idea for a discussion topic.
A year ago, the classic developer question was:
“What’s the worst code you’ve ever seen?”
But our day-to-day work has changed. Maybe the real question now is:
“What is the worst suggestion AI has ever given you?”
I’ll start.
I’ve happily survived plenty of questionable code: from “fast hotfixes” that never touched the root cause, to refactorings that added more complexity than my 15-years-younger self on OOP steroids ever did.
But this happened about a year ago and it still sticks in my mind:
API key in a public Docker image
I was working on a GitHub Action that builds a Docker image from my .NET REST API, pushes it to Docker Hub as a public image and then deploys it to Azure. Pretty straightforward, right?
There was one small catch: the API uses a private API key to communicate with a third-party service.
So I asked ChatGPT for advice.
Its suggestion:
“You can store this API key as an environment variable in your Docker image.”
Wait… what?
Put a private API key inside a public Docker image?
To be clear, environment variables themselves are fine.
The problem was baking the secret into the image during build time, which would expose it to anyone pulling or inspecting the public image.
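To make the difference concrete, here is a minimal sketch (the key value and variable name are placeholders, not anything from my real project):

```dockerfile
# Anti-pattern: the secret becomes part of an image layer.
# Anyone who pulls the public image can read it with
# `docker history` or `docker image inspect`.
ENV THIRD_PARTY_API_KEY=sk-live-abc123

# Safer: keep the Dockerfile secret-free and inject the value
# only when the container starts, e.g.:
#   docker run -e THIRD_PARTY_API_KEY="$THIRD_PARTY_API_KEY" my-api
```

The environment variable still exists either way; the difference is that in the second case it never touches the published image.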
I explained this to ChatGPT.
It responded with the classic:
“You are right!”
…and suggested storing it securely in Azure.
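For completeness, that safe route looks roughly like this in a GitHub Actions workflow: keep the key as a repository secret and push it into the Azure app’s settings at deploy time, never into the image. All names below (`my-api-app`, `my-rg`, `THIRD_PARTY_API_KEY`) are illustrative placeholders:

```yaml
# Sketch: inject the key as an Azure app setting at deploy time,
# assuming it is stored as a GitHub Actions secret.
- name: Set app settings
  uses: azure/cli@v2
  with:
    inlineScript: |
      az webapp config appsettings set \
        --name my-api-app \
        --resource-group my-rg \
        --settings THIRD_PARTY_API_KEY="${{ secrets.THIRD_PARTY_API_KEY }}"
```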
End of the story?
Of course not.
Just a few messages later, in the same context window, ChatGPT again suggested putting the private API key into the public Docker image as an environment variable.
That was the moment I realized:
AI isn’t production-ready yet, at least for security advice. 😄
The interesting shift
We used to review junior developers’ code carefully.
Now we also need to review code written by something that sounds like a senior engineer but occasionally behaves like an intern on their first day.
Discussion
- What’s the most ridiculous suggestion AI or ChatGPT has ever given you?
- Do you review AI-generated code differently than human-written code?
I’d love to hear real examples from the community.
Top comments (23)
"Now I have the full picture" — not advice, but the most misleading line.
Closely followed by "Why this works" sections adding overconfident claims to untested code recommendations.
AI recently told me I could build an ollama image generator for the web. Since I didn’t have an M chip, my worthy Nvidia graphics card was useless. Useless, I say! - shakes old lady stick - 😂
Well, I guess you’ll have to sell your ancient Nvidia card and get an M chip. Good advice is priceless 😂😂
You’re absolutely right!
AI often consoles me about how frustrating my problem is, even for simple stuff I just want an answer to. Not sure that counts, but I don’t like it.
Of course it does 😄. For simple questions, I just want a direct answer, not the emotional support intro.
I don't remember the worst, but one of the most frustrating and stubborn was Perplexity insisting on misattributed facts and claiming my phone wouldn't support eSIM. It gaslighted me for half an hour; I felt like the astronauts arguing with HAL in Kubrick's 2001.
I seriously distrust AI-generated code and look at it more suspiciously than I ever did at human code in reviews. I guess I keep scanning, half-consciously, for telltale signs of AI hallucination: uncommon patterns or the illusion of completeness. AI is often 90% correct, but misses the crucial 10%.
“Astronauts arguing with HAL” is painfully accurate 😄
Yep, totally agree. I’ve also started becoming more suspicious when implementing something I’m not fully comfortable with. If AI suggests something new that I haven’t seen before, I’ve started asking it to explain the reasoning and then verifying it by checking documentation or searching online/Stack Overflow to confirm it’s actually correct.
Recently, I was setting up a new PC with the help of AI, but I couldn’t finish a specific setting. I spent a lot of time trying to configure it, but in the end, I realized there was no need to set it up at all. I might still be trying if I hadn’t googled and checked it myself. 😅
This happens to me every time, even without AI. 😅🤣🤣
Haha! That’s funny! 🤣
Maybe that context window just didn’t like you? xDDD
Worst advice for me? Maybe "Just communicate your feelings openly and everything will work out." 🤣🤣🤣
Yeah, that’s what I get for asking ChatGPT stupid questions. 😅
Sounds like the plot of a low-budget romantic movie. 🤣🤣
It sometimes confidently patches stuff inside my node_modules, certain it's going to fix that flaky test.
lol 😆 nice one
Mine was letting an AI agent autonomously manage DNS records. It decided to 'optimize' by removing what it thought were redundant A records. Took down three websites for 2 hours before we noticed 😅 Lesson learned: AI automation needs guardrails, especially for infrastructure. Now we have a mandatory human-approval step for anything that touches DNS or production configs.
That’s a great lesson learned the hard way 😅
Downgrading my ESP32 SDK through the Arduino IDE to fix encryption-library-related bugs, when those bugs just resulted from the Arduino IDE getting confused. It ended up bricking OTA and leaving a lot of mess.
A bit niche, but what can I say. Since then I only use Gepetto for translation when talking with online sellers.
As to why I still use Arduino IDE? Reasons
"My smoke tests pass. Now I am sure the latest patch should address the bug we were seeing."
That’s confident. I’d definitely trust it. 😅