How many Rs are there in the word strawberry? AI can’t tell you. Apparently. You’ve all seen it. Screenshots, Reddit threads, smug tweets. Models tripping over letters like toddlers. Everyone pointing and laughing. Reassuring stuff.
Wind the clock back a little.
It’s 2023. Image generation is exploding. It’s magical. Also: why does that hand have five fingers and a thumb?
A year later and we’ve uncovered a new, devastating limitation. AI cannot render a wine glass completely full. Half the internet concludes: preposterous technology, case closed.
By 2025 things are truly dire. Models still can’t reliably count the Rs in strawberry. Ask for a seahorse emoji and they spiral into what looks suspiciously like an existential crisis.
These examples don’t matter. Not really.
What’s interesting is how obsessively we return to them.
It Will Never Be Able To Code Though
The memes are familiar if you use AI regularly. But this reflex isn't limited to casual users. Technical people do it too, often more loudly.
Early 2023: ChatGPT can spit out a half-decent for loop. Sometimes it even answers technical questions correctly. Incredible. But obviously it can’t build an app.
Late 2024: we’ve got basic code-generation tools. Still, no danger. It makes too many mistakes. Barely junior level.
2025: the year of the vibe coder. Suddenly everyone can spin up a website. Sure, it’s riddled with security holes and questionable decisions. So again: no threat. We’ll just clean it up. AI is junk.
For years now, we’ve watched models repeatedly blow past their previous ceilings. Each time, the criticism simply slides sideways to the next obvious limitation.
Reddit is still full of people pointing out how stupid AI is. They’re not wrong. They’re just always late and missing the important part.
Why Is AI Stupid Though?
Before getting philosophical, it’s worth grounding this in reality. These glitches exist for reasons. If you’re building with AI, you need to understand them.
How often have you seen a photograph of a wine glass filled perfectly to the brim? Until recently: almost never. Neither has the model. It's not failing; it's interpolating from a deeply human dataset.
Why do seahorse emojis cause chaos? Because at some point the internet collectively decided a seahorse emoji existed. Reddit talked about it. Joked about it. Imagined it. The model learns that a seahorse emoji is plausible and goes to insert it. Then, mid-generation, it realizes the emoji doesn't exist and starts chasing its own tail ad infinitum.
Why does AI-generated code contain errors? Because it’s trained on Stack Overflow, blogs, gists, half-finished examples and heroic hacks. You didn’t ask it to be secure. You didn’t constrain it. It’s doing exactly what humanity taught it to do.
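Here's a concrete instance of what that training data teaches, as a minimal sketch (the function names are illustrative, not from any real model output). The unconstrained pattern below is what two decades of forum answers look like; the constrained version is what the same model will usually produce once you explicitly ask for it.

```python
import sqlite3

# The pattern a model has absorbed from years of forum answers:
# string-concatenated SQL, wide open to injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    # username = "' OR '1'='1" happily dumps the whole table
    return conn.execute(query).fetchall()

# The same lookup once you constrain the request: a parameterized
# query, which models also produce readily -- when you ask for it.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()
```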
People say AI is a mirror to the user. It’s also a mirror to humanity… and a lot of what we’re seeing reflected back isn’t flattering.
Why Does It Matter?
Because this isn’t abstract. It has real consequences – for society and for anyone building real products with AI baked in. If you’re developing on top of AI and you don’t understand how it fails, you’re already in trouble.
At Brunelly we assume AI is an intern who found a 20-year-old Stack Overflow answer and ran with it. We prompt heavily, guide explicitly, and still don’t trust the output. Everything passes through multiple agents to surface bugs, performance issues, and security concerns.
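For flavour, here's a minimal sketch of that shape, not Brunelly's actual pipeline: call_model is a hypothetical stand-in for whatever model API you use, and the reviewer prompts are illustrative.

```python
# A minimal sketch of a multi-reviewer gate, not Brunelly's actual
# pipeline. call_model() is a hypothetical stand-in for your model
# provider; the reviewer prompts are illustrative.

REVIEWERS = {
    "bugs": "Review this code for logic errors and unhandled edge cases.",
    "performance": "Review this code for obvious performance problems.",
    "security": "Review this code for injection, auth, and data-exposure issues.",
}

def call_model(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model provider")

def review(untrusted_code: str) -> dict[str, str]:
    """Run every reviewer over the output and collect their findings."""
    return {
        role: call_model(f"{instructions}\n\n{untrusted_code}")
        for role, instructions in REVIEWERS.items()
    }

# Nothing ships on a clean run alone: the findings still get read.
```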
The only viable starting point is: it will underperform… so how do we correct it?
But this misunderstanding goes wider than product design.
Stack Overflow is effectively dead. Let that sink in. Once the backbone of developer knowledge, now barely visited. Why? Because ChatGPT gives faster, better, contextual answers.
Music, images, stock photography – already flooded. Half of the lo-fi playlists on Spotify are AI-generated. We just stopped calling it slop.
Remember when everyone complained about AI slop in early 2025? Bad news: it’s still AI. It’s just a lot less sloppy.
Jobs are changing. Trust is changing. Evidence is changing. When you can't trust photos, videos, reviews, or faces, everything downstream shifts with it.
If you’re focused on strawberries, you’re going to wake up one day and wonder when the world quietly re-organised itself.
Why Do We Fixate Though?
Because known failure modes are comforting.
They give us a boundary. Something to point at. Something to laugh at. A place where we still feel safely on top.
Finding a bug in YouTube is annoying. Finding a bug in AI is reassuring.
The problem is that these failures don’t last.
Our mental model of AI already lags reality, and that gap is widening.
Even if AI progress stopped tomorrow, it would take years for organisations to fully exploit what already exists. Orchestration is immature. Skills are scarce. Understanding is shallow.
This isn’t about whether LLMs lead to AGI or consciousness. It doesn’t matter. The systems we already have are enough to reshape everything if we actually learn how to use them.
What Does It Mean For Builders?
It means stability is gone. The model you used last month is obsolete. The workaround you wrote last week no longer applies. Every solved edge case is replaced by three new ones.
This isn’t like JavaScript frameworks. This is orders of magnitude faster.
You have to design for an environment that mutates continuously. Trust becomes a UX problem, not a marketing one. Labelling something as AI-generated actively reduces user confidence.
Textbox-and-send is not a product strategy.
Trust nothing. Convert outputs into constrained state machines. Design experiences that absorb failure gracefully.
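One way to read "constrained state machine" in code, as a minimal sketch (the actions and the fallback are illustrative, not from a real product): the model never drives the app directly. It only proposes transitions you already permit, and anything unparseable degrades to a safe state.

```python
from enum import Enum

class Action(Enum):
    SEARCH = "search"
    SUMMARIZE = "summarize"
    ASK_USER = "ask_user"  # the graceful-fallback state

# Which action may follow which. The model proposes; this table disposes.
ALLOWED = {
    None: {Action.SEARCH, Action.ASK_USER},
    Action.SEARCH: {Action.SUMMARIZE, Action.ASK_USER},
    Action.SUMMARIZE: {Action.ASK_USER},
}

def next_state(current: Action | None, raw_model_output: str) -> Action:
    """Map untrusted model output onto the state machine, absorbing failure."""
    try:
        proposed = Action(raw_model_output.strip().lower())
    except ValueError:
        return Action.ASK_USER  # garbage in, graceful degradation out
    if proposed not in ALLOWED.get(current, set()):
        return Action.ASK_USER  # a legal token but an illegal transition
    return proposed
```

The point isn't this particular table; it's that free-text output never touches application state without passing through a validator you control.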
We didn’t build Brunelly because AI is magical. We built it because AI is a tool that can be harnessed and nobody else was doing it right. And the orchestrator underneath it evolves almost as fast as the models themselves – because it has to.
And What Does It Mean For All Of Us?
That’s the real question.
I was coding in the 90s during the original internet boom and bust. It wasn’t like this. Code lasted years. Systems were stable. Patterns endured.
This time is different – not because the tech is smarter, but because the pace is relentless.
Laughing at AI’s mistakes is fine. It is funny. But it’s also a distraction.
Assume the world is changing before you notice it.
If you’re building: design for failure, assume the system will outgrow you mid-flight, and plan accordingly.
And maybe stop counting Rs.