
Nikolaos Sagiadinos

Stop Calling LLMs AI

TL;DR

This article explains why LLMs are useful tools, but calling them "AI" is dumb marketing bullshit that leads to bad decisions, wasted money, and unrealistic expectations.

Hypes are Everywhere

When I scroll through LinkedIn, Reddit, or various blogs, I always read the same annoying phrases about our future: AI will soon replace nearly every engineer, will change the world, or, depending on who's talking, enslave us.

It's Just Marketing, Stupid

AI is just a marketing slogan. That's what marketing does: it shouts loud and sells smoke and mirrors.

What we call AI is, technically, only a simulation of "intelligence."

Take Large Language Models, for example. Not magic, not conscious, not smart.

  • Large: the immense amount of data it has been trained on.
  • Language: the primary function is to process and generate human language.
  • Model: it is a statistical model, a computational framework, not a living entity.

Same Story with Image Generators

I know it is getting boring, but nothing has changed and nothing will change: the so-called AI is trained on the world's knowledge and still draws people with three or four fingers.

I still get this in 2025 with different tools like DALL-E, Gemini, Stable Diffusion, Flux, etc., especially if you need content that is more complex than a simple portrait of a cute smiling person.

Different tech, same limitation: they're pattern-matching machines, not intelligent creators.

Intelligence is a Loaded Term

"Intelligence" in humans is multifaceted, encompassing reasoning, problem-solving, creativity, emotional understanding, social skills, and more.

An LLM can perform certain tasks that might be considered "intelligent" (like generating coherent text, answering questions, or even writing code). But this "intelligence" is narrow, specific to the data it's trained on and the algorithms that govern the responses.

Language models lack common sense, intuition, and the ability to truly learn from experience in the same way a human does. Their "understanding" is statistical, not conceptual.

That's why you get hallucinated facts, broken code, and three-fingered portraits. Sure, there might be workarounds, but you can't patch away a fundamental limitation. These aren't bugs. They're features of a system that mimics understanding without actually having any.
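To make "statistical, not conceptual" a bit more concrete, here is a deliberately tiny toy sketch in Python (the corpus and the predict_next helper are invented for illustration, not how any real model is implemented). It only counts which word follows which and echoes the most frequent continuation. Real LLMs use enormous neural networks instead of a lookup table, but the core idea, predicting the next token from learned statistics, is the same, and so is the complete absence of understanding.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration. A real LLM is trained on terabytes
# of text, but the principle is the same: learn which token follows which.
corpus = "the cat sat on the mat because the cat was tired".split()

# For every word, count which words follow it and how often.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the statistically most frequent continuation.

    There is no concept of 'cat' or 'mat' here - only counted frequencies.
    """
    candidates = next_word_counts.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat', simply because it occurred most often
print(predict_next("mat"))   # -> 'because'
print(predict_next("dog"))   # -> '<unknown>': never seen, so nothing to predict
```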

Personal Experiences

I have been working with LLMs for years and also use them as support in my current garlic-hub project.

They are helpful for research, translations, summaries, code explanations, concepting, prototyping, and documentation, but not for writing production code.

Even with precise prompts, you'll too often get inconsistent, crappy code that breaks the moment you touch it.

Just try letting one write unit tests for your own classes when they consist of more than getters and setters.

Among other oddities, you will find the following:

  • testing of private methods
  • mocking of inherited methods
  • even mocking the testing class
  • inconsistent naming, even though the prompts are specific

Let's be clear: unit tests are designed to test isolated, manageable pieces of code. If a tool marketed as "intelligent" can't even write tests for a single class reliably, how is it supposed to automate software development or rule the world?
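To show what I mean, here is a hedged, hypothetical example (in Python rather than PHP, and with invented class names, since I can't reproduce the real code here): a small service class and the kind of test an LLM typically generates for it, complete with the anti-patterns from the list above.

```python
import unittest
from unittest.mock import MagicMock, patch


class BaseRepository:
    def load(self, key: str) -> dict:
        return {"key": key}


class PlaylistService(BaseRepository):
    def _validate(self, name: str) -> bool:  # private helper
        return bool(name.strip())

    def create(self, name: str) -> dict:
        if not self._validate(name):
            raise ValueError("empty name")
        return self.load(name)


# The kind of test an LLM often produces: it "passes", but proves little.
class TestPlaylistService(unittest.TestCase):
    def test_validate_private(self):
        # Anti-pattern 1: testing a private method directly.
        self.assertTrue(PlaylistService()._validate("promo"))

    @patch.object(PlaylistService, "load", return_value={"key": "promo"})
    def testCreateWithMockedLoad(self, mock_load):
        # Anti-pattern 2: mocking the inherited method instead of exercising
        # the real collaboration with the base class.
        # (Also note the naming: this method silently switched to camelCase.)
        self.assertEqual(PlaylistService().create("promo"), {"key": "promo"})

    def test_create_mocks_class_under_test(self):
        # Anti-pattern 3: mocking the class under test itself, so the
        # assertion only verifies the mock's own configuration.
        service = MagicMock(spec=PlaylistService)
        service.create.return_value = {"key": "promo"}
        self.assertEqual(service.create("promo"), {"key": "promo"})


if __name__ == "__main__":
    unittest.main()
```

All three tests pass, and not one of them tells you whether create() actually works together with the real base class.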

But You Need to Use it Right

If something doesn't work as expected, the typical response is: "You're using it wrong."

The AI evangelists insist you need to change how you program; stop being a programmer, start being a software engineer.

Their vision: create detailed documentation of the modules you need and let the agent do the work, including unit tests.

If something's buggy or needs to change, you update the documentation, let the agent regenerate the code, and review again. They call this "vibe coding," and according to them, it saves tons of time.

Sounds reasonable at first glance, right? Let's dig into that.

The Time Saving Myth

Of course, starting something new leads to fast results. At first! Eventually, it consumes more and more time.

Why?

  • Technical debt: AI-generated code is often suboptimal, buggy, difficult to maintain, poorly documented, insecure, or not scalable. "It just works" is not enough.
  • Debugging and correction: Debugging, fixing bugs, or adding new features to this code base later on is more labor-intensive than writing clean code manually.
  • Lack of understanding: In reality, people might review the output the first few times, but once it seems to work as expected, they often adopt code blindly without understanding the implementation decisions. This makes maintenance and changes a high risk.
  • Redevelopment: If the system needs to grow or scale, vibe code can become so unusable that a complete redevelopment is necessary – negating all the initial time savings.

The immediate productivity gains (rapid prototyping) are essentially a “high-interest loan on the future” of the code base, which will later become due in the form of high maintenance costs.

History Repeats Itself

The same often happens with third-party libraries, as in the Node.js ecosystem, or with frameworks. At first they save you time, but creating software is only the start; maintaining it in the long run is the real challenge.

Remember the no-code circus? Similar crappy promises, different label, same dead end.

Perceived Efficiency Study

An interesting METR study shows that experienced developers only think they are 24% faster with AI; in reality, they're about 20% slower.

If you ask AI bros about this, they will yell that the study is based on wrong assumptions and that you just need to learn much more about how to use vibe coding. Remember "You Need to Use it Right"?

But marketing talks only about "intelligence". Every AI coding company tells us how easy its tools are to use and advertises "reviews" from people who claim to have realized huge projects in a few days without any programming skills.

But the moment you measure the tool against its promises and it fails, the excuses appear: you need to sink endless hours and months into learning.

So what's the conclusion? You should spend months studying to achieve a rather questionable time saving that is more felt than real?

You will never hear this from the AI bros.
What you also never read about is quality assurance and code maintenance.

The Junior Developer Problem

Another point is often ignored, too: to review code, you need skills, years of experience, and maybe a few failures of your own.

If "vibe coding" becomes standard, how do junior developers learn their craft? You can't learn to review code you never learned to write. We'd be creating a generation of developers who can prompt but can't program. They'd be like consultants without expertise.

LLM Training Reaches Limits

The so-called LLM revolution is based on scraping, storing, and searching huge amounts of data. What the ignorant world calls AI is just a highly trained auto-complete feature.

But there's only one Stack Overflow, one YouTube, one Wikipedia to scrape. We've already done that, and now we're running out of training material (as reported by The Conversation and Business Insider).

The widely propagated solution? Use synthetic data, which means LLM-generated data used to train other LLMs. Nvidia is already building a synthetic data generator called Nemotron.

Meanwhile, more and more text on the web is LLM-generated, and it's nearly impossible to distinguish machine-generated text from human-written content.

How hard is it to see where this leads? Training AI on AI-generated content will inevitably cause model collapse and degeneration.
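Here is a toy sketch of that feedback loop, assuming a deliberately oversimplified "model" that is nothing more than a mean and a standard deviation instead of a neural network: fit the model to the data, throw the data away, generate synthetic samples from the fit, and repeat.

```python
import random
import statistics

random.seed(0)

SAMPLE_SIZE = 50  # deliberately small, to make the effect visible quickly

# Generation 0: "human" data - diverse, mean 0, stddev 1.
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLE_SIZE)]
print(f"generation  0: stddev = {statistics.pstdev(data):.3f}")

for generation in range(1, 11):
    # Fit a simple model (mean + stddev) to whatever data we currently have.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)

    # Replace the data set with purely synthetic samples from that model.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLE_SIZE)]
    print(f"generation {generation:2d}: stddev = {statistics.pstdev(data):.3f}")

# Each refit can only reproduce what the previous model already captured,
# so the measured spread tends to shrink from generation to generation:
# diversity is lost, and the output degenerates toward a narrow average.
```

Real LLM training is vastly more complex, of course, but the direction of the effect is exactly what the term "model collapse" warns about.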

AI bros do not discuss this, and companies chasing investor money keep pushing their "revolution" at all costs.

Self-Driven Hype

Unfortunately, this mix of agitators, marketers, and their uncritical followers has created a relentless hype cycle.

Plenty of mediocre ex-managers and corporate has-beens crave public visibility. Some are paid to push a narrative; others genuinely see themselves as visionaries who deserve an audience.

All of them open their mouths to fill the air with empty, meaningless words. Their followers, in turn, are simply cheerleading. If you express skepticism, the retort is often: "Hey, Bill Gates is a billionaire. He knows what is going on." No, he doesn't. He's speculating just like thousands of others. But for some reason, people trust vague predictions from wealthy people as if wealth equals expertise.

This creates a self-fulfilling prophecy: shortsighted decision-makers fear missing out if they don't jump on the AI bandwagon. So they start slapping "AI" on everything, whether it makes sense or not.

In the end, the hype just fuels a reckless, resource-guzzling industry that burns cash and electricity.

More Absurdities from My Industry

My home is the digital signage industry, and while other industries face similar nonsense, here it's downright absurd.
Companies constantly invent pointless "features" just to slap 'AI' somewhere on their landing pages. For example:

  • Display chatbots: because apparently people want to have conversations with a screen showing ads, instead of just reading the information.
  • "AI-optimized" ad placement: algorithms that optimize for metrics nobody asked for, ignoring actual campaign goals. The AI decides what's "best," not your business objectives.
  • Predictive Audience AI: tools that claim to forecast audience behavior and generate impressive projections based on laughably small datasets. Garbage in, "insights" out.
  • Age/gender estimation: computer vision that's been around for years, now rebranded as "AI-powered." It fails in noisy environments, struggles with viewing angles, and provides data that's rarely actionable anyway.

None of this solves real problems. New software features should be driven by customer pain points, not by marketing departments desperate for buzzwords.

Whenever you see a digital signage company shouting about AI, remember this: it’s not built to help you or your business. It’s built to sell you.

Stop Calling LLMs AI

I'm not worried about a Terminator future. There's no technical singularity coming from this approach.

This industry is now running into problems because it has already scraped most of the available data.

Plans to train on synthetic data will lead to degeneration.

An LLM is a useful tool that impressively simulates human behavior. No more, no less. It can perform tasks that might seem intelligent.

But its capabilities are just reflections of data and algorithms, not a sign of true consciousness or human-like intelligence.

Here's the reality: LLMs are calculators that learned to talk. Nobody panicked that calculators would replace mathematicians. Stop pretending autocomplete will replace engineers.
