TL;DR
This article explains why LLMs are useful tools and why calling them "AI" is dumb marketing bullshit that leads to bad decisions, wasted money, and unrealistic expectations.
Hype Is Everywhere
When I scroll through LinkedIn, Reddit, or various blogs, I always read the same annoying phrases about our future: AI will soon replace nearly every engineer, will change the world, or, depending on who's talking, enslave us.
It's Just Marketing, Stupid
"AI" is just a marketing slogan, because that's what marketing does: it shouts loudly and sells smoke and mirrors.
What we call AI is, technically, only a simulation of "intelligence."
Take Large Language Models, for example. Not magic, not conscious, not smart.
- Large: the immense amount of data it has been trained on.
- Language: the primary function is to process and generate human language.
- Model: it is a statistical model, a computational framework, not a living entity.
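To make "statistical model" concrete, here is a deliberately tiny sketch of the principle (a toy illustration, nothing like a real LLM implementation; all names and counts are made up): the "model" is just a table of next-word counts, and generation means sampling a statistically likely continuation.

```python
import random

# Toy "language model": hand-made next-word counts. Real LLMs learn billions
# of parameters instead of a lookup table, but the principle is the same:
# probabilities over tokens, not understanding.
next_word_counts = {
    "the": {"cat": 3, "dog": 2, "model": 5},
    "cat": {"sat": 4, "ran": 1},
    "model": {"predicts": 6, "fails": 2},
}

def sample_next(word: str) -> str:
    """Pick a next word with probability proportional to its count."""
    counts = next_word_counts.get(word, {"<end>": 1})
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation: nothing here "knows" what a cat or a model is.
text = ["the"]
for _ in range(5):
    nxt = sample_next(text[-1])
    if nxt == "<end>":
        break
    text.append(nxt)
print(" ".join(text))
```

Scale that table up to billions of learned parameters and the output becomes fluent, but the mechanism is still statistics over tokens.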
Same Story with Image Generators
I know it is getting boring, but nothing has changed and nothing will change: the so-called AI is trained on the world's knowledge and still draws people with three or four fingers.
I still get this in 2025 with different tools like Dall-E, Gemini, Stable Diffusion, Flux, etc., especially when I need content more complex than a simple portrait of a cute smiling person.
Different tech, same limitation: they're pattern-matching machines, not intelligent creators.
Intelligence is a Loaded Term
"Intelligence" in humans is multifaceted, encompassing reasoning, problem-solving, creativity, emotional understanding, social skills, and more.
An LLM can perform certain tasks that might be considered "intelligent" (like generating coherent text, answering questions, or even writing code). But this "intelligence" is narrow, specific to the data it's trained on and the algorithms that govern the responses.
Language models lack common sense, intuition, and the ability to truly learn from experience in the same way a human does. Their "understanding" is statistical, not conceptual.
That's why you get hallucinated facts, broken code, and three-fingered portraits. Sure, there might be workarounds, but you can't patch away a fundamental limitation. These aren't bugs. They're features of a system that mimics understanding without actually having any.
Personal Experiences
I have been working with LLMs for years and also use them as support in my current garlic-hub project.
They are helpful for research, translations, summaries, code explanations, concept work, prototyping, and documentation, but not for writing production code.
Even with precise prompts, you'll too often get inconsistent, crappy code that breaks the moment you touch it.
Try letting an LLM write unit tests for your own classes once they consist of more than getters and setters.
Among other oddities, you will find the following:
- testing of private methods
- mocking of inherited methods
- even mocking the class under test
- inconsistent naming, even though the prompts are specific
Let's be clear: unit tests are designed to test isolated, manageable pieces of code. If a tool marketed as "intelligent" can't even write tests for a single class reliably, how is it supposed to automate software development or rule the world?
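To illustrate the kind of oddity I mean, here is a contrived, hypothetical example (not the output of any specific tool; the class is invented for the sketch) of the "mocking the class under test" pattern next to what a useful test looks like:

```python
import unittest
from unittest.mock import MagicMock

# Hypothetical class under test, a stand-in for a real domain class.
class PriceCalculator:
    def __init__(self, tax_rate: float):
        self.tax_rate = tax_rate

    def gross(self, net: float) -> float:
        return net * (1 + self.tax_rate)

class TestPriceCalculator(unittest.TestCase):
    def test_gross_mocked_away(self):
        # Anti-pattern: the class under test is replaced by a mock, so the
        # assertion only verifies that the mock returns what we told it to.
        calculator = MagicMock(spec=PriceCalculator)
        calculator.gross.return_value = 119.0
        self.assertEqual(calculator.gross(100.0), 119.0)  # proves nothing

    def test_gross_real_object(self):
        # A useful unit test exercises the real implementation.
        calculator = PriceCalculator(tax_rate=0.19)
        self.assertAlmostEqual(calculator.gross(100.0), 119.0)

if __name__ == "__main__":
    unittest.main()
```

The first test always passes no matter how broken the real class is, and that is exactly the kind of green-but-worthless output you have to catch in review.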
But You Need to Use it Right
If something doesn't work as expected, the typical response is: "You're using it wrong."
The AI evangelists insist you need to change how you program; stop being a programmer, start being a software engineer.
Their vision: create detailed documentation of the modules you need and let the agent do the work, including unit tests.
If something's buggy or needs to change, you update the documentation, let the agent regenerate the code, and review again. They call this "vibe coding," and according to them, it saves tons of time.
Sounds reasonable at first glance, right? Let's dig into that.
The Time Saving Myth
Of course, starting something new leads to fast results. At first! Eventually, it consumes more and more time.
Why?
- Technical debt: AI-generated code is often suboptimal, buggy, difficult to maintain, poorly documented, insecure, or not scalable. "It just works" is not enough.
- Debugging and correction: Debugging, fixing bugs, or adding new features to this code base later on is more labor-intensive than writing clean code manually.
- Lack of understanding: People might review the first few results, but once the output seems to work as expected, they often adopt code blindly without understanding the implementation decisions. This makes maintenance and changes a high risk.
- Redevelopment: If the system needs to grow or scale, vibe code can become so unusable that a complete redevelopment is necessary – negating all the initial time savings.
The immediate productivity gains (rapid prototyping) are essentially a “high-interest loan on the future” of the code base, which will later become due in the form of high maintenance costs.
History Repeats Itself
The same often happens with third-party libraries (as in the Node.js ecosystem) or frameworks. At first they save you time, but creating software is only the start; maintaining it in the long run is the real challenge.
Remember the no-code circus? Similar crappy promises, different label, same dead end.
Perceived Efficiency Study
An interesting METR study shows that experienced developers think they are about 24% faster with AI, but they're actually about 20% slower.
If you ask AI Bros about this, they will yell that this study is based on wrong assumptions and that you first need to learn a lot about how to use vibe coding. Remember "You Need to Use it Right"?
But marketing talks only about "intelligence". Every AI coding company tells us how easy their tools are to use and advertises "reviews" from people who claim to have realized huge projects in a few days without any programming skills.
But the moment you start measuring the tool against those promises and it fails, the excuses appear: you need to put in endless hours and months of learning.
So what's the conclusion? You should spend months studying to achieve a rather questionable time saving that is more felt than real?
You will never read this from the AI-Bros.
What you also never read about is quality assurance and code maintenance.
The Junior Developer Problem
Another point is often ignored, too: to review code, you need skills, years of experience, and probably a few failures of your own.
If "vibe coding" becomes standard, how do junior developers learn their craft? You can't learn to review code you never learned to write. We'd be creating a generation of developers who can prompt but can't program. They'd be like consultants without expertise.
LLM Training Reaches Limits
The so-called revolution from LLMs is based on scraping, storing, and searching huge amounts of data. What the ignorant world calls AI is just a highly trained auto-complete feature.
But there's only one Stack Overflow, one YouTube, one Wikipedia to scrape. We've already done that, and now we're running out of training material (The Conversation, Business Insider).
The widely propagated solution? Use synthetic data, which means: LLM-generated data to train other LLMs. Nvidia is already building a synthetic data generator called Nemotron.
Meanwhile, more and more text on the web is LLM-generated, and it's nearly impossible to distinguish machine-generated text from human-written content.
How hard is it to see where this leads? Training AI on AI-generated content will inevitably cause model collapse and degeneration.
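For a rough intuition of why recursive training is risky, here is a toy statistical sketch (an illustration of the sampling argument only, not how production LLMs are trained): each "generation" is fitted to a small sample drawn from the previous generation, never to fresh data, so sampling noise compounds.

```python
import random
import statistics

# Toy illustration of recursive training: generation N is "trained"
# (here: a simple mean/stddev fit) only on samples produced by generation N-1,
# never on fresh data from the real distribution.
random.seed(0)
mu, sigma = 0.0, 1.0                       # the "real world" distribution
for generation in range(20):
    samples = [random.gauss(mu, sigma) for _ in range(30)]  # small synthetic set
    mu = statistics.mean(samples)          # the next "model" fits only this
    sigma = statistics.stdev(samples)
    print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")

# The fitted parameters perform a random walk driven purely by sampling noise;
# nothing ever pulls them back toward the original distribution, which is the
# core of the model-collapse argument.
```

Real model collapse is more subtle than a mean and a standard deviation, but the mechanism (each generation learning from the previous one's output instead of from human data) is the same concern.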
AI Bros do not discuss this and companies chasing investor money keep pushing their "revolution" at all costs.
Self-Driven Hype
Unfortunately, this mix of agitators, marketers, and their uncritical followers has created a relentless hype cycle.
Plenty of mediocre ex-managers and corporate has-beens crave public visibility. Some are paid to push a narrative; others genuinely see themselves as visionaries who deserve an audience.
All of them open their mouths to fill the air with empty, meaningless words. Their followers, in turn, are simply cheerleading. If you express skepticism, the retort is often: "Hey, Bill Gates is a billionaire. He knows what is going on." No, he doesn't know any more than the rest of us. He's speculating just like thousands of others. But for some reason, people trust vague predictions from wealthy people as if wealth equals expertise.
This creates a self-fulfilling prophecy: shortsighted decision-makers fear missing out if they don't jump on the AI bandwagon. So they start slapping "AI" on everything, whether it makes sense or not.
In the end, the hype just fuels a reckless, resource-guzzling industry that burns cash and electricity.
More Absurdities from My Industry
My home is the digital signage industry, and while other industries face similar nonsense, here it's downright absurd.
Companies constantly invent pointless "features" just to slap 'AI' somewhere on their landing pages.
Like:
- Display chatbots: because apparently people want to have conversations with a screen showing ads, instead of just reading the information.
- "AI-optimized" ad placement: algorithms that optimize for metrics nobody asked for, ignoring actual campaign goals. The AI decides what's "best," not your business objectives.
- Predictive Audience AI: tools that claim to forecast audience behavior and generate impressive projections based on laughably small datasets. Garbage in, "insights" out.
- Age/gender estimation: computer vision that's been around for years, now rebranded as "AI-powered." It fails in noisy environments, struggles with viewing angles, and provides data that's rarely actionable anyway.
None of this solves real problems. New software features should be driven by customer pain points, not by marketing departments desperate for buzzwords.
Whenever you see a digital signage company shouting about AI, remember this: it’s not built to help you or your business. It’s built to sell you.
Stop Calling LLMs AI
I'm not worried about a Terminator future. There's no technical singularity coming from this approach.
The industry is now running into problems because it has already scraped most of the available data.
Plans to train on synthetic data will lead to degeneration.
An LLM is a useful tool that impressively simulates human behavior. No more, no less. It can perform tasks that might seem intelligent.
But its capabilities are just reflections of data and algorithms, not a sign of true consciousness or human-like intelligence.
Here's the reality: LLMs are calculators that learned to talk. Nobody panicked that calculators would replace mathematicians. Stop pretending autocomplete will replace engineers.


Top comments (24)
Yeah fr the hype around “AI” feels way louder than what these tools actually do.
I mostly just use AI tools day-to-day, and they're great for explanations and ideas, but I've never felt like they could replace real engineering.
People talk like we’re 5 minutes away from replacing devs… meanwhile half the time you still have to fix what it generates...
Useful? Absolutely.
Actual “intelligence”? Eh, feels like marketing more than reality.
I do not use AI tools. I use LLM-Tools. lol
Ah, if only companies actually added new “AI” features! Sometimes they just rebrand their old features as AI — I know a thing or two about that ;)
I also absolutely agree when it comes to juniors and vibe coding. I won’t say vibe coding is always bad — it works great for prototyping.
But sooner or later, someone still has to clean up the mess afterwards...
That final sentence gives me a major headache. You've hit the nail on the head regarding the danger of "prototypes becoming production."
This isn't a new problem; it's an accelerated version of existing technical debt.
I know of software companies stuck on ancient Node.js packages precisely because the code became so fragile that any library update would lead to disaster.
The same disaster will surely happen when those LLM-generated "prototypes" migrate to production.
AI is the correct terminology because language changes -- enough people refer to LLMs as AI that it's now AI, whether someone likes it or not.
This post feels so dismissive. The truth (as I see it) is that these tools are genuinely useful for some tasks, overhyped for others, and we're still figuring out where the boundaries are. Pretending they're "just" anything misses that complexity.
It is not a matter of like or unlike. It is wrong. Period. LLMs have nothing to do with intelligence.
Of course, you can create a new sweet bread and brand it cake.
Maybe you can convince people using your wording. But again: It is technically wrong. It is bread, not cake.
Why is it so difficult to use the correct names for things?
We are talking about an interesting technology, but not a revolution. Just some search algorithms, optimized over the years, running on powerful hardware.
You are right, we need to find out what is possible. But the boundaries are already clear (for me), because LLMs have natural limitations.
The absurdity of the naming is the reason for the overhyping, the overexpectations, and the prediction of silly things that are not possible.
Devs will not be replaced so fast, and a Terminator future will not become reality because of LLMs.
AI is the new HTML programming. :-D
Sure, technically it’s not “intelligence” in the cognitive-scientific sense. But if something behaves intelligently enough to replace cognitive labor, society will call it AI, period -- as we are seeing right now.
Before AI (or LLM), learning a new programming language, building an app, and getting it into the App Store would take weeks or months. Over two weekends recently, I built my first iOS app in Swift (a language I’d never used) and released it successfully with a few hundred users. I also had an AI generate a custom WordPress plugin that worked perfectly the first time -- I never even touched the code.
That’s not just an interesting technology; it changes what’s actually possible for one person to do. How is that not revolutionary?
(Edited for formatting and a missing line ending.)
Full ack. Pandora's box is open, and people will continue to use "AI" wrongly. This is a nerd discussion, and I would like at least professionals and journalists to try to use the correct technical wording (LLM).
Because this way you do not understand the language and OS concepts. I needed years to understand programming and the concepts of every language.
You just produced something that works.
Believe me or not: That is not enough.
I fucked up projects in my past because of missing QA and structure. Without a deep understanding of and experience with the concepts, plus sustainable quality assurance, you will run into problems sooner or later as your codebase grows.
It is like learning piano with automatic accompaniment. You will be faster at playing something convenient and nice-sounding, but you will not become a real pianist if you do not go the hard way and learn to play the accompaniment yourself.
LLMs help us learn a language more efficiently, because we can discuss things with them. Of course, only when the hallucinations are not too heavy. That is big progress compared to reading tutorials and Stack Overflow. But it is not a revolution, and I would not code production apps with AI.
Niko, I appreciate your passion, but I think we will have to "agree to disagree" on this. If I ever find myself in Hannover, I'll ping you and we can continue the discussion over a drink or two. ;)
No problem. I can stand other opinions. hahaha.
Thank you for your insights. That is more important than to agree.
... or three ;). Yes, ping me, maybe I will be there too. Although I am more the travel guy.
I think you have misunderstood what AI means. It does not mean an artificial way of being intelligent; it means that the intelligence is artificial. AI isn't intelligent, it just produces results that are intelligent-like.
That looks like a very artistic interpretation of AI, let's say. :) In art, you cannot be wrong, because everyone understands it in their own way.
For me (I can be wrong), your explanation is based on a misinterpretation.
I have seen AI attempts since the 80s, when a piece of software named Eliza ran on my Atari ST.
In my perception, everything I have read about AI in the last 40 years implied that the goal was to create really intelligent, self-learning software, not simulations.
Every attempt to create intelligent software failed in the past. Now we have a neural transformer technology with machine learning, named LLM. LLMs are designed to simulate human behavior. Even the inventors named it generative pre-trained transformer (GPT) and not AI.
I appreciate you taking the time to answer, but I'm still not convinced that you understand what AI is. Eliza is not intelligent, it is more or less just a bunch of if statements. The first computer program that beat a grand master in chess was not intelligent, it was just a huge computer that could do deep search in a big tree really fast. It just brute-forced a solution, but it still beat a very intelligent person at their own game.
You are right when you say that every attempt to create intelligent software has failed, but in my opinion, the perception that anyone has claimed to have that as a goal is wrong, and most likely just a product of mainstream media not understanding what they are reporting - as usual. A computer can never have real intelligence. It can learn, yes. It can reason, it can evolve, it can do tasks better than a human, sure, but none of that is a sign of real intelligence - other than that of the creators of the algorithm.
There are especially two things that spring to mind when you approach this as if AI is about a machine being intelligent. First: AI is a collective term for all systems that mimic intelligent behavior, which explains why, for instance, Eliza was seen as AI at the time. Second: we don't even know what real intelligence is! How can we decide whether a machine is intelligent or not if we don't even understand what it means to be intelligent? Other than comparing the result to something clever someone did at some point, we have no clue.
In regards to sheer brain power, AI will surpass humans in the not too distant future and we will find ourselves in the sticky situation that we can't figure out if the AI is lying to us or not. But real intelligence? Never. It is just not possible - not unless we totally redefine what intelligence is.
Thank you for your detailed response. I'd like to address a few points that I think deserve closer examination:
Historical Context of AI Goals
The claim that no one has aimed for "real intelligence" doesn't align with the history of AI research. The 1956 Dartmouth Conference, which founded AI as a field, explicitly stated the goal of making machines that could simulate "every aspect of learning or any other feature of intelligence." Many researchers, particularly those working on Artificial General Intelligence (AGI), have indeed pursued systems that could match or exceed human cognitive capabilities across domains and not just mimic intelligent behavior.
The Definition Problem
You state both that "we don't even know what real intelligence is" and that "a computer can never have real intelligence." This creates a logical inconsistency: How can we categorically exclude something whose definition we don't possess? If we cannot define intelligence precisely, we cannot definitively say what does or does not qualify as intelligent.
Learning and Reasoning as Intelligence
You acknowledge that AI can "learn" and "reason" yet dismiss these as signs of intelligence. Many cognitive scientists and philosophers would argue that learning and reasoning are core components of intelligence. If a system demonstrates these capabilities, the question becomes: What additional criteria must be met for it to count as "truly" intelligent?
Simulation vs. Reality
The distinction between "simulating" intelligence and "being" intelligent raises a deeper philosophical question: If we can only judge intelligence by observable behavior and outcomes (since we have no direct access to subjective experience), is the distinction meaningful? This relates to the classic Turing Test debate.
I agree that AI terminology is often misused in media, and that critical thinking about these concepts is essential. However, I think the boundaries are less clear-cut than your post suggests.
While we cannot currently get LLMs to learn as they go, this is a primary area of current research that would unlock another opportunity for growth. It's very clear from Anthropic's work that LLMs create "concept" features in their weights, which are not about words, but about principles and utilise these in various ways that are far more complex than just looking for the next token. I'd say token prediction is what we all have to do when we write, and language is a primary tool used in reasoning... So I'm not writing LLMs off as "auto-complete".
I agree that the definition of intelligence is difficult, but something that can reason out a problem (as LLMs have now been shown to do, by being tested on problems not expressed in their training data) must have some level of "intelligence" by any practical definition. Human intelligence? No. AGI compared to humans? No, not yet. Artificial Intelligence? Surely yes. I mean, I think my dog is intelligent. He can find things without me showing him where they are, and he's learned how to ask us for food. I consider him intelligent, but I'm not giving him my job anytime soon :)
There is definitely marketing hype surrounding AI, especially in relation to areas that aren't primary research and development; grand claims are made. But I believe there is something real here, too.
I like the dog analogy here too, but I believe it fundamentally misses the distinction between mimicry and motive.
Yes, LLMs display impressive reasoning patterns (like the concept features you mention), but those features are still abstractions of data structures. There is no genuine understanding or new experience.
The key is not to find a level of "intelligence" where LLMs fit, but to recognize that the architecture itself prevents the kind of adaptive, causal learning your dog excels at.
FINALLY! THANK YOU SO MUCH! I've had just about enough of this AI nonsense.
I agree with most of this, but it also... kinda feels like a semantic argument? Like "Don't call LLMs AI" is like saying "Don't call electric vehicles cars." Yeah, it's way different, but what we call AI isn't the problem. The problem, as you point out, is how we use them. And it feels like almost everyone is using them wrong.
I appreciate the analogy, but I disagree that it's just a semantic argument.
The distinction between LLMs and true AI is crucial because it defines the fundamental limitations of the tool. Your car analogy doesn't quite fit:
An EV is a car because it serves the same function (transportation) and obeys the same physical rules (gravity, friction). An LLM is not intelligence; it is a statistical simulation of language produced by intelligence.
The core problem: The current approach (LLM) is fundamentally designed to be a simulation, a highly trained autocomplete machine. You cannot upgrade a simulation concept into an original.
If we want real general intelligence, we need a completely different conceptual and architectural approach.
Common sense articulated expertly!!
Yes, sometimes it is required. 😃
Great article. In my mind, "AI" is just the next version of the search engine.
Thank you. And, yes, I see it similarly. One cool application of LLMs is as a search engine add-on.
Reality check!