

Stop Calling LLMs AI

Niko Sagiadinos on November 05, 2025

TL;DR This article explains why LLMs are useful tools, but calling them "AI" is dumb marketing bullshit that leads to bad decisions, was...
Aryan Choudhary

Yeah fr the hype around “AI” feels way louder than what these tools actually do.
I mostly just use AI tools day-to-day, and they're great for explanations and ideas, but I've never felt like they could replace real engineering.
People talk like we’re 5 minutes away from replacing devs… meanwhile half the time you still have to fix what it generates...
Useful? Absolutely.
Actual “intelligence”? Eh, feels like marketing more than reality.

Niko Sagiadinos

I do not use AI tools. I use LLM-Tools. lol

Sylwia Laskowska

Ah, if only companies actually added new “AI” features! Sometimes they just rebrand their old features as AI — I know a thing or two about that ;)
I also absolutely agree when it comes to juniors and vibe coding. I won’t say vibe coding is always bad — it works great for prototyping.
But sooner or later, someone still has to clean up the mess afterwards...

Niko Sagiadinos

That final sentence gives me a major headache. You've hit the nail on the head regarding the danger of "prototypes becoming production."

This isn't a new problem; it's an accelerated version of existing technical debt.

I know of software companies stuck on ancient Node.js packages precisely because the code became so fragile that any library update would lead to disaster.

The same disaster is sure to happen when those LLM-generated "prototypes" migrate to production.

CoSJay

AI is the correct terminology because language changes -- enough people refer to LLMs as AI that it's now AI, whether someone likes it or not.

This post feels so dismissive. The truth (as I see it) is that these tools are genuinely useful for some tasks, overhyped for others, and we're still figuring out where the boundaries are. Pretending they're "just" anything misses that complexity.

Niko Sagiadinos

It is not a matter of like or dislike. It is simply wrong. LLMs have nothing to do with intelligence.

Of course, you can create a new sweet bread and brand it as cake.

Maybe you can convince people with your wording. But again: it is technically wrong. It is bread, not cake.

Why is it so difficult to use the correct names for things?

We are talking about an interesting technology, but not a revolution. Just search algorithms, optimized over the years, running on powerful hardware.

You are right that we need to find out what is possible. Only the boundaries are already clear (to me), because LLMs have natural limitations.

This absurd naming is the reason for the overhyping, the inflated expectations, and the silly predictions of things that are not possible.

Devs will not be replaced that fast, and a Terminator future will not become reality because of LLMs.

AI is the new HTML programming. :-D

CoSJay • Edited

> LLMs have nothing to do with intelligence.

Sure, technically it’s not “intelligence” in the cognitive-scientific sense. But if something behaves intelligently enough to replace cognitive labor, society will call it AI, period -- as we are seeing right now.

> We are talking about an interesting technology, but not a revolution.

Before AI (or LLM), learning a new programming language, building an app, and getting it into the App Store would take weeks or months. Over two weekends recently, I built my first iOS app in Swift (a language I’d never used) and released it successfully with a few hundred users. I also had an AI generate a custom WordPress plugin that worked perfectly the first time -- I never even touched the code.

That’s not just an interesting technology; it changes what’s actually possible for one person to do. How is that not revolutionary?

(Edited for formatting and a missing line ending.)

Niko Sagiadinos

> Sure, technically it’s not “intelligence” in the cognitive-scientific sense. But if something behaves intelligently enough to replace cognitive labor, society will call it AI, period -- as we are seeing right now.

Full ack. Pandora's box is open, and people will continue to use "AI" wrongly. This is a nerd discussion, and I would like at least professionals and journalists to try to use the correct technical wording (LLM).

> That’s not just an interesting technology; it changes what’s actually possible for one person to do. How is that not revolutionary?

Because this way you do not learn the languages and the OS concepts. I needed years to understand programming and the concepts of every language.

You just produced something that works.

Believe it or not: that is not enough.
I fucked up projects in my past because of missing QA and structure. Without deep understanding of and experience with the concepts, and without sustainable quality assurance, you will run into problems sooner or later as your codebase grows.

It is like learning piano with automatic accompaniment. You will more quickly play something pleasant and nice-sounding, but you will not become a real pianist if you do not go the hard way and learn to play the accompaniment yourself.

LLMs help us learn a language more efficiently, because we can discuss with them. Of course, only when the hallucinations are not too heavy. That is big progress compared to reading tutorials and Stack Overflow. But it is not a revolution, and I would not code production apps with AI.

CoSJay

Niko, I appreciate your passion, but I think we will have to "agree to disagree" on this. If I ever find myself in Hannover, I'll ping you and we can continue the discussion over a drink or two. ;)

Niko Sagiadinos

No problem. I can stand other opinions. hahaha.

Thank you for your insights. That is more important than agreeing.

... or three ;). Yes, ping me, maybe I will be there too. Although I am more the travel guy.

Pal Bronlund

I think you have misunderstood what AI means. It does not mean an artificial way of being intelligent; it means that the intelligence is artificial. AI isn't intelligent, it just produces results that are intelligent-like.

Niko Sagiadinos

That looks like a very artistic interpretation of AI, let's say. :) In art, you cannot be wrong, because everyone understands it in their own way.

For me (I could be wrong), your explanation is based on a misinterpretation.

I have followed AI attempts since the 80s, when a piece of software named Eliza ran on my Atari ST.

In my perception, everything I have read about AI over the last 40 years implied that the goal was to create truly intelligent, self-learning software, not simulations.

Every past attempt to create intelligent software has failed. Now we have a neural transformer technology with machine learning, named LLM. LLMs are designed to simulate human behavior. Even the inventors named them generative pre-trained transformers (GPT), not AI.

Pal Bronlund • Edited

I appreciate you taking the time to answer, but I'm still not convinced that you understand what AI is. Eliza is not intelligent; it is more or less just a bunch of if statements. The first computer program to beat a grand master in chess was not intelligent either; it was just a huge computer that could do a deep search of a big tree really fast. It brute-forced a solution, yet it still beat a very intelligent person at their own game.
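
To make that concrete, here is a minimal Eliza-style sketch in Python (a toy reconstruction for illustration, not the original program): a handful of regex rules with canned answers that feel conversational despite there being no understanding behind them.

```python
# Toy Eliza-style rule matcher (illustrative reconstruction, not the original program).
# Ordered regex patterns with canned answers: no model, no learning, no understanding.
import re
import random

RULES = [
    (r"\bi need (.+)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"\bi am (.+)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
]

def respond(message: str) -> str:
    text = message.lower()
    for pattern, answers in RULES:
        match = re.search(pattern, text)
        if match:
            # Reflect the matched fragment back at the user, Eliza-style.
            return random.choice(answers).format(*match.groups())
    return "Please, go on."  # generic fallback when nothing matches

print(respond("I am tired of debugging"))  # e.g. "Why do you think you are tired of debugging?"
print(respond("I need a vacation"))        # e.g. "Why do you need a vacation?"
```

Nothing in there learns or understands anything; it just matches strings, yet at the time that kind of program was happily called AI.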

You are right when you say that every attempt to create intelligent software has failed, but in my opinion the perception that anyone has claimed that as a goal is wrong, and is most likely just a product of mainstream media not understanding what they are reporting - as usual. A computer can never have real intelligence. It can learn, yes. It can reason, it can evolve, it can do tasks better than a human, sure, but none of that is a sign of real intelligence - other than that of the creators of the algorithm.

There are two things in particular that spring to mind when you approach this as if AI were about a machine being intelligent. First: AI is a collective term for all systems that mimic intelligent behavior, which explains why, for instance, Eliza was seen as AI at the time. Second: we don't even know what real intelligence is! How can we decide whether a machine is intelligent if we don't even understand what it means to be intelligent? Other than comparing the result to something clever someone did at some point, we have no clue.

In terms of sheer brain power, AI will surpass humans in the not-too-distant future, and we will find ourselves in the sticky situation of not being able to figure out whether the AI is lying to us. But real intelligence? Never. It is just not possible - not unless we totally redefine what intelligence is.

Niko Sagiadinos

Thank you for your detailed response. I'd like to address a few points that I think deserve closer examination:

  1. Historical Context of AI Goals
    The claim that no one has aimed for "real intelligence" doesn't align with the history of AI research. The 1956 Dartmouth Conference, which founded AI as a field, explicitly stated the goal of making machines that could simulate "every aspect of learning or any other feature of intelligence." Many researchers, particularly those working on Artificial General Intelligence (AGI), have indeed pursued systems that could match or exceed human cognitive capabilities across domains and not just mimic intelligent behavior.

  2. The Definition Problem
    You state both that "we don't even know what real intelligence is" and that "a computer can never have real intelligence." This creates a logical inconsistency: How can we categorically exclude something whose definition we don't possess? If we cannot define intelligence precisely, we cannot definitively say what does or does not qualify as intelligent.

  3. Learning and Reasoning as Intelligence
    You acknowledge that AI can "learn" and "reason" yet dismiss these as signs of intelligence. Many cognitive scientists and philosophers would argue that learning and reasoning are core components of intelligence. If a system demonstrates these capabilities, the question becomes: What additional criteria must be met for it to count as "truly" intelligent?

  4. Simulation vs. Reality
    The distinction between "simulating" intelligence and "being" intelligent raises a deeper philosophical question: If we can only judge intelligence by observable behavior and outcomes (since we have no direct access to subjective experience), is the distinction meaningful? This relates to the classic Turing Test debate.

I agree that AI terminology is often misused in media, and that critical thinking about these concepts is essential. However, I think the boundaries are less clear-cut than your post suggests.

Mike Talbot ⭐

While we cannot currently get LLMs to learn as they go, this is a primary area of current research that would unlock another opportunity for growth. It's very clear from Anthropic's work that LLMs create "concept" features in their weights, which are not about words, but about principles and utilise these in various ways that are far more complex than just looking for the next token. I'd say token prediction is what we all have to do when we write, and language is a primary tool used in reasoning... So I'm not writing LLMs off as "auto-complete".
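
For anyone who wants to see what "looking for the next token" means at the interface level, here is a minimal sketch of greedy next-token prediction - assuming, purely for illustration, the Hugging Face transformers library and the small GPT-2 checkpoint:

```python
# Minimal sketch of greedy next-token prediction (illustrative assumption:
# Hugging Face transformers + the small "gpt2" checkpoint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

for _ in range(5):
    with torch.no_grad():
        logits = model(input_ids).logits          # one score per vocabulary token
    next_id = logits[0, -1].argmax()              # greedily take the most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

That loop is all the interface does; the debate is about what has to be going on inside the weights for it to work as well as it does.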

I agree that the definition of intelligence is difficult, but something that can reason out a problem (as LLMs have now been shown to do when tested on problems not expressed in their training data) must have some level of "intelligence" by any practical definition. Human intelligence? No. AGI compared to humans? No, not yet. Artificial intelligence? Surely yes. I mean, I think my dog is intelligent. He can find things without me showing him where they are, and he's learned how to ask us for food. I consider him intelligent, but I'm not giving him my job anytime soon :)

There is definitely marketing hype surrounding AI, especially in relation to areas that aren't primary research and development; grand claims are made. But I believe there is something real here, too.

Niko Sagiadinos • Edited

I like the dog analogy too, but I believe it fundamentally misses the distinction between mimicry and motive.

  • Dog Intelligence is Embodied: Your dog possesses "embodied intelligence." It gathers new, real-world experience, understands causality (if I ask, I get food), and can learn novel behavior outside its training set.
  • LLMs are Static Simulations: An LLM is a static statistical model (a highly complex autocomplete) that is designed to simulate the output of human intelligence. You cannot upgrade a simulation concept into an original.

Yes, LLMs display impressive reasoning patterns (like the concept features you mention), but those features are still abstractions of data structures. There is no genuine understanding or new experience.

The key is not to find a level of "intelligence" where LLMs fit, but to recognize that the architecture itself prevents the kind of adaptive, causal learning your dog excels at.

James 'Dante' Midzi • Edited

FINALLY! THANK YOU SO MUCH! I've had just about enough of this AI nonsense.

Juno Threadborne

I agree with most of this, but it also... kinda feels like a semantic argument? Like "Don't call LLMs AI" is like saying "Don't call electric vehicles cars." Yeah, it's way different, but what we call AI isn't the problem. The problem, as you point out, is how we use them. And it feels like almost everyone is using them wrong.

Niko Sagiadinos • Edited

I appreciate the analogy, but I disagree that it's just a semantic argument.

The distinction between LLMs and true AI is crucial because it defines the fundamental limitations of the tool. Your car analogy doesn't quite fit:

An EV is a car because it serves the same function (transportation) and obeys the same physical rules (gravity, friction). An LLM is not intelligence; it is a statistical simulation of language produced by intelligence.

The core problem: The current approach (LLM) is fundamentally designed to be a simulation, a highly trained autocomplete machine. You cannot upgrade a simulation concept into an original.

If we want real general intelligence, we need a completely different conceptual and architectural approach.

Eddie

Common sense articulated expertly!!

Niko Sagiadinos

Yes, sometimes it is required. 😃

Rob vanBrandenburg

Great article. In my mind, 'AI' is just the next version of the Search engine.

Niko Sagiadinos • Edited

Thank you. And yes, I see it similarly. One cool application of LLMs is as a search engine add-on.

leob

Reality check!