I appreciate the analogy, but I disagree that it's just a semantic argument.
The distinction between LLMs and true AI is crucial because it defines the fundamental limitations of the tool. Your car analogy doesn't quite fit:
An EV is a car because it serves the same function (transportation) and obeys the same physical rules (gravity, friction). An LLM is not intelligence; it is a statistical simulation of language produced by intelligence.
The core problem: the current approach (LLM) is, by design, a simulation, a highly trained autocomplete machine. You cannot upgrade a simulation into the thing it simulates.
If we want real general intelligence, we need a completely different conceptual and architectural approach.
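To make the "autocomplete machine" point concrete, here is a toy sketch (my own illustration, not from the original comment): a bigram model that, like an LLM at vastly larger scale, only emits the statistically most likely next token given its training text. Nothing here understands anything; it just replays word statistics.

```python
from collections import Counter, defaultdict

# A miniature "training corpus". An LLM does the same thing with
# trillions of tokens and a neural network instead of a count table.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word, steps=3):
    """Greedily extend `word` with the most likely next word, `steps` times."""
    out = [word]
    for _ in range(steps):
        candidates = following[out[-1]].most_common(1)
        if not candidates:  # no statistics for this word: stop
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(autocomplete("the"))  # → "the cat sat on"
```

The output looks fluent only because the statistics of the corpus make it so; swap the corpus and the "knowledge" changes with it, which is the sense in which this is simulation rather than understanding.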