Why Augmented after all?
Why, when I talk about AI, do I almost always mean Augmented rather than Artificial Intelligence?
What is Augmented Intelligence anyway?
Spelled out, the distinction is: Augmented, not Artificial, Intelligence.
It's quite difficult to establish who coined the term, because the idea began developing long ago, well before transformers appeared, roughly around the same time AI was first talked about as Artificial (the late 1960s and early 1970s).
Augmented Intelligence is talked about in many ways: as a system design pattern, as an approach to "human enhancement" and productivity improvement; the word Augmented also appears in the context of theoretical LLM training methodologies, and more. Some people will sooner think of AR; others, perhaps, of Musk's Neuralink.
So in this post I want to lay out my own interpretation of the term, why I use it, and how seriously I take it.
My interpretation is grounded in my experience and still gravitates toward a "design pattern"; we just need to figure out a pattern for designing what, exactly.
TL;DR
Artificial Intelligence is about autonomous, independent agents that work entirely without humans 🤘🏻
Augmented Intelligence is about a kind of symbiosis of human and AI, where each strengthens the other 👤🤝🏻🤖
In reality, the second (Augmented) already works far more often, and far better, than the first; it is much more present today than fully independent Artificial intelligence. And as the technology develops, the symbiosis mindset still looks more appealing than the "exploitation" mindset, even (not if, but when) artificial neural networks become still more cognitively powerful!
Why exploitation? Because Artificial, in our species' "collective unconscious," is already shaping our relationship to AI as one of ownership. Have you seen all those memes about bullying and threatening poor ChatGPT?
Being the boss might not be bad, but cultivating a boss mindset is, in my opinion, counterproductive.
Let's return to my experience and step away from science fiction and anthropological speculation.
Mindset matters
I've already started talking about mindset, and it matters a great deal. Fortunately or unfortunately, AI is quite limited, especially as soon as we try to apply it in a narrow, vertical niche.
When chatting with the likes of ChatGPT or Claude, many people get the impression that "it's very smart," "it can do almost anything." This impression forms not only among individuals, but also among companies and entrepreneurs who try to build on LLMs in an existing business or start a new one altogether.
Even quite experienced engineers and managers can find themselves believing that AI systems are something magical and therefore simple to implement: "well, it's already smart, right? It'll figure out our mess somehow, and we'll live happily ever after!"
From this, another misconception follows: AI-related projects don't need to be thought through or modeled at all.
In practice, this turns out to be a road to nowhere. Teams run into difficulties almost immediately and then spend a long time learning the hard way what LLMs can and cannot do. Eventually they discover that people have one language in their heads and the model has a completely different one, and that they will have to do ontological modeling anyway: of the system itself, and of the language in which they talk to each other and to the AI agent.
It turns out that humans are needed in AI systems much more than expected... and here we smoothly approach Augmented.
Human as part of the system
As long as AI hasn't become a recognized entity, let alone a form of life, we continue to exist collectively in human society. Knowing, or at least suspecting, AI's limitations, and especially knowing the exceptional potential of this technology, I believe it's right to talk specifically about Augmented Intelligence as a way to effectively change our reality, and ourselves, for the better.
More and more quality AI systems are appearing already, but all the systems that survive are either extremely specific, aimed at concrete tasks, or complex workflow pipelines that necessarily include humans: people who at minimum maintain the system, and who often also act as observers and quality controllers.
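The human-in-the-loop shape described above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`ai_step`, `human_review`, `pipeline` are mine, not from any real system), where an AI stage produces a draft and nothing leaves the pipeline without a human's explicit approval:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Draft:
    task: str
    content: str
    approved: bool = False


def ai_step(task: str) -> Draft:
    # Placeholder for an LLM call; a real pipeline would hit a model API here.
    return Draft(task=task, content=f"draft answer for: {task}")


def human_review(draft: Draft, reviewer: Callable[[Draft], bool]) -> Draft:
    # The human acts as observer and quality controller:
    # the reviewer callback stands in for a person approving or rejecting.
    draft.approved = reviewer(draft)
    return draft


def pipeline(task: str, reviewer: Callable[[Draft], bool]) -> Optional[Draft]:
    draft = human_review(ai_step(task), reviewer)
    # Only approved work leaves the system; rejected drafts go nowhere.
    return draft if draft.approved else None
```

The point of the sketch is the control flow, not the AI step: the human gate is a structural part of the system, not an afterthought bolted on.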
Moreover, these systems are obviously built to be used by end users, people: outside the organization as clients, or inside it, if we're talking about an AI platform.
My point is that even a "very vertical" agent or AI system was still modeled and created by people, and serves people. Whatever AI is, however we regard it, it is already tightly linked with our systems, large and small.
As for systems that still manage to run "without LLMs," and which may even remain so for a long time due to the specifics of the business: there, too, it makes sense to talk about Augmented, because the efficiency of the people working in such organizations can (and often already does) grow thanks to the rational use of neural networks in their work.
Two scenarios of AI usage
From the end user's perspective, AI has become the proverbial magic box, and there are two ways to use it. In the Artificial Intelligence mindset, the user delegates slow thinking almost entirely to LLMs and grows steadily duller as their cognitive abilities degrade. In the Augmented Intelligence mindset, the user develops, learns, and gets smarter: they accelerate routine work with LLMs, get feedback quickly and evaluate it strictly, delegate much of their fast thinking (and a selected part of the slow thinking) so it happens even faster, while continuing to make the decisions, intermediate and final, and to apply their own mental effort to the task.
What's next?
I think there's nothing wrong with creating more and more AI products and agents that help human agents or other AI agents.
Wherever the evolution of artificial neural networks leads, I have no interest in speculating about doomsday and other "analytical" nonsense. Obviously, we won't stop now, and the benefits still outweigh the harm.
Yes, there seems to be a threat that many people will be deprived of physical labor; well, maybe it's time to work more with our brains and create? Or maybe, on the contrary, more jobs will appear, of a completely new quality, which we will still have to adapt to in an Augmented manner?
I don't know. In this blog I'm not going to waste my attention or yours on these reflections; there is plenty of such material, and if you're interested, you'll easily find it.
As I already said, Augmented Intelligence is about cooperation between different agents that strengthen each other. I want to believe that within this "design pattern" we'll start building systems in which we cooperate better not only with LLMs, but also human with human.
That's why in this blog I write specifically about designing such "symbiotic" systems, where AI strengthens humans, starting with the quality of the results on the tasks at hand.
I believe the conversation about Augmented is grounding: grounding in reality, in an understanding of systems and their interconnections. It is a way of thinking that serves both the conversation itself and efficiency overall. The conversation about Artificial is too ephemeral, often speculative and entertaining simply because that's how it turned out; focusing on artificiality directs attention only toward the LLM, whereas in the conversation about Augmented we don't forget about ourselves, or about the other agents that will work in our system.
If you're building AI systems and want them to actually work, in reality and in production, welcome.