Tiago Nobrega

The Untold Issues with AI Job-Takeover Theory (Chapter 2)

TLDR: The fact that AI will improve does not automatically mean it will replace most workers. Progress matters, but so do limits, timelines, economics, and diminishing returns.

The Theory
There's an ongoing theory that AI will take most jobs, and definitely all software development jobs. I think this is highly exaggerated for a few reasons. In this post, I want to look at how AI evolves, where it might take us, and what that means.

The Worst AI You'll Ever Use

Today's AI is the worst AI you'll ever use. I bet you've heard something like this before. If you've been interacting with LLMs over the past year, it's obvious. So, what does this mean? If we think about it, this statement is true of almost any piece of technology, not just in computer science. We have the worst cars, planes, boats, TVs, medicines, and gym equipment we'll ever have. Maybe not zippers; those have stayed roughly the same since their invention (crazy, I know).

But none of these things have the capability of taking our jobs, you might argue. Ok, but the point is that just saying something is getting better doesn't tell us anything about where it will take us or how long it will take to get there.

AGI? ASI? WTFAI?

There's a logical leap embedded in that statement: just because AI is getting better doesn't mean it will evolve to the point of replacing anyone. The question is: "How far can it go?"

There's a big discussion about when AI will reach AGI (Artificial General Intelligence) or ASI (Artificial Super Intelligence). On X (Twitter), we're reaching AGI about five times a day at this point. But the real question is not when, but whether we'll get there.

The first big issue with reaching AGI is defining AGI. There's no metric for human intelligence. Life is not an RPG. No, IQ is not a measure of intelligence; it's a measure of cognitive abilities. Animal intelligence is usually measured by comparison to human abilities at a given age. Very subjective. ASI is an even fuzzier definition.

So, there's no cinematic moment where Sam Altman sees something on a screen, looks at the camera and says: "We've done it! AGI is born!" or some other cringe sentence.

Regardless, even if we define AGI as roughly matching the cognitive abilities of an average human, that still raises another question: can we actually build it? No one knows if it's possible to achieve that using LLMs or machine learning. But what if we can?

I want you to do an exercise. Think about everyone you know and how smart (or not) they are. This AGI would, by definition, be dumber than half of the people you know at most tasks. Not very impressive when you think about it in those terms.

It all happened so fast... too fast!

That's a lot of conjecture about where we're going. No one knows. But one thing is certain: it's moving fast, right? Right?

I won't deny the improvements made in the past 5 years (or even in the last year) were very, very impressive. It feels like models are getting better by the minute, at least most of the time.

The thing is, though, that just like a celebrity post-facelift, AI looks a lot younger than it really is. The real birth of AI is hard to pinpoint. You could go back to Alan Turing's "Imitation Game", but that would be a bit unfair. Considering the current hype started with the GPT-3 release, I think it's fair to go back to the first chatbot. Or should I say chatterbot? ELIZA, the first chatterbot, was created in 1964. So, why now?

The speed of recent AI advancements can be attributed to a number of factors. A big one is the "Attention Is All You Need" paper. The transformer architecture it introduced unlocked massive parallelism in AI computing, which is exactly the kind of workload GPUs are built for. When you combine this with the GPU hardware advancements made by companies like NVidia, you get a perfect storm (and a bunch of sad gamers).
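To make that concrete, here's a minimal sketch of the scaled dot-product attention at the heart of that paper (in Python/NumPy, with toy sizes I picked purely for illustration). Unlike a recurrent network, which steps through a sequence token by token, this is just a few matrix multiplications over the whole sequence at once, and matrix multiplication is what GPUs chew through:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Similarity of every token with every other token: one matrix multiply.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # shape (seq_len, seq_len)
    # Row-wise softmax turns raw scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted sum of the value vectors: another matrix multiply.
    return weights @ V                                   # shape (seq_len, d_v)

# Toy sizes for illustration: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)       # -> (4, 8)
```

Real transformers stack many of these layers with learned projections, but the key point stands: the whole sequence is processed in parallel.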

This leap attracted investors and fueled many of the predictions we hear today, which led to massive investments in the AI industry. I mean MASSIVE! Great accomplishments need massive investments. This is normal, but...

When we look at other milestones in science, we can spot massive investments. It's estimated that the Apollo program and the Manhattan Project spent about 0.4% of US GDP at the time. In today's terms (roughly $27T of GDP), that would equate to about $108B. Meta alone plans to spend $135B on AI in 2026, and it's not even the leading company in the industry. AI investments dwarf historical megaprojects. It's no longer normal; it's insane.
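The back-of-the-envelope math, using only the figures quoted above:

```python
us_gdp = 27e12           # ~$27T, the rough US GDP figure used above
historic_share = 0.004   # ~0.4% of GDP, the Apollo / Manhattan Project figure cited above

apollo_scale_today = us_gdp * historic_share
print(f"${apollo_scale_today / 1e9:.0f}B")           # -> $108B

meta_ai_2026 = 135e9     # Meta's reported 2026 AI spend mentioned above
print(f"{meta_ai_2026 / apollo_scale_today:.2f}x")   # -> 1.25x an Apollo-scale effort
```

One company's single-year budget is already larger than the whole Apollo-scale equivalent.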

So, when we consider the time and money invested in AI development, is it really something that moves too fast? There's an idea in economics called the "law of diminishing returns". In layman's terms (the best I can provide), it's the idea that it gets harder and harder to improve the quality of a process: each additional unit of investment leads to smaller and smaller gains. It's related to the "Pareto Principle" (aka the 80/20 rule).
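Here's a toy illustration of what that means in practice. The logarithmic "quality" curve below is an assumption made purely for the example, not a claim about how AI actually scales; the point is only that under such a curve, each extra $100B buys a smaller improvement than the last:

```python
import math

def quality(investment_billions: float) -> float:
    # Toy model: "quality" grows with the log of investment.
    # The log curve is an illustrative assumption, not real AI scaling data.
    return math.log(1 + investment_billions)

previous = quality(0)
for spend in (100, 200, 300, 400):   # each step adds another $100B
    q = quality(spend)
    print(f"${spend}B -> quality {q:.2f} (gain from the extra $100B: {q - previous:.2f})")
    previous = q
```

The first $100B moves the needle a lot; the fourth $100B barely moves it at all.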

Think about the last time we got really excited about a new iPhone release. When was the last time you felt the urge to buy a new TV? The year-over-year improvements don't justify any hype.

If we were to predict that AI is going to keep improving at the same pace, we would need to assume one or both of the following:

  • This massive level of investment will continue or grow at the same pace.
  • A major breakthrough will happen in the field.

This is not me arguing that it won't happen. I just want to put the challenges ahead in a realistic perspective. It's not likely we'll see AI make the same leap forward in the next 5 years that it did in the past 5.

State Of The Art

You can't think in "forever" terms. "It will just keep getting better." Whenever you start to consider infinity, everything is possible, even monkeys writing Shakespeare. Regarding job displacement, we need to think in decades, not centuries. What does state-of-the-art AI look like in 10 years?

Right now, AI is really good at pattern matching and information gathering/lookup. It's a big, smart database; it can be dangerous, and it is causing pain right now. We can discuss that, but to say it will end most jobs? Extremely pessimistic (or optimistic, depending on your side). If your job is mostly pattern recognition and information gathering, you've probably been impacted already. And if that's the case, it's not the end of the world. It has happened before, to many people, at large scale. But that's a topic for another time.

AI still needs to improve a lot in other areas of cognition, like logic, creativity, and visual/spatial reasoning. Not to mention the lack of reliability, which is inherent to its architecture. So what do you think? How far can it get in your lifetime?

Keep Coding. Until next time.
