Nick Humrich
AI will replace everyone but you

Two years ago, artificial intelligence jumped ahead in usefulness, thanks to the release of generalized LLMs via ChatGPT. Since then, their capabilities have grown, and LLMs can now do all sorts of things, from building applications and generating images to making entire movies.

As is common with any new technological release, there is a crowd of people sowing fear, uncertainty, and doubt. Don't get me wrong, there are times in history when the alarm probably should be sounded, but is this one of them? As I have looked into the claims people are making about AI replacing jobs, I have started to notice a trend.

See if you can spot the same trend as you read these summarized testimonials from people I know (reworded for comedic effect).

Product managers:

Dude, dev and UX is over. I can vibe-code an entire UX and app without a dev or UX team. But PMs? They are here to stay. AI is so bad at Product Management. That's the real skill. Maybe devs who learn PM skills will survive.

Frontend engineers:

Dude, UX is over. I can create working mocks using AI, then easily turn them into a working frontend without needing anyone to design flows for me. But frontend dev is here to stay; the AI-generated frontend code is so sloppy. I will be fixing slop created by these UX designers for years.

UX Designers:

Dude, dev is over, especially frontend. I can create mocks in Figma, then turn them into working software without a dev. But UX designers? They are here to stay; AI is so bad at knowing good UX design. It just takes random concepts and slings them together; nothing is cohesive.

Backend engineers:

Dude, PM, UX, and frontend dev are over. I can use AI to come up with product ideas and have it help me design and implement a frontend. But backend code? It's here to stay. AI is so bad at actually implementing good, scalable, secure practices. All this AI slop people are generating is going to create a lot of problems.

Doctors:

Dude, a lot of people might be out of a job, including lawyers. But doctors? Here to stay; this AI gets so many things wrong and makes intern-level mistakes. AI is really hurting medicine right now. Now let me ask AI about this medical law real quick...

Lawyers:

Dude, AI is going to put everyone out of a job. It can code. It can diagnose medical issues. It can even invent new medications. There really isn't anything AI can't do. Well, except for law. LOL, it's so bad at knowing precedent and writing legal briefs. Lawyers might be the only job left after this AI boom.

Plumbers:

lol, we aren't going anywhere.

Wait, what!?

What's going on here? It seems like every single person feels comfortable in their own job but, in the same breath, thinks everyone else's job is at stake. It's as if every single knowledge worker is saying, "AI will replace everyone. Well, except for me, obviously."

Luckily, people have already done all the psychological analysis needed to help explain why this is happening. This effect is created by a mixture of two things:

  1. AI is average at everything. It turns out you are below average at most things in your life. So when a tool suddenly lets you perform at least at an average level, the result feels incredible. You are usually completely okay operating at an "average" level for things you don't consider a core competency.
  2. Gell-Mann Amnesia Effect

Gell-Mann Amnesia Effect

The Gell-Mann Amnesia effect is the effect where you discount the newspaper stories on topics you know well, but then somehow assume every other story is credible. It was first explained like this:

Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

— Michael Crichton, "Why Speculate?" (2002)

Or, put more briefly, here is Erwin Knoll's law of media accuracy:

"Everything you read in the newspapers is absolutely true except for the rare story of which you happen to have firsthand knowledge."

Both of these quotes make the same point: we can accurately notice that AI is only average when we have first-hand knowledge of the thing AI is doing. We can see the holes and the problems. In areas where average is not good enough for us, AI breaks down. But when we ourselves are below average, we get "amnesia" and somehow assume AI must be excellent at that thing.
Now, both of these quotes are about the media, but I think the principle applies here. If, for whatever reason, you are still hung up on that, here is another one for you:

Sokol's paradox

It is more difficult to know what one doesn't know than what one does.

What we are getting at here is that we tend to forget we lack knowledge and wisdom in areas where we have not put forth much effort. AI is significantly better at those things than we are. The combination of being really fast and more accurate than us certainly feels magical. But then we incorrectly extrapolate from that. Your ability to do a task jumped from a 2/10 to a 5/10 overnight. "AI did that in only two years! Extrapolating, surely it will get to 10/10 rapidly."

But when we ourselves are at an 8/10 and we see AI jump to a 5/10, we say, "Oh, that's nice... but 5->6 is very hard. 6->7, even harder. And 7->8? Well, that's a huge jump."

The correct way to extrapolate is to say, "Hmm, AI isn't great at this one thing I am great at. Perhaps it's not actually great at anything. Perhaps I am just terrible at the things I think AI is great at."
