Beekey Cheung

Posted on • Originally published at blog.professorbeekums.com

The Wonder of Human Intelligence

There’s a lot of fear about AI replacing us humans, much of it around jobs specifically. History is filled with examples of jobs lost to automation. While there was some suffering, society overall improved. Would you prefer to live in a world where a human has to operate an elevator for you? How many of you remember the long lines in the cash lanes at toll booths? Roads will be safer when drivers no longer get drunk or sleepy.

The advance of automation has freed humans to do more interesting work. But what happens if AI starts coming for jobs that aren’t repetitive? What if it starts putting writers, programmers, designers, musicians, sales folks, doctors, physicists, and pretty much everyone else out of a job? Advancements seem to come rapidly. When AI conquered chess, everyone thought, “but Go is so much harder!” It was. Then AI conquered that too. Are we all about to be made obsolete in the coming decades? Is Artificial General Intelligence around the corner?

I’m not a neuroscientist and my specialty isn’t in AI, but I think it’s going to take a lot longer than anyone thinks. The challenge is in creativity.

In “Deep Thinking”, Garry Kasparov tells a story about Mikhail Tal spending 40 minutes thinking over a single chess move. It ended up being a great move, and the papers wrote about how he had been accurately calculating the move the entire time. In reality, Tal was thinking about a hippo stuck in a marsh and how it could be dragged out. He realized he couldn’t accurately calculate the move and made an intuitive one instead.

Creativity is required for these sudden bouts of inspiration. The type of inspiration that causes a brilliant doctor to check something no one else thought of. The inspiration that causes a programmer to create a new software pattern. The inspiration that causes a musician to try a different note combination.

This creativity usually involves some level of randomness. That’s where AI gets into a bit of trouble. Yes, genetic algorithms and other techniques introduce randomness to simulate discovery. The problem is the nature of that randomness.
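
To make this concrete, here is a minimal genetic-algorithm sketch in Python, with a made-up bitstring “genome” and fitness function used purely for illustration. Notice where the randomness lives: in flipping individual bits and splicing genomes at arbitrary points, not in recombining ideas.

```python
import random

GENOME_LEN = 32

def mutate(genome, rate=0.02):
    """Flip each bit independently with probability `rate`."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def crossover(a, b):
    """Splice two parent genomes at a random point."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

# Toy fitness: how many bits match an arbitrary target string.
target = [random.randint(0, 1) for _ in range(GENOME_LEN)]
fitness = lambda g: sum(x == y for x, y in zip(g, target))

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break
    parents = population[:10]  # keep the fittest, discard the rest
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(40)]

print(f"solved in {generation} generations")
```

The algorithm “discovers” the target, but only by perturbing raw bits inside a search space a human already defined. Nothing in it can propose a new kind of move, only a new arrangement of the old ones.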

A couple of weeks ago I had a dream where I was on my way to get hot pot when I was framed for a crime I didn’t commit. The rest of the dream was me trying to fight my way to dinner. It sounds silly and random, but the dream has some coherent concepts:

  • Trying to get hot pot for dinner
  • Being framed for a crime

Humans think in terms of high-level concepts. When you think of a meal, you think of the dishes. You don’t think in terms of the molecules or atoms that those dishes are composed of.

If you were given a portrait of a person and told to think of some random changes, some things that may come to mind are:

  • Adding a jester’s cap
  • Turning the person into an elephant
  • Adding a creepy ghost behind the person

AI doesn’t think this way. AI works with bits. Instead of adding a jester’s cap or turning the person into an elephant, AI would randomize the pixels in the portrait. Thinking this way isn’t necessarily bad. It’s perfect for certain things like encryption or doing a brute-force search on a finite decision tree (essentially how Deep Blue worked). It doesn’t matter that 95% of possible chess moves would be horrendous and wouldn’t even be considered by a grandmaster. AI can simply analyze them all anyway because it can run brute-force calculations far faster than a human.
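
As a sketch of that brute-force style, here is a toy exhaustive minimax in Python over a hand-built tree. It is a drastic simplification of the alpha-beta search Deep Blue actually ran (no pruning, no evaluation function, no chess), but the spirit is the same: visit every branch, good or horrendous.

```python
# Leaves are position scores; internal nodes are lists of child positions.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: nothing left to search
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A made-up 3-ply game tree. Every branch gets evaluated, including the
# ones a grandmaster would dismiss at a glance.
tree = [
    [3, [5, -2]],
    [[-4, 0], 7],
    [1, [2, [8, -6]]],
]
print(minimax(tree))  # best achievable score if both sides play optimally
```

The search is exhaustive and correct within the tree it is given, but it can never add a branch the tree doesn’t already contain.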

But this way of thinking is limited to analyzing how things currently are. Given a set of rules we’ve already thought of or data we’ve already generated, AI can go through all the permutations to look for some end state that we’ve already considered.

In a pre-vaccine world, AI would be great at analyzing symptoms, diagnosing the disease, and attempting to mitigate those symptoms. AI would not have thought of the idea of a vaccine.

In a pre-automobile world, AI would be great at looking at all the inefficiencies in various buggy designs and optimizing horse harnesses to improve performance. AI would not have thought of or created the internal combustion engine.

AI is wonderful at telling us how the world is. It does a poor job of telling us what the world could be.

And yes, AI research does work with high-level concepts. But there’s a huge difference between being able to define a concept and understanding what it is. My dream about being framed on my way to dinner is probably useless. Thinking of a hippo stuck in a marsh seems to have helped Mikhail Tal find a winning chess move. There’s a fine line between genius and madness, and AI is nowhere close to being able to determine which side of that line an idea falls on.

That leads to the biggest roadblock in creating AI that can fully replace humans. We humans are building AI, and even we don’t fully understand how the human mind works. How does the thought of a hippo help with chess? Did my dream actually help me with something else that day? The usual advice when stuck on a thought problem is to step away from it and do something else, probably unrelated. How does that help? How can we create AI to simulate the human mind when we don’t know all the pieces that make the human mind work?

Maybe I’m wrong. Maybe someone will make an AI breakthrough soon, paving the way for Artificial General Intelligence. Maybe all humans will be obsolete soon.

Or maybe the human mind is a lot more incredible than we give it credit for.

Top comments (1)

Beekey Cheung

There's a difference between the technical definition of AI and the commonly used one. Systems like Deep Blue and AlphaGo are so commonly called AI that a new term, Artificial General Intelligence, had to be coined to describe true intelligence.

I'm trying to think of other examples of cases where this has happened, but my mind is blanking (except for maybe whether tomatoes are a fruit or a vegetable).