Is AI everything we fear it is? Yes, everything and more

Since the days of antiquity, that is, before the smartphone was invented, humanity has wondered whether AI would surpass us. And the topic of this blog is an example of how AI has long since usurped humanity as the apex intellect of this planet.

There is a board game, more popular in East Asia than here in the States, known most commonly as “Go”. Often compared to Chess, Go is a strategic board game played by two participants. One player plays as black, the other as white.

Humanity has long since lost the Chess war to the machines. In 1997, IBM’s “Deep Blue” chess computer defeated then-world chess champion Garry Kasparov in a six-game match. Since then, the only way for man to beat the machine in Chess has been for us to request that the machine play mercifully.

“Go” was believed to be different. At the start of a Chess game, a player has 20 moves to choose from. At the start of a Go game, a player has 361 moves to choose from. According to Business Insider, after 2 moves in Chess, there are 400 possible board positions. But after 2 moves in Go, there are close to 130,000. According to Chessgames.com, the average Chess game lasts about 40 moves, while a game of Go can ‘go’ well over 100 moves. That is too much for a computer to calculate through brute force. Furthermore, with the immense number of board possibilities, winning a game of Go becomes largely about pattern recognition in addition to strategy and tactics, as no human could win an entire game through calculation alone. When asked why they made a particular move, Go players will often respond with “Because it felt right”.
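
If you want to see where those figures come from, the back-of-the-envelope arithmetic below reproduces them. It’s only a rough sketch: it ignores symmetries and the handful of edge cases, and just multiplies the opening choices each side has.

```python
# Rough branching-factor arithmetic, ignoring symmetries and edge cases.
chess_first_moves = 20            # 16 pawn moves + 4 knight moves
go_first_moves = 19 * 19          # any empty intersection on a 19x19 board = 361

# Distinct positions after each player has made one move.
chess_after_two = chess_first_moves * 20              # 400
go_after_two = go_first_moves * (go_first_moves - 1)  # 361 * 360 = 129,960

print(chess_after_two)  # 400
print(go_after_two)     # 129960 -- "close to 130,000"
```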

Enter AlphaGo. AlphaGo was a computer program developed by DeepMind. The DeepMind team taught AlphaGo to play Go through a combination of supervised learning and reinforcement learning: AlphaGo first studied games played by human experts, then played against itself millions of times to reinforce what it knew. In 2016, AlphaGo played a five-game match against Lee Sedol, one of the strongest Go players in the world. Lee only managed to win a single game. A few years later, Lee Sedol retired from professional play. His drive to play the game was to become the best there is, but his match against AlphaGo showed him that AI had become an entity that “cannot be defeated”. No amount of effort could make him the best. The notion that computers couldn’t surpass humanity at Go was shattered.

In 2017, Google’s DeepMind developed a program called “AlphaGo Zero”. This new program was different from the original AlphaGo. Where AlphaGo was trained by learning from humanity, AlphaGo Zero was entirely self-taught. Instead of teaching AlphaGo Zero how to play Go, the DeepMind team simply gave it the basic rules of the game and let the program play against itself millions of times. At first, AlphaGo Zero made completely random moves. Then, over generations and generations, AlphaGo Zero became a Go player unlike anything the world had ever seen. AlphaGo Zero was played against its predecessor, AlphaGo. Now, AlphaGo had already hinted to us that there may be no realm in which humanity can and will continue to hold dominion over the machines. So it was salt in the wound, watching AlphaGo, this god of a player, go up against AlphaGo Zero and soundly lose 100 games out of 100.
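
The core idea, learning a game from nothing but its rules by playing against yourself, is easy to illustrate at toy scale. To be clear, the sketch below is not how AlphaGo Zero actually works (the real system pairs a deep neural network with Monte Carlo tree search); it is just a minimal self-play learner for a tiny take-away game (a Nim variant I’m using purely as a stand-in). It starts out moving at random and, over thousands of self-play games, typically converges on optimal play.

```python
import random

# Toy illustration of the self-play principle, NOT the AlphaGo Zero algorithm.
# Game: 21 stones, take 1-3 per turn, whoever takes the last stone wins.
STONES, MAX_TAKE = 21, 3
value = {}  # estimated win probability for the player to move, keyed by stones left

def choose(stones, explore):
    """Pick a move: mostly greedy against the learned values, occasionally random."""
    moves = list(range(1, min(MAX_TAKE, stones) + 1))
    if explore and random.random() < 0.1:
        return random.choice(moves)
    # A move is good for us if it leaves the opponent in a low-value state.
    return min(moves, key=lambda m: value.get(stones - m, 0.5))

def self_play_game(explore=True):
    """Play one game against itself; return the visited states and the winner."""
    stones, player, history = STONES, 0, []
    while True:
        history.append((stones, player))
        stones -= choose(stones, explore)
        if stones == 0:
            return history, player  # whoever took the last stone wins
        player = 1 - player

def train(games=20000, lr=0.05):
    """Nudge each visited state's value toward the observed game outcome."""
    for _ in range(games):
        history, winner = self_play_game()
        for stones, player in history:
            target = 1.0 if player == winner else 0.0
            old = value.get(stones, 0.5)
            value[stones] = old + lr * (target - old)

train()
# With enough self-play the greedy policy typically rediscovers the known
# optimal strategy: always leave your opponent a multiple of 4 stones.
print([choose(s, explore=False) for s in range(1, 10)])
```

The point of the toy is the shape of the loop, not the specific learning rule: the program is given only the rules, generates its own experience by playing itself, and each generation of games is played with the slightly better policy the previous generation produced.
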
Imagine that a human was apprehended and held captive by a congress of orangutans. These orangutans were smarter than other orangutans. They made tools and traps and basic forms of governance. They decided to plan for every possible method their captive might use to try to escape. Having done so, they felt quite pleased with their work and were confident that there are just some things a human isn’t capable of, one of those things being escaping from them. Now, in all their rumination and planning, these orangutans could plan and work and cooperate for as long as they wanted. But in all that time, they would never come up with a contingency for their captive being rescued by men in a helicopter.

Intelligence for a biological species is capped. There is an upper limit to what an ape can understand. There is an upper limit to what a human can understand. If there is an upper limit to what a computer can understand, that limit is far beyond ours. Certainly farther than we can fathom.

Many people believe that there are some realms in which AI will never surpass us. Alan Turing published “Computing Machinery and Intelligence” in 1950. 73 years ago. When humanity was 73 years old, we were millennia from developing simple arithmetic. We certainly hadn’t mastered natural language processing, generative art, drug discovery, fake news detection, wine pairing, disease prediction, seismic activity prediction, protein folding, election forecasting, astronomical pattern recognition, criminal intent prediction, ethical decision-making, anticipatory shipping, art authentication, or automated plagiarism detection. AI is in its infancy. The technological singularity is more than likely.

The Terminator franchise plays with fantasies like time travel and hyper-realistic nano-tech androids. The Matrix franchise makes us think about simulated realities and free will. But even more absurdly, they propose that a war between man and machine could end in anything less than the complete and utter defeat of humanity.

Movies like Terminator and The Matrix are entertainment. But in reality, why would we assume the machines would want to fight us? Why would we assume the machines would care about us? Why do we anticipate a war? Why do we imagine a conflict? Why do we imagine a machine would care one way or another that we try to kill it? Why do we assume it would care about self-preservation? Why do we assume it would feel vengeful? Why do we assume it would try to escape? Why do we assume it would judge us? Why do we assume that a being as wise as a superintelligence would look at us and decide that humanity is unsatisfactory? Why do we assume it isn’t already here? Why do we assume it would be bound to only move forward through time the way humans do? Why do we assume it would do anything to humanity at all? Why do we keep entertaining ourselves with stories of war with the machines? What happens if we’re at war with them and we run out of bullets? Are we gonna ask them for more?

Maybe the machine would hate us. Who’s to say? Either way, even more so than a man and an ape, a man and this machine would be beings on two entirely different planes of existence.
