DEV Community

Why are you NOT worried about the robot apocalypse?

Ben Halpern on March 09, 2018

The robots are coming and they will enslave us! Or not. Why are you not all that concerned?

Ryan Palo • Edited

Robot slipping on banana peel

Martin Himmel

I can't stop watching this. 😂

Juan De los santos • Edited

The battle has begun... XD

Dian Fay

Because there's no reason to prefer a human labor pool over more robots. The next question of course is whether they'd simply kill us; but while it's easy enough to see a lack of concern for individual humans (witness the ongoing discussions over driverless cars), it's difficult to come up with a compelling reason for a campaign of extermination that doesn't presuppose a capacity for abstract thought.

The most likely scenario as I see it is that automation will continue to put people out of work, since robots are better than humans in every way at physical labor, and programs are faster and more reliable than we are at calculations and algorithms. Under the present capitalist paradigm this means more and more un- and underemployment and a concentration of wealth in an ever-smaller group. As of 2016 the richest ten billionaires combined owned half a trillion US dollars. If you ranked that figure on a chart of national GDPs for that year, those ten people would land in the low 20s. Imagine what you could do with the labor power represented by $505 billion in a society that put it to work!

The problem with automation isn't the robots.

Andy Zhao (he/him)

Because there's no reason to prefer a human labor pool over more robots.

Never thought of it that way; great point.

I guess "inequality apocalypse" isn't as catchy huh?

Dian Fay • Edited

The big question to me is whether the development of surveillance, policing, and military technologies (all inextricably related) will outpace the capabilities of a good old-fashioned angry mob with pitchforks and torches. The super-rich have been asking themselves this too: check out this article in The Guardian from a year ago about Silicon Valley royalty buying "just in case" property in New Zealand. They aren't thinking about robots or zombies or a titanic wave of molten metal covering the earth; they're worried about being called to account for a level of wealth that can only be termed obscene in the face of any amount of human suffering or privation.

Yechiel Kalmenson

Because the more intelligent AI becomes, the more it will be plagued by the same bugs that plague the rest of us:

xkcd skynet

Ross Henderson

To be honest, technology works roughly 67% of the time at the moment. So if we continue developing to our current standards, I think the Robot Apocalypse would fizzle out after a week or so.

aurel kurtula • Edited

I feel like there are two scenarios:

Robots will take our jobs but work for the Bills and Larrys (for the 1%), and the rest of us will suffer.

Kind of how the poor suffer because our politics doesn't support them. Talk about basic income and listen to all the objections. If the Gateses and Pages of the world control the production of these robots, then we'd be unemployed and blamed through no fault of our own - again, similar to how we blame poor people.

Robots will take our jobs but work for us!

There is a beautiful science fiction story, "The Lifecycle of Software Objects" by Ted Chiang (fantastic writer), that amongst other things deals with our moral obligation towards AI. If by robots we mean Ex Machina style, then "robots working for us" is slavery. But when I think of robots, I'm thinking of automated forklifts and pencil sharpeners on steroids - performing the important jobs of pencil pushers. True "can openers" with no feelings and without all the good stuff that makes us us.

These robots would do our jobs and we would reap the benefits!

Someone said something like "if you take jobs away from people, you'd be taking away what makes them human". It doesn't make sense to me, but of course there are many who might believe their 9-to-5 gives them meaning. I've heard one or two philosophers/psychologists say the same thing.

Not to be an ass about it, but I do not agree! I think that because the 9-to-5 is what most of us have to do, we've come to believe that we need to do it.

Think about it!

If no one had to work ... we'd truly flourish!

As for them enslaving us, it could happen if only the 1% had a say in what kind of robots are created. The 1% of tomorrow might be blinded by their ego and start developing robots for some sort of final solution!

But I truly believe we are becoming better and better human beings, and so we'll mostly use this for the good of humanity - with a little bit of mischief on the side, to make the likes of Trump feel ... important. He will not find a robot to do what he truly wants, but he will find robots that can quickly build walls! I can imagine two robots in a loop: one builds and one destroys.


We are the creators in this scenario, and we should really learn from the mistakes of the master! She (why not) created humans, and some of them (yours truly) turned atheist. We don't want our creatures to have that bug in them :)

Dian Fay

Futurists were predicting shorter and shorter workweeks all through the 20th century. Instead we've kept working the same hours while technology has enabled ever-increasing productivity. Wages (in the US) have basically stagnated since the 1970s, and our legal minimum wage isn't even enough to live on -- companies like Wal-Mart and McDonald's are effectively subsidized when their employees have to go on welfare just to survive.

We were all supposed to have flying cars and robot butlers by now!

rhymes • Edited

That's because we should have built robots to manage the 1%'s greed :-D

aurel kurtula

That's true.

Take that as my own wishful thinking

rhymes

I think the fact that we all assume the first thing AI robots will do is terminate us all says way more about us as a species than about the eventual AI.

Maciej Cegłowski's talk Superintelligence: The Idea That Eats Smart People is very interesting and on point about superintelligence and the human intelligence bias.

p-mcgowan

I tried to wipe the hair off that "L" about 4 times. More robots would not be a bad thing...

Max Antonucci • Edited

When I die, I at least want it to be in a way that's awesome and inspiring, usually through sheer horror or cool factor. Dying in a robot apocalypse is beaten only by being zombie patient zero or coding a large-scale project with only inline CSS.

MN Mark

We don't understand what consciousness really is, how it works, or why we have it. We couldn't program a computer to do something we don't understand, and without self-aware consciousness, it's just a machine.

Erin Moore

Because it's just a regular apocalypse with extra steps. The simpler ones are much more likely to happen first.

Donald Merand

Maciej Ceglowski presented a very nice counter-argument to rampant AI/robots in his talk Superintelligence: The Idea That Eats Smart People.

edA‑qa mort‑ora‑y

Because I'm a robot.

Ben Halpern

ryan lee martin

I think the current pop culture understanding of AI is a gross misunderstanding built on a mythology: one created by science fiction, and by marketing that is more than happy to make people believe AI is smarter than it is.

Cognitive thinking in a machine does not exist. We still don't even know where to start; AI has been a complete failure in that way. The kind of AI that has become so pervasive works so well because it is fed massive amounts of data which, in spite of its relative size, represents a very narrow and specific subset of reality to accommodate a dumb machine.

I'd be more worried about a future where environmentally aware machines are controlled and weaponized by humans as tools for enslavement or war. An army of autonomous, non-thinking robots doesn't need to be sentient to inflict mass death or serve as a tool of enslavement.

Damien Cosset

So you're telling me that we will be smart enough to make robots that have the power to enslave the entire human race, but dumb enough not to give those robots the ability to make the right choices?

GnosticMike

One short answer: we will enhance ourselves before robots can enhance themselves, whether through new skill sets, new jobs, new ways of earning a living, or new medical advances. However, I cannot wait till the day the robots overtake us. That will be the day we can truly be free of our biological constraints. We are already slaves to our own biology. Why not free us from that type of bondage?

Stanislav(Stas) Katkov

Not concerned.

It's been more than 20 years of computer evolution and it's still a pain to get a printer working on Linux. What other operating system could robots use internally? Windows? OSX? I hope you're joking. :-)

Meghan (she/her) • Edited

youtube.com/watch?v=7Pq-S557XQU
I used to be all aboard the train with CGP Grey, believing that AI and ML meant the end of humanity as we know it.

youtube.com/watch?v=TUmyygCMMGA
Then I saw this great video by Vox showing that just because it's the end of the world doesn't mean "it's the end of the world", and that we will eventually grow past it.

For example, robots are really bad at recognizing new patterns and being creative. While we might be able to make an algorithm complex enough to teach itself to drive a car, it wouldn't be able to fly a plane. Yes, we have autopilot, but that's because flying is actually the easier of the two tasks. Go figure. Long story short, modern-day algorithms continue to amaze us with what they can do, just as technology always has, and what we should really be worried about is "general AI", which is quite a ways off. And Elon Musk is working on that too.

Andy Zhao (he/him)

You probably knew this since you're a pro, but we have YouTube liquid tags if you want to use them:

{% youtube TUmyygCMMGA %}
Meghan (she/her)

Thanks! I didn't remember the exact tag, and also didn't want to make the comment too big :)

Dustin King • Edited
  1. While machine learning has advanced a lot in the last decade, that's only one part of AI. I haven't heard anything to make me think artificial general intelligence, or AGI, is near.
  2. Corporations are arguably already a form of AI. It's not perfect, but it's no apocalypse.
  3. Robots would more likely want to exterminate us than enslave us.
  4. Enslavement by robots might not be all that bad, as manual labor and suffering are not likely to be their end goals.
  5. AI is created for human goals, by humans. As long as we can keep this from creating more poverty, I think we'll be fine.
  6. In a sense we are artificial intelligence. We're a neural net generated by a genetic algorithm. In the process of re-creating that, we'll learn a lot about ourselves and get to witness the birth either of an exciting new life form, or of the next stage of our own evolution. So that excitement outweighs the fear that it will destroy us, even though it might.
Moe

I'm not prepared for robots taking my job!
Probability of automation infographic

Ben Halpern

When people say that a computer will take a computer programmer's job, they must be talking about a programmer's current job. I've always found that to be a bit silly, because I hope that a lot of my job can be automated so I can keep evolving what I do without having to worry about those parts.

Patrick God

Well said! I can't really imagine the job of a programmer ever going extinct. There are definitely tasks that will be done by AI, but just like the AI, we as developers also have to evolve - and do different tasks in the future. It's all about lifelong learning.

wiz • Edited

Okay, so we see humans killing humans, and we're designing humanoids - human-like robots. xD

Whether we worry or not, AI will trend.
Also, I think I am always responsible for the bugs and issues in my programs.
So, in case of an apocalypse, it will be on us for our shitty programming skills. And I don't know why people forget that if we Homo sapiens can design the robots and provide them with cognitive abilities, then why wouldn't we have the power to shut them down, blow them away, and turn them back into metal scrap?
You know, we program them, not the other way around. So don't worry, unless you can program.

Alex Lohr

We have been killing our own kind for centuries now and have gotten really good at it.

I'm not afraid of robots or AI, because all my fear is already taken by human monsters.

Alain Van Hout

Robot apocalypse? Are we automating that too?

Humans are getting really lazy ...

John Paul Ada

We're all gonna die anyway?

Lovis • Edited

Robots aren't real! Like, you only ever see them on the internet or on television. My mom told me television isn't real.
So robots aren't real either.▪️

MikhailShel • Edited

I think even before robots, aliens will show up and turn us into bio batteries :D