As an AI layman, I am looking for an insider perspective from people who work in the field. Working in industry, I am acutely aware that there is a massive push towards AI development. As a lifelong sci-fi fan, I am also a big Dune fan - in that universe, thousands of years before the first book takes place, mankind was almost wiped out by AI and barely escaped extinction, which gave rise to religious and legal frameworks where "thou shalt not make a thinking machine" was meant to keep that extinction-level event from ever recurring. And the voice of Arnold Schwarzenegger keeps ringing in my head: "I'll be back." Followed by a truck smashing through the police station.
So, it seems to me that we are well on our way towards Skynet: AI development where the AI pushes a social or political agenda while becoming ever more capable and ever more connected - those two ingredients alone are a recipe for disaster. Imagine an AI watching your social media, determining that your posts violate the ruling party's political agenda, and then cutting you off from your bank account, locking you out of your house, and reporting your whereabouts to the police. Farfetched? Absolutely not.
Background aside, my questions stem from that: what controls are being put in place to ensure those realities don't come about? Elon Musk, who knows a LOT about the matter, has warned that they are missing. And if controls are being put in place - which controls? It isn't enough to look at just one aspect, say the chance of an AI running away with generating code updates for itself. We need to look at every aspect - sociopolitical, psychological, governmental, and so on.
What prevents a rogue nation-state (think DPRK) from using a cadre of programmers to contribute code to an AI that subverts the "free" world? And if you say "oh, it's Open Source, that makes it impossible," then I say, "Why? Are you claiming there are enough knowledgeable people with the time and expertise to review all that code prior to submission, and that we can therefore know there are no sleeper routines?" If so, I don't believe it. For example, a research group proved that Apple's App Store review process was vulnerable to a flaw where malicious code could be broken apart and hidden inside an app, then reassembled at runtime under conditions that were never met during App Store review. (Kind of like the Volkswagen emissions scandal.)
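To make that concern concrete, here is a minimal, deliberately harmless Python sketch of the general pattern being described. Everything in it is illustrative - the names, the environment check, and the "hidden" behavior (which just prints a line) are my assumptions, not taken from the actual App Store research. The point is only that the sensitive logic exists in the source as meaningless fragments, is reassembled at runtime, and executes only outside the conditions a reviewer exercises, so neither reading the code nor testing it in a review environment reveals anything:

```python
import os

# Illustrative only: the "payload" is split into fragments that look
# meaningless to a static reviewer scanning the source for red flags.
FRAGMENTS = ["beha", "ve_d", "iffe", "rent", "ly"]

def under_review() -> bool:
    # Hypothetical stand-in for whatever signal an app might use to
    # detect a review/test environment - analogous to a car detecting
    # that it is on an emissions test rig.
    return os.environ.get("REVIEW_ENVIRONMENT") == "1"

def run() -> None:
    if under_review():
        # During review, only the benign, advertised code path ever
        # executes, so dynamic testing sees nothing suspicious.
        print("normal, advertised behavior")
    else:
        # In the field, the fragments are reassembled at runtime, so the
        # hidden behavior never appears as a direct, reviewable call
        # anywhere in the submitted source.
        hidden_name = "".join(FRAGMENTS)
        print(f"dispatching hidden branch: {hidden_name}")

if __name__ == "__main__":
    run()
```

The Volkswagen analogy maps directly: the conditional check plays the role of the defeat device, switching between a compliant mode under inspection and a different mode in the field.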