DEV Community

Tharun Shiv

Posted on • Updated on

AI is a threat! Really?

Artificial Intelligence: intelligence exhibited by something other than a living being that is equal to or greater than that of living beings.

  1. Is AI a threat?
  2. Is AI over-hyped?
  3. Can AI be learned by all?
  4. Does the industry have unrealistic expectations from it?

If you want a one-line, straightforward answer:

AI is not a threat at all, at least for another five decades or more.

And yeah, if you are here trying to use science-fiction movies to justify your answer, please don't.

To stay on point, let us look at what AI has achieved so far:

  1. Thanks to the availability of powerful GPUs, we can train models on huge datasets and teach them to detect suspicious activity, whether in video surveillance, credit card fraud detection, military applications, or many other areas.

  2. It is also being used to automatically verify documents for bank account creation, loan approvals, and more.

  3. What about the chatbots you have come across?

  4. More accurate appropriation of company budgets: whether a company is drawing up its marketing budget or working out how much to spend on customer-specific research, AI helps it arrive at a more accurate estimate of its costs and resource allocations. This is crucial to the operational efficiency and profitability of fintech players. Some customer-facing processes, such as documentation and verification, demand a lot of cognitive work, which makes them time-consuming and error-prone. AI automates these processes by learning from previously collected data, and produces remarkable results.

  5. Movie scripting and music generation: Yup, you heard that right. Didn't expect that, did you? AI can be trained on scripts and stories and then used to generate new ones. One famous example is the script 'Sunspring'; check it out. And yeah, it can be used to mimic the works of Beethoven too.

  6. Medicine and biotechnology: Yes, AI models are used to simulate the effects of combining various chemicals, which in turn can be used to cure diseases. A feat that would take humans decades can be predicted within minutes by a trained model.

These are just a few examples of what AI can do.
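To make the first point above concrete, here is a minimal sketch of the idea behind training a model to flag suspicious activity: a tiny logistic-regression classifier trained by gradient descent on entirely made-up "transaction" features (a normalized amount and an odd-hour flag — both invented for illustration, not from any real fraud system). Real fraud detection uses far richer features and larger models, but the learn-from-labeled-data loop is the same.

```python
import math

# Synthetic toy data: [amount_normalized, odd_hour_flag], label 1 = fraud.
# These feature names and values are illustrative assumptions only.
data = [
    ([0.10, 0.0], 0),
    ([0.20, 0.0], 0),
    ([0.15, 1.0], 0),
    ([0.90, 1.0], 1),
    ([0.95, 1.0], 1),
    ([0.85, 0.0], 1),
]

def sigmoid(z):
    """Squash a raw score into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, lr=0.5, epochs=2000):
    """Fit logistic-regression weights by stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            err = p - y  # gradient of the log-loss w.r.t. the raw score
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Flag a transaction as suspicious if P(fraud) >= 0.5."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b) >= 0.5

w, b = train(data)
# After training, all six toy transactions should be classified correctly.
print([predict(w, b, x) for x, _ in data])
```

The toy data is linearly separable on the amount feature, so a few thousand passes are plenty; in practice one would reach for a library such as scikit-learn rather than hand-rolling the update loop.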

Read Part 2 here: https://www.tharunshiv.com/ai-is-a-threat/

Top comments (14)

Warisul Imam • Edited

Most of the people who fear AI taking over, like in Terminator and stuff like that, usually have little or no knowledge of machine learning or neural networks.
Us coders usually feel safe cuz we pretty much understand how it works and what drives it. I feel bad for some people who might have nightmares about this 😄.

Nice article, btw...

Peter Harrison

I've been following artificial intelligence since I was a boy, in about 1985. I'm quite well informed about classic AI approaches and the current crop of deep learning systems. Prior to 2012, neural networks had fallen out of favour; in fact, I don't know of any neural network technology used for real-world applications before 2012. Deep learning (deep neural networks), big data, and GPUs converging have made it possible to build and deploy commercially successful AI.

But it isn't programming; rather, it is cleaning and piping data into a predictive neural network, which usually classifies. The surprising thing is that this is astonishingly effective at solving problems, such as visual recognition, that are impossible for programmed solutions.

Look at how fast the barriers fell and how quickly applications have been developed to leverage this kind of capability. Further, we got something fundamentally right with neural networks. It wasn't just a faster processor or larger RAM; there was a fundamental shift in capability from using existing technology in a new way, orthogonal to traditional software development.

But we have not arrived at human-level intelligence yet. How far away is it? Well, in 1982 I got my first PC, a ZX81 with 1K of RAM and a clock speed in the kHz range. That was about 40 years ago, before the Internet was available to me, before home computers had hard drives. I want to emphasise this difference because if you think we will achieve less in the next 50 years than we have in the last 40, I have news for you.

The barrier to human-level intelligence isn't hardware. There are a few key problems right now, but if they are solved I would expect human-level intellect to become pervasive almost overnight, just as voice recognition has in smartphones. Even without such advances, AI is already presenting harms. We have already surrendered to Facebook and YouTube: machines already control the information we receive and who we can communicate with. AI is now deeply entrenched in these systems. And it is only going to get smarter.

Sweet dreams.

Tharun Shiv

Great answer, thanks for sharing your knowledge on AI. True.

Warisul Imam • Edited

Hey, I didn't mean to be offensive or anything 😅. I think I owe a correction. We don't "know exactly how it works", but at least we understand what drives it. I meant that we have to provide it with material to train from before it becomes 'intelligent'.
What I tried to say about non-coders is that some of them over-estimate it, and that the phrase "taking over" doesn't really fit here. But the people who build a neural network, who code the algorithms, know to a pretty good extent what their AI can and cannot do.
Try checking out this documentary to clarify what I'm talking about: youtu.be/WXuK6gekU1Y . It's about an AI that beat the best human player of a game called "Go". See what the team members have to say when the press becomes concerned about the AI after the human player loses three consecutive games.

Once again, I deeply apologize if I offended anyone 🙏🙏🙏. I hope I cleared up the misunderstanding.
Happy Coding!

Ghost • Edited

Reducing the possible dangers of AI to a Terminator scenario is, to me, itself reductive and ill-informed. And no, we don't understand exactly how it works; that's the point. AI's purpose is to "make the algorithm" on its own; if we understood exactly how it works, we wouldn't need the AI in the first place. We have an idea of how to feed it and what data to train it on, but the resulting algorithm is not understood. You could reverse-engineer it based on the results, but that is not necessary. (more in my other comment)

JoelBonetR 🥇

Well, if you understand everything about deep learning, neural networks, etc., please explain it to me... or better, write a paper; you'll be somewhat famous, as there are parts of the "AI world" where even experts don't know exactly why it works, or why it works the way it does 😄.

Ghost • Edited

I see a lot of possible threats, the Terminator scenario being the least likely, no more probable than a zombie apocalypse. On the other hand, there are plenty of possible negative scenarios. In fact, we have already seen automated AI traders make messes in the markets. We may lose video and audio as reliable evidence, and picture the misinformation when you can't rely even on video or audio; the news is already unreliable. You can always have footage checked by an adversarial AI, but it is an arms race, and whoever has more resources, i.e. the better AI, will own the truth.

AI in science research is also something to keep an eye on. So far we have had science trying to understand and engineering putting that understanding to practical use; with AI the thing shifts, and we get results with no idea why or how. I can see a future where scientists try to reverse-engineer AI results to understand how on earth it got some result. That would be a huge loss of control on our part as a species.

AI already has control over us, when it is used to increase viewing time, shape buying decisions, etc. You don't need robots with guns to lose control; that's the childish version of the idea. You might train an AI to increase sales of product A, and maybe the strategy involves getting the user depressed; we might not even know it, and the devs might not even have planned it.

AI also gives us tools otherwise unavailable, to "make decisions" over amounts of data that no human could correlate. It will, if it isn't already, be used to calculate "social scores", threat assessments, and the economic and health risk of all of us, and we will have no idea how those scores were made. The purpose of AI is to delegate the algorithms to the machine so we don't have to write them; we could try to understand them afterwards, but let's face it, we won't.

An AI may be "instructed" to increase profit and end up doing so by crashing, or at least damaging, the market, although we humans do a fine job on that front XD

There are a lot of possible negative scenarios. Being an alarmist and assuming the Matrix is coming is as foolish and childish as saying there is no problem at all and discarding any apprehension.

And by the way, you don't need strong AI to make a mess; you don't need an AI that "wants" to screw everything up to actually do it. The Microsoft one didn't "want" to be a psychopathic racist, it just ended up like that. Imagine that one rating credit scores in the background instead of embarrassing MS on Twitter.

FJones • Edited

I think it's quite important to make a distinction here:

On the one hand, we have traditional machine learning algorithms, which essentially just provide us with a sophisticated black box of input and output. One that may run on some form of reinforcement learning, but is still just a computation. Here, the threats can generally be categorized as unintended output or side effects, driven by the rights and responsibilities given to our AI. Without any malicious "intent", given enough responsibility, such an AI may well make human-error-scale mistakes, which we already know can cause catastrophes. An ML-run nuclear plant may pose less risk than a human-run one, or more, in the case of an incorrect assessment. A difficult balance. Similarly, a complex computer-generated algorithm may produce poorly debuggable errors that could cascade. These are the "Oops, the Twitter bot looked at all the adult content on the Internet" and "the algorithm decided to reject your loan application" scenarios.

On the other hand, though, we have the Terminator scenario. This is driven by what is still generally assumed to be a target for AI research: Artificial Sentience. Now, we can debate (and have been) for years as to the achievability and ethics therein, but the point to note here is that if it is achieved, sentient AI is no less dangerous than any other sentient being - including humans. We know well enough the tragedies wrought by man, so naturally this is a potential consequence of sentience. What makes it a substantial threat is that we assume - rightly so - that the cognitive capacity of such an AI may exceed that of humanity, and thus mankind would be powerless against it, should we find ourselves on the pointy end of the stick. Is that likely? No, for various reasons. Is it concerning, considering the pace at which even traditional Machine Learning is eclipsing our ability to understand the results? Absolutely.

AI isn't an imminent threat, where we could draw a straight line from image classification to cataclysm, but it is certainly a subject we need to treat with caution.

Ben Calder

I'm not overly concerned about the Terminator-type scenario, but I don't think it's wrong to raise concerns about AI. There are already weaponised drones in military use. How long before someone decides to run those with AI? Then you can run into ED-209-type scenarios (see RoboCop, 1987), where bad AI leads to unexpected but predictable and undesirable results. Whilst that doesn't threaten the future of mankind, it could still lead to people getting killed.

It's naive and dangerous to underestimate just how badly wrong 'AI' can go, especially if you're in the business of working with it. In fact, if you work in the field of AI, it is your responsibility to be aware of and mitigate known issues, e.g. inherently biased datasets that lead to racist or misogynistic outcomes, of which there are already many examples.

Jaya chandrika reddy

Movies made us fear AI 😂. Though there is a chance of it coming true, it is still pretty far in the future. We need to take advantage of it to save lives, improve the ecosystem, and make the world a better place. Thank you, Mr. Tharun, for pointing this out and summarizing it. Great work!

Tharun Shiv

Yes ma'am, thank you

Parul

AI is the future ⭐

venkat anirudh

Yes, I don't feel it is much of a threat yet. We can definitely use it to our benefit.

Tharun Shiv

Yes, you're right. Thanks. 😊