DEV Community

Shriya Singh
You scream! I scream and everyone else screams… not for ice cream, but for “Artificial Intelligence”.

Don’t look at me like that, it wasn’t that lame of a joke! Okay, maybe it was… a little itty-bitty tiny bit lame. But can you blame me? I just wanted to catch the interest of my fellow netizens and their notoriously short attention spans of around thirty seconds. Don’t believe me? You can always refer to the major studies conducted by Oxford, King's College London, Harvard and Western Sydney University; they’d tell you the same. But we are straying from the topic! I am here to discuss Artificial Intelligence, or should we say, Intelligent Information Retrieval and Machine Learning; both are very hot, hot buzzwords in the world currently, right up there with their cousin, Mr. Data Science. Every tech geek or wannabe tech geek knows them, or is at least familiar with them. But are they really? As you might know, our world is not all sunshine and roses, so there exist many agencies and entities committed to spreading falsified information or half-truths in the media and online in general, because doing so is profitable for their businesses and established markets. As a result, noobs and beginners who wish to learn how JARVIS can talk to Tony Stark often fall prey to incorrect and distorted facts about this field. Key word being “often”, not all the time, mind you. I’ll try to give as clear-cut a guide to the world of AI as possible. Before I get into that, remember: as clever as artificial intelligence may seem, it does ‘not’ duplicate the human brain, learn entirely on its own, or operate free from human bias. There is a difference between movies and reality. That is not to say there aren’t any remote possibilities; nobody has seen the future, after all.

The world of “Artificial Intelligence”, and the term itself, was originally born because many scientists were curious about one question: “Can computers think on their own?” Was it possible to make computers do things by themselves? Many other scientific discoveries and inventions followed from this question, as it set fire to the depths of curiosity of enigmatic minds all across the globe.
Artificial Intelligence is a huge hype nowadays, and it’s funny, because not a whole lot of people know what it actually is; many will insist that what they’ve built is not AI, when in reality it is. The formal definition of AI reads: “the effort to automate intellectual tasks normally performed by humans”. That’s a fairly vague definition. What counts as an ‘intellectual’ task here? Well, back when AI was just a pre-defined set of rules, projects built on it looked like an AI for a chess game, or maybe a tic-tac-toe game. All these projects had in common was a pre-defined set of rules that humans had designed and coded into the computer system. The computer would simply follow those coded instructions and execute the commands. There was no deep learning, machine learning or crazy algorithms to make things troublesome. If you wanted your computer to do something, you had to tell it beforehand: if you are in ‘this’ position and ‘that’ happens, do ‘this’. And that’s what AI was. Very good AI was just a really good set of rules, with a counter-rule for every possible situation; a ton of different rules which humans had implemented in a program. You could have AI stretching half a million lines of code, built from enormous numbers of rules covering every outcome, in a simulated or non-simulated environment, that the programmers could humanly think of.

So, what we take from this is that AI is not necessarily something astoundingly complicated or complex; it can be pretty simple too. Essentially, if you are trying to make a computer simulate some intellectual task that a human would do, for example playing a game, that is considered AI. So even a very basic tic-tac-toe program that plays against you counts as AI.
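To make the “AI as a set of rules” idea concrete, here is a minimal sketch of a hand-written, rule-based tic-tac-toe player. Nothing here learns anything; every rule (win if you can, block if you must, otherwise grab the centre) is spelled out by the programmer, which is exactly the old-school AI described above. The function name and board encoding are my own choices for illustration.

```python
def choose_move(board, me="X", opponent="O"):
    """Pick a move for a rule-based tic-tac-toe AI.

    board is a list of 9 cells: 'X', 'O', or '' (empty),
    numbered 0-8 left to right, top to bottom.
    """
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    def winning_move(player):
        # Find a line where `player` has two marks and one empty cell.
        for a, b, c in lines:
            cells = [board[a], board[b], board[c]]
            if cells.count(player) == 2 and cells.count("") == 1:
                return (a, b, c)[cells.index("")]
        return None

    # Rule 1: win immediately if possible.
    move = winning_move(me)
    if move is not None:
        return move
    # Rule 2: block the opponent's immediate win.
    move = winning_move(opponent)
    if move is not None:
        return move
    # Rule 3: prefer the centre, then corners, then edges.
    for cell in [4, 0, 2, 6, 8, 1, 3, 5, 7]:
        if board[cell] == "":
            return cell
    return None  # board is full
```

A real “half a million lines of rules” system is just this idea scaled up enormously: more situations, more hand-written counter-rules.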
And if you think of something like the Pac-Man game, do you remember the ghost who tried to find Pac-Man and kill us? Well, that ghost can also be considered AI: what it does is attempt to find the player and work out how to get there. It does this using a basic path-finding algorithm, which has nothing to do with deep learning or machine learning or anything crazy. But it is still Artificial Intelligence; the computer is figuring out how to do something, like finding the player in a maze, by following an algorithm. So, that brings me back to the point once again: AI is not always something unbelievably hard, it just needs to simulate some form of intellectual human behaviour. This wraps up the basic idea of AI.
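As a rough illustration of that kind of path-finding, here is a breadth-first search over a grid maze; the actual Pac-Man ghosts use their own simpler per-ghost targeting rules, so treat this as a generic sketch of the idea, not the game's real code.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Find a shortest route from start to goal with breadth-first search.

    grid: list of strings where '#' is a wall; start/goal: (row, col).
    Returns the path as a list of (row, col) cells, or None if unreachable.
    """
    queue = deque([start])
    came_from = {start: None}   # also doubles as the "visited" set
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            # Walk backwards through came_from to rebuild the path.
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != "#" and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # the player is walled off

maze = ["...",
        ".#.",
        "..."]
route = shortest_path(maze, (0, 0), (2, 2))
```

No training data, no model; the “ghost” looks intelligent purely because it follows a sensible algorithm.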
Having said that, it is undeniable that AI today has evolved to incorporate much more complex subjects like ML and DL, which have become parts of the vast field of AI. Earlier, AI used to be just a set of pre-defined rules. We would feed in some data, the program would go through the rules, and we would analyse the task to ensure that each and every outcome had already been thought of, with its counter-measure (the command for ‘what to do’ in that particular scenario) programmed into the system. So, in the classic example of the chess game, say we are in ‘check’: we pass the board information to the computer, it looks at its set of rules, determines we are in ‘check’, and then moves its piece somewhere else.

Now, what is ‘Machine Learning’ in contrast to that? Machine Learning is the field of study that figures out the rules for us. Rather than us hard-coding the rules into the computer, what Machine Learning attempts to do is take the input data and what the output should be, and figure out the rules in between. We often hear that ML requires a lot of data, lots of examples and input information, in order to train a good model. The reason is that Machine Learning works by generating the “code of conduct” for us. We give our system some training data along with what the resulting data should be; our Machine Learning model looks at that information and works out the ‘in between’, so that when we have new data it can generate the best possible output for it. This is also why ML models often do not have 100% accuracy, meaning they may not give the correct answer every single time. Our goal is always to create ML models with the highest accuracy possible, that is, to reduce the chance of mistakes to as close to none as we can.
And in order to accomplish that, it is crucial to feed in as much sample/training data, both inputs and outputs, as possible. Because, just like a human, our ML models, which are trying to simulate human behaviour, are prone to making mistakes. To summarize Machine Learning: in essence, rather than us programmers spoon-feeding the computer our rules for some specific task, an algorithm finds the rules for us. Basically, it lessens the workload and readies the system to handle even a task with a seemingly limitless scope of possibilities, one which would have been mind-numbingly exhausting to code by hand. We might not know explicitly what the rules are when we look at, or create, the ML models. We simply know that we give the system some input data and the ‘expected’ output data; it figures out rules on the basis of the data we’ve provided to reach that ‘expected’ outcome, so that later, when we don’t have any ‘expected’ output data, it can generate outcomes from the rules it devised earlier, without any intervention on our part.
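A toy way to see “figuring out the rules from data” in action: below, we never tell the program the rule y = 3x + 2. We only hand it input/output examples and let ordinary least squares (one of the simplest learning procedures) recover the slope and intercept, which it can then apply to inputs it has never seen. This is a deliberately tiny stand-in for real ML training, not a production technique.

```python
def fit_line(xs, ys):
    """Learn slope and intercept from (input, output) examples
    using the closed-form least-squares solution for a line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data: inputs and the outputs we expect for them.
xs = [0, 1, 2, 3, 4]
ys = [2, 5, 8, 11, 14]          # secretly generated by y = 3x + 2

slope, intercept = fit_line(xs, ys)

# The "learned rule" now handles data it has never seen before.
prediction = slope * 10 + intercept   # should be close to 3*10 + 2 = 32
```

With only five examples this works because the hidden rule is trivially simple; real tasks have far messier rules, which is exactly why real models need far more data.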
Now, anyone who gets into an ML course online wants to know what Neural Networks are. “They can do a lot of cool things!” they say; a lot of fanfare surrounds this particular topic too. Those who haven’t heard the term before must be thinking, “What are they?” Well, the easiest way to describe a Neural Network, a child of Machine Learning, is: “a form of machine learning that uses a layered representation of data”. A lot of people call it a “multi-stage information extraction process”, but that’s too long for an average Joe like me to remember, so let’s stick with “NN” for now. Previously, when we discussed AI and ML, we came across terms like “input data”, “output data” and “rules” or “algorithms”. Try to think of these in the form of layers: through deep learning we transform our data by processing it through multiple layers, which resemble a network. It is a bit hard to describe any further without going into details, so let’s stop here. Just think of an NN as something with lots of layers!
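Just to show what “layers transforming data” looks like in code, here is a toy two-layer forward pass. The weights are made up by hand purely for illustration (a real network would learn them from data), so treat this as a sketch of the layered flow of information, nothing more.

```python
import math

def layer(inputs, weights, biases):
    """One layer: each output is a weighted sum of all inputs plus a
    bias, squashed through a sigmoid so values stay between 0 and 1."""
    outputs = []
    for w_row, b in zip(weights, biases):
        total = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-total)))
    return outputs

x = [0.5, -1.0]                                        # input layer: raw data
hidden = layer(x, [[1.0, -2.0], [0.5, 1.0]], [0.0, 0.1])  # layer 1: 2 neurons
output = layer(hidden, [[2.0, -1.0]], [-0.5])             # layer 2: 1 neuron
```

The raw input is re-represented at each stage — that is the “multi-stage information extraction” in action, just with two tiny stages instead of dozens.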

Now, if you are thinking what I’m thinking then you must have reached a deep enlightenment similar to mine! Yes! Just now, I realised that one can think of AI as the grandparent, ML as the parent and NN as the child! Funny, isn’t it?

It isn’t?

Oh well, I tried. It’s not my fault that no one here has any sense of humour!
So, the next time you think something is or isn’t AI, think again. Why? Because I said so!
Signing off. Peace Out, fellow netizens!
