Hafiz Muhammad Attaullah

Artificial Intelligence - An Overview

Attaullah Shafiq
attaullahshafiq10@gmail.com
https://github.com/attaullahshafiq10

What is AI? Applications and Examples of AI

What is AI?

There is a lot of talk, and there are many definitions of what artificial intelligence is. One of them is about teaching machines to learn, act, and think as humans would. Another dimension is about getting machines to do what we want by imparting cognitive and sensory capabilities to them. AI is about analyzing images and videos, about natural language processing and understanding speech. It's about pattern recognition, and so on. Another axis of AI is about creating technology that is able, in some cases, to replace what humans do. The most important part of the definition of artificial intelligence is imparting the ability to think and learn on machines.

AI is anything that makes machines act more intelligently. We can think of AI as augmented intelligence. We believe that AI should not attempt to replace human experts, but rather extend human capabilities and accomplish tasks that neither humans nor machines could do on their own. The internet has given us access to more information, faster. Distributed computing and IoT have led to massive amounts of data, and social networking has encouraged most of that data to be unstructured. With Augmented Intelligence, we are putting information that subject matter experts need at their fingertips, and backing it with evidence so they can make informed decisions. We want experts to scale their capabilities and let the machines do the time-consuming work.

Based on strength, breadth, and application, AI can be described in different ways. Weak or Narrow AI is AI applied to a specific domain, for example language translators, virtual assistants, self-driving cars, AI-powered web searches, recommendation engines, and intelligent spam filters. Applied AI can perform specific tasks, but not learn new ones; it makes decisions based on programmed algorithms and training data. Strong AI or Generalized AI is AI that can interact with and operate on a wide variety of independent and unrelated tasks. It can learn new tasks to solve new problems, and it does this by teaching itself new strategies. Generalized Intelligence is the combination of many AI strategies that learn from experience and can perform at a human level of intelligence. Super AI or Conscious AI is AI with human-level consciousness, which would require it to be self-aware. Because we are not yet able to adequately define what consciousness is, it is unlikely that we will be able to create a conscious AI in the near future.

AI is the fusion of many fields of study. Computer science and electrical engineering determine how AI is implemented in software and hardware. Mathematics and statistics determine viable models and measure performance. Because AI is modeled on how we believe the brain works, psychology and linguistics play an essential role in understanding how AI might work. And philosophy provides guidance on intelligence and ethical considerations.

Impact of AI: Applications and Examples

AI means different things to different people. For a video game designer, AI means writing the code that affects how bots play, and how the environment reacts to the player. For a screenwriter, AI means a character that acts like a human, with some trope of computer features mixed in. For a data scientist, AI is a way of exploring and classifying data to meet specific goals. AI algorithms that learn by example are the reason we can talk to Watson, Alexa, Siri, Cortana, and Google Assistant, and they can talk back to us. The natural language processing and natural language generation capabilities of AI are not only enabling machines and humans to understand and interact with each other, but are creating new opportunities and new ways of doing business.

Chatbots powered by natural language processing are being used in healthcare to question patients and run basic diagnoses like real doctors. In education, they are providing students with easy-to-learn conversational interfaces and on-demand online tutors. Customer service chatbots are improving the customer experience by resolving queries on the spot and freeing up agents' time for conversations that add value. AI-powered advances in speech-to-text technology have made real-time transcription a reality. Advances in speech synthesis are the reason companies are using AI-powered voice to enhance customer experience and give their brand its own unique voice.

In the field of medicine, AI is helping patients with Lou Gehrig's disease, for example, regain their real voice in place of a computerized one. It is due to advances in AI that the field of computer vision has been able to surpass humans in tasks related to detecting and labeling objects. Computer vision is one of the reasons cars can steer their way on streets and highways and avoid hitting obstacles. Computer vision algorithms detect facial features in images and compare them with databases of face profiles. This is what allows consumer devices to authenticate the identities of their owners through facial recognition, social media apps to detect and tag users, and law enforcement agencies to identify criminals in video feeds. Computer vision algorithms are also helping automate tasks, such as detecting cancerous moles in skin images or finding symptoms in X-ray and MRI scans.

AI is impacting the quality of our lives on a daily basis. There's AI in our Netflix queue, our navigation apps, keeping spam out of our inboxes and reminding us of important events. AI is working behind the scenes monitoring our investments, detecting fraudulent transactions, identifying credit card fraud, and preventing financial crimes.

AI is impacting healthcare in significant ways by helping doctors arrive at more accurate preliminary diagnoses, reading medical imaging, and finding appropriate clinical trials for patients. It is not just influencing patient outcomes but also making operational processes less expensive. AI has the potential to access enormous amounts of information, imitate humans, even specific humans, make life-changing recommendations about health and finances, correlate data that may invade privacy, and much more.

AI Concepts, Terminology, and Application Areas

AI Introduction

Cognitive Computing (Perception, Learning, Reasoning)

AI is at the forefront of a new era of computing, Cognitive Computing. It's a radically new kind of computing, very different from the programmable systems that preceded it, as different as those systems were from the tabulating machines of a century ago. Conventional computing solutions, based on the mathematical principles that emanate from the 1940's, are programmed based on rules and logic intended to derive mathematically precise answers, often following a rigid decision tree approach. But with today's wealth of big data and the need for more complex evidence-based decisions, such a rigid approach often breaks or fails to keep up with available information. Cognitive Computing enables people to create a profoundly new kind of value, finding answers and insights locked away in volumes of data. Whether we consider a doctor diagnosing a patient, a wealth manager advising a client on their retirement portfolio, or even a chef creating a new recipe, they need new approaches to put into context the volume of information they deal with on a daily basis in order to derive value from it. These processes serve to enhance human expertise.

Cognitive Computing mirrors some of the key cognitive elements of human expertise, systems that reason about problems like a human does. When we as humans seek to understand something and to make a decision, we go through four key steps. First, we observe visible phenomena and bodies of evidence. Second, we draw on what we know to interpret what we are seeing to generate hypotheses about what it means. Third, we evaluate which hypotheses are right or wrong. Finally, we decide, choosing the option that seems best and acting accordingly. Just as humans become experts by going through the process of observation, interpretation, evaluation, and decision-making, cognitive systems use similar processes to reason about the information they read, and they can do this at massive speed and scale. Unlike conventional computing solutions, which can only handle neatly organized structured data such as what is stored in a database, cognitive computing solutions can understand unstructured data, which is 80 percent of data today.

Unstructured data is all of the information produced primarily by humans for other humans to consume. This includes everything from literature, articles, and research reports to blogs, posts, and tweets. While structured data is governed by well-defined fields that contain well-specified information, cognitive systems rely on natural language, which is governed by rules of grammar, context, and culture. Natural language is implicit, ambiguous, complex, and a challenge to process.

Cognitive systems read and interpret text like a person. They do this by breaking down a sentence grammatically, relationally, and structurally, discerning meaning from the semantics of the written material. Cognitive systems understand context. This is very different from simple speech recognition, which is how a computer translates human speech into a set of words. Cognitive systems try to understand the real intent of the user's language, and use that understanding to draw inferences through a broad array of linguistic models and algorithms. Cognitive systems learn, adapt, and keep getting smarter. They do this by learning from their interactions with us, and from their own successes and failures, just like humans do.

Machine Learning, Deep Learning, Neural Networks

Terminology and Related Concepts

Let's differentiate some of the closely related terms and concepts of AI: artificial intelligence, machine learning, deep learning, and neural networks. These terms are sometimes used interchangeably, but they do not refer to the same thing.

  • Artificial intelligence is a branch of computer science dealing with a simulation of intelligent behavior. AI systems will typically demonstrate behaviors associated with human intelligence such as planning, learning, reasoning, problem-solving, knowledge representation, perception, motion, and manipulation, and to a lesser extent social intelligence and creativity.

  • Machine learning is a subset of AI that uses computer algorithms to analyze data and make intelligent decisions based on what it has learned, without being explicitly programmed. Machine learning algorithms are trained with large sets of data and they learn from examples. They do not follow rules-based algorithms. Machine learning is what enables machines to solve problems on their own and make accurate predictions using the provided data.

  • Deep learning is a specialized subset of Machine Learning that uses layered neural networks to simulate human decision-making. Deep learning algorithms can label and categorize information and identify patterns. It is what enables AI systems to continuously learn on the job, and improve the quality and accuracy of results by determining whether decisions were correct.

  • Artificial neural networks, often referred to simply as neural networks, take inspiration from biological neural networks, although they work quite a bit differently. A neural network in AI is a collection of small computing units called neurons that take incoming data and learn to make decisions over time. Neural networks are often layered deep and are the reason deep learning algorithms become more efficient as the datasets increase in volume, as opposed to other machine learning algorithms that may plateau as data increases.

Now that you have a broad understanding of the differences between some key AI concepts, there is one more differentiation that is important to understand: that between artificial intelligence and data science. Data science is the process and method for extracting knowledge and insights from large volumes of disparate data. It's an interdisciplinary field involving mathematics, statistical analysis, data visualization, machine learning, and more. It's what makes it possible for us to appropriate information, see patterns, find meaning from large volumes of data, and use it to make decisions that drive business. Data science can use many AI techniques to derive insight from data. For example, it could use machine learning algorithms and even deep learning models to extract meaning and draw inferences from data. There is some intersection between AI and data science, but one is not a subset of the other. Rather, data science is a broad term that encompasses the entire data processing methodology, while AI includes everything that allows computers to learn how to solve problems and make intelligent decisions. Both AI and data science can involve the use of big data, that is, significantly large volumes of data.

Machine Learning

Machine Learning, a subset of AI, uses computer algorithms to analyze data and make intelligent decisions based on what it has learned.

Instead of following rules-based algorithms, machine learning builds models to classify and make predictions from data.

Machine Learning is a broad field and we can split it up into three different categories, Supervised Learning, Unsupervised Learning, and Reinforcement Learning. There are many different tasks we can solve with these.

Machine Learning Techniques and Training

We can break down Supervised Learning into three categories: Regression, Classification, and Neural Networks. Regression models are built by looking at the relationships between features x and the result y, where y is a continuous variable; essentially, regression estimates continuous values. Neural Networks refer to structures that imitate the structure of the human brain. Classification, on the other hand, focuses on discrete values: it assigns discrete class labels y based on many input features x.
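
To make the regression/classification distinction concrete, here is a minimal sketch using scikit-learn on synthetic data (the feature values and relationships are invented purely for illustration): regression predicts a continuous y, while classification predicts a discrete class label y.

```python
# Minimal sketch: regression (continuous y) vs. classification (discrete y),
# using scikit-learn on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                      # two input features x

# Regression: y is a continuous value.
y_continuous = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)
reg = LinearRegression().fit(X, y_continuous)
print("regression prediction:", reg.predict(X[:1]))

# Classification: y is a discrete class label (here 0 or 1).
y_discrete = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y_discrete)
print("classification prediction:", clf.predict(X[:1]))
```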

With Classification, we can extract features from the data. The features in this example would be beats per minute or age. Features are distinctive properties of input patterns that help in determining the output categories or classes of output. Each column is a feature and each row is a data point. Classification is the process of predicting the class of given data points. Our classifier uses some training data to understand how given input variables relate to that class.

What exactly do we mean by training? Training refers to using a learning algorithm to determine and develop the parameters of your model. While there are many algorithms for this, in layman's terms, if you are training a model to predict whether a heart will fail or not (that is, True or False values), you show the algorithm some real-life data labeled True, then some data labeled False, and you repeat this process with data having True or False values, that is, whether the heart actually failed or not. The algorithm modifies its internal values until it has learned to tell apart data that indicates heart failure (True) from data that does not (False).
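
As a rough illustration of this training process, the sketch below fits a simple scikit-learn classifier on synthetic "heart failure" data; the beats-per-minute and age features and the labeling rule are made up here just to show the mechanics, not drawn from real medical data.

```python
# Minimal sketch of "training": show the algorithm labeled examples
# (heart failure True/False) and let it adjust its internal parameters.
# All data below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
bpm = rng.normal(80, 15, n)               # feature: beats per minute
age = rng.normal(55, 12, n)               # feature: age
X = np.column_stack([bpm, age])
# Invented rule just to generate True/False labels for the example.
y = (0.03 * bpm + 0.05 * age + rng.normal(0, 1, n) > 5.5).astype(int)

model = LogisticRegression()
model.fit(X, y)                           # "training" adjusts the model's parameters
print(model.predict([[110, 70]]))         # predict True (1) or False (0) for new data
```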

With Machine Learning, we typically take a dataset and split it into three sets: Training, Validation, and Test sets. The Training subset is the data used to train the algorithm. The Validation subset is used to validate our results and fine-tune the algorithm's parameters. The Test set is data the model has never seen before, and it is used to evaluate how good our model is. We can then indicate how good the model is using metrics such as accuracy, precision, and recall.
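
Here is a minimal sketch of that split-and-evaluate workflow, again with scikit-learn and synthetic data; the 60/20/20 split used below is just one common choice.

```python
# Minimal sketch: split data into training, validation and test sets,
# then report accuracy, precision and recall on the unseen test set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# 60% training, 20% validation, 20% test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = LogisticRegression().fit(X_train, y_train)        # train on the training set
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))  # fine-tune here

y_pred = model.predict(X_test)                            # data the model has never seen
print("test accuracy :", accuracy_score(y_test, y_pred))
print("test precision:", precision_score(y_test, y_pred))
print("test recall   :", recall_score(y_test, y_pred))
```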

Deep Learning

While Machine Learning is a subset of Artificial Intelligence, Deep Learning is a specialized subset of Machine Learning. Deep Learning layers algorithms to create a Neural Network, an artificial replication of the structure and functionality of the brain, enabling AI systems to continuously learn on the job and improve the quality and accuracy of results.

This is what enables these systems to learn from unstructured data such as photos, videos, and audio files. Deep Learning, for example, enables natural language understanding capabilities of AI systems, and allows them to work out the context and intent of what is being conveyed.

Deep learning algorithms do not directly map input to output. Instead, they rely on several layers of processing units. Each layer passes its output to the next layer, which processes it and passes it on. These many layers are why it's called deep learning.

When creating deep learning algorithms, developers and engineers configure the number of layers and the type of functions that connect the outputs of each layer to the inputs of the next. Then they train the model by providing it with lots of annotated examples.

For instance, you give a deep learning algorithm thousands of images and labels that correspond to the content of each image. The algorithm will run those examples through its layered neural network, and adjust the weights of the variables in each layer of the neural network to be able to detect the common patterns that define the images with similar labels.
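
A minimal sketch of that process, assuming TensorFlow/Keras is available: a small layered network is shown thousands of labeled digit images (the MNIST dataset, downloaded by Keras) and adjusts its weights to map images to their labels. The layer sizes and number of epochs are arbitrary choices for illustration.

```python
# Minimal sketch: train a small layered network on labeled images (MNIST digits).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0          # scale pixel values to 0..1

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),                         # 28x28 pixel images
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),          # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),        # output: 10 digit classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)                       # adjust weights from labeled examples
model.evaluate(x_test, y_test)                              # check accuracy on unseen images
```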

Neural Networks

An artificial neural network is a collection of smaller units called neurons, which are computing units modeled on the way the human brain processes information. Artificial neural networks borrow some ideas from the biological neural network of the brain, in order to approximate some of its processing results. These units or neurons take incoming data like the biological neural networks and learn to make decisions over time.

A collection of neurons is called a layer, and a layer takes in an input and provides an output. Any neural network will have one input layer and one output layer. It will also have one or more hidden layers, which simulate the types of activity that go on in the human brain. Hidden layers take in a set of weighted inputs and produce an output through an activation function. A neural network having more than one hidden layer is referred to as a deep neural network.

Perceptrons are the simplest and oldest types of neural networks. They are single-layered neural networks consisting of input nodes connected directly to an output node. Input layers forward the input values to the next layer, by means of multiplying by a weight and summing the results. Hidden layers receive input from other nodes and forward their output to other nodes. Hidden and output nodes have a property called bias, which is a special type of weight that applies to a node after the other inputs are considered. Finally, an activation function determines how a node responds to its inputs. The function is run against the sum of the inputs and bias, and then the result is forwarded as an output. Activation functions can take different forms, and choosing them is a critical component to the success of a neural network.
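
The arithmetic a single perceptron-style node performs can be written in a few lines of NumPy; the weights, bias, and step activation below are arbitrary values chosen only to illustrate the weighted sum, bias, and activation described above.

```python
# Minimal sketch of one perceptron node: weighted sum of inputs, plus bias,
# passed through an activation function.
import numpy as np

def step(z):
    """Step activation: output 1 if the weighted sum exceeds 0, else 0."""
    return 1 if z > 0 else 0

x = np.array([0.7, 0.3])      # input values
w = np.array([0.9, -0.4])     # one weight per input
b = -0.2                      # bias, applied after the weighted inputs are summed

z = np.dot(w, x) + b          # multiply inputs by weights, sum, add bias
print(step(z))                # the activation function decides the node's output
```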

In a convolutional neural network (CNN), convolution occurs over a series of layers, each of which conducts a convolution on the output of the previous layer. CNNs are adept at building complex features from less complex ones.
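
A minimal Keras sketch of such a convolutional stack (the layer sizes and counts are arbitrary): each Conv2D layer convolves the output of the previous layer, building more complex features from simpler ones.

```python
# Minimal sketch of a convolutional network: each layer convolves the
# output of the layer before it.
import tensorflow as tf

cnn = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),                       # e.g. small grayscale images
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),   # low-level features (edges)
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),   # higher-level features
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
cnn.summary()   # shows how each layer transforms the previous layer's output
```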

Recurrent neural networks or RNNs, are recurrent because they perform the same task for every element of a sequence, with prior outputs feeding subsequent stage inputs. In a general neural network, an input is processed through a number of layers and an output is produced with an assumption that the two successive inputs are independent of each other, but that may not hold true in certain scenarios. For example, when we need to consider the context in which a word has been spoken, in such scenarios, dependence on previous observations has to be considered to produce the output. RNNs can make use of information in long sequences, each layer of the network representing the observation at a certain time.
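
A minimal Keras sketch of a recurrent layer (sequence length, feature count, and layer sizes are arbitrary): the same recurrent cell is applied at every step of the sequence, carrying state forward so earlier observations influence later outputs.

```python
# Minimal sketch of a recurrent network: the SimpleRNN cell is applied to
# every element of the sequence, reusing its internal state across steps.
import tensorflow as tf

rnn = tf.keras.Sequential([
    tf.keras.Input(shape=(20, 8)),                     # 20 time steps, 8 features per step
    tf.keras.layers.SimpleRNN(32),                     # state carries across time steps
    tf.keras.layers.Dense(1, activation="sigmoid"),    # e.g. one label for the whole sequence
])
rnn.compile(optimizer="adam", loss="binary_crossentropy")
rnn.summary()
```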

AI Application Areas

Some of the most common application areas of AI include natural language processing, speech, and computer vision. Now, let's look at each of these in turn.

Humans have the most advanced method of communication, which is known as natural language. While humans can use computers to send voice and text messages to each other, computers do not innately know how to process natural language. Natural language processing is a subset of artificial intelligence that enables computers to understand the meaning of human language. Natural language processing uses machine learning and deep learning algorithms to discern a word's semantic meaning. It does this by deconstructing sentences grammatically, relationally, and structurally, and understanding the context of use.
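
A minimal sketch of that grammatical and relational breakdown using the spaCy library; it assumes the small English model has been installed separately (python -m spacy download en_core_web_sm), and the sentence is just an example.

```python
# Minimal sketch: deconstruct a sentence grammatically and relationally with spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The doctor reviewed the patient's scan yesterday.")

for token in doc:
    # word, its part of speech, and its grammatical relation to its head word
    print(f"{token.text:10} {token.pos_:6} {token.dep_:10} -> {token.head.text}")
```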

Natural language processing is broken down into many subcategories related to audio and visual tasks. For computers to communicate in natural language, they need to be able to convert speech into text, so communication is more natural and easier to process. They also need to be able to convert text to speech, so users can interact with computers without having to stare at a screen. Older iterations of speech-to-text technology required programmers to go through the tedious process of discovering and codifying the rules for classifying and converting voice samples into text. With neural networks, instead of coding the rules, you provide voice samples and their corresponding text. The neural network finds the common patterns among the pronunciation of words and then learns to map new voice recordings to their corresponding texts.
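
For a sense of what this looks like from the application side, here is a minimal sketch that consumes an off-the-shelf speech-to-text service via the SpeechRecognition package, rather than training a network from scratch as described above; sample.wav is a hypothetical audio file you would supply, and recognize_google sends the audio to Google's free web API.

```python
# Minimal sketch: convert a recorded voice sample to text with a pre-built service.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("sample.wav") as source:     # hypothetical audio file
    audio = recognizer.record(source)          # read the whole recording

print(recognizer.recognize_google(audio))      # send audio, get the recognized text back
```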

The flip side of speech-to-text is text-to-speech also known as speech synthesis. In the past, the creation of a voice model required hundreds of hours of coding. Now, with the help of neural networks, synthesizing human voice has become possible. First, a neural network ingests numerous samples of a person's voice until it can tell whether a new voice sample belongs to the same person. Then, a second neural network generates audio data and runs it through the first network to see if it validates it as belonging to the subject. If it does not, the generator corrects its sample and reruns it through the classifier. The two networks repeat the process until they generate samples that sound natural. Companies use AI-powered voice synthesis to enhance customer experience and give their brands their unique voice.

The field of computer vision focuses on replicating parts of the complexity of the human visual system, and enabling computers to identify and process objects in images and videos, in the same way humans do. Computer vision is one of the technologies that enables the digital world to interact with the physical world. The field of computer vision has taken great leaps in recent years and surpasses humans in tasks related to detecting and labeling objects, thanks to advances in deep learning and neural networks. This technology enables self-driving cars to make sense of their surroundings. It plays a vital role in facial recognition applications allowing computers to match images of people's faces to their identities.
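
A minimal sketch of labeling the content of an image with a pre-trained Keras model (MobileNetV2 trained on ImageNet); photo.jpg is a hypothetical image file you would supply, and the pre-trained weights are downloaded on first use.

```python
# Minimal sketch: label the objects in an image with a pre-trained network.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")                   # pre-trained on ImageNet

img = tf.keras.utils.load_img("photo.jpg", target_size=(224, 224))   # hypothetical image
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3)[0])                # top 3 labels with confidence scores
```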

AI: Issues, Concerns and Ethical Considerations

AI and Ethics, Jobs, Bias

Issues and Concerns around AI

The number one key challenge is privacy. It's been a challenge getting through to patients, to care providers, to hospital administrators, that their information is safe.
So what we've had to do is prove that we're actually anonymizing the data, de-identifying it from the patients themselves, and then blending it into the system. That's something we've had to educate our users on to get through that barrier, because it is information that we need: we do need their information in order to provide better health outcomes.

AI and Ethical Concerns

Ethics is not a technological problem, ethics is a human problem. So that's something that all of us need to care about.

At the World Economic Forum in 2017, IBM CEO Ginni Rometty spoke about the three guiding principles that IBM follows to ensure that the AI and cognitive systems it develops are ethical, trustworthy, and socially responsible.

Purpose:
AI systems developed by IBM are designed to augment human intelligence, not to replace it. IBM refers to these systems as cognitive, rather than AI. Cognitive systems will not become conscious or gain independent agency, but will always remain under the control of humans. Cognitive systems will be embedded in systems used to enhance human capabilities.

"We say cognitive, not AI, because we are augmenting intelligence," Rometty said. "For most of our businesses and companies, it will not be man or machine... it will be a symbiotic relationship. Our purpose is to augment and really be in service of what humans do."

Transparency:
Cognitive systems must be transparent to be fully accepted as a normal part of people's everyday life. Transparency is required to gain public trust and confidence in AI judgments and decisions, so that cognitive systems can be used to their full potential.

For IBM, this has three parts:

  • People must be aware when they come into contact with AI and for what purposes it is used.
  • People must be aware of the major sources of data in use.
  • IBM clients always own their own business models, intellectual property, and data. Cognitive systems augment the client's years of industry experience and domain-specific knowledge.

IBM recognizes that AI systems must be built with people in the industry. "These systems will be most effective when trained with domain knowledge in an industry context," Rometty said.

Skills:
There are two sides to the AI story; the cognitive systems, and the humans who use them. The human side of the story must also be supported.

AI and Bias

With AI systems, we certainly have to be cognizant of things like systematic bias and ethical issues. We need to be sure that the data we feed to our AI systems does not contain bias, or that we are able to adjust for that bias, so that we are not misrepresenting the population as a whole or preferring certain groups over others. This is still a very challenging issue that we will need to work through as these AI systems are developed.

AI working for good

There are many applications of AI that are beneficial to society, helping to protect us from disease, from crime, from hunger, and from ourselves.

In the health field, AI systems are making impacts in controlling the spread of infectious diseases like Dengue fever, yellow fever, and Zika, by predicting outbreaks. The Artificial Intelligence in Medical Epidemiology (Aime) system uses over 270 variables to predict the next Dengue fever outbreak, and has an 88% accuracy rate up to three months in advance. (Aime)

Early detection is crucial in the successful treatment of many cancers, sight loss, and other health problems. AI is having an impact here too. IBM Watson systems are being trained to identify tumors and help diagnose breast, lung, prostate, and other cancers. (IBM)

Google DeepMind is working with the National Health Service (NHS) in the UK to train AI systems to interpret eye scans. (DeepMind, Vox, Forbes)

Violent crime is a seemingly insoluble issue, but again, AI is having an impact in two major areas: gun violence and knife crime. In the US, the Shotspotter system is being used to detect the sound of gunshots and alert authorities quickly. (Shotspotter)

In the UK, violent knife crime is a rapidly growing problem. Police Forces across the UK are exploring the use of an AI system called National Data Analytics Solution (NDAS). This system focuses on identifying people already known to the police who may be more likely to commit knife crime. The intention is to prioritize getting appropriate help and support for those people, but some people are interpreting this as predicting a crime before it happens, making the plan very contentious. (PublicTechnology.net)

In agriculture, keeping crops healthy and free from disease is a never-ending challenge. In areas at risk of famine, growers must be able to accurately identify multiple crop diseases with similar appearances and different treatments. In Uganda, the Mcrops project combines the use of photographs taken on cheap smartphones and computer vision to help farmers keep their crops healthy. (Mcrops)

Maximizing our efficient use of energy is critical to reducing the cost and impact of generating power. AI systems are being used here too, for managing increasingly complex electricity grids, locating damaged cables, and even helping to reduce the demand that devices make. (The Conversation)

