<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Prazwal Ks</title>
    <description>The latest articles on DEV Community by Prazwal Ks (@prazwal).</description>
    <link>https://dev.to/prazwal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F841087%2F905c6e5d-065e-4335-be8a-b8f7aa4fce5b.png</url>
      <title>DEV Community: Prazwal Ks</title>
      <link>https://dev.to/prazwal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/prazwal"/>
    <language>en</language>
    <item>
      <title>Real-Time Detection</title>
      <dc:creator>Prazwal Ks</dc:creator>
      <pubDate>Fri, 06 May 2022 16:19:34 +0000</pubDate>
      <link>https://dev.to/prazwal/realtime-detection-45bk</link>
      <guid>https://dev.to/prazwal/realtime-detection-45bk</guid>
      <description>&lt;p&gt;Real-time detection works on streaming data: it compares previously seen data points against the latest data point to determine whether that point is an anomaly. The operation builds a model from the data points you send and determines whether the target (current) point is an anomaly. By calling the service with each new data point you generate, you can monitor your data as it's created.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Real-time detection example&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Consider a scenario in the carbonated beverage industry where real-time anomaly detection may be useful. The carbon dioxide added to soft drinks during the bottling or canning process needs to stay within a specific temperature range.&lt;/p&gt;

&lt;p&gt;Bottling systems use a device known as a carbo-cooler to refrigerate the product for this process. If the temperature goes too low, the product will freeze in the carbo-cooler. If the temperature is too warm, the carbon dioxide will not adhere properly. Either situation results in a product batch that cannot be sold to customers.&lt;/p&gt;

&lt;p&gt;This carbonated beverage scenario is an example of where you could use streaming detection for real-time decision-making. It could be tied into an application that controls the bottling line equipment. You might use it to feed displays that show the system temperatures for the quality control station. A service technician might also use it to identify potential equipment failures and servicing needs.&lt;/p&gt;

&lt;p&gt;You can use the Anomaly Detector service to create a monitoring application configured with the above criteria to perform real-time temperature monitoring. You can perform anomaly detection using both streaming and batch detection techniques. Streaming detection is most useful for monitoring critical conditions that must be acted on immediately. Sensors monitor the temperature inside the carbo-cooler and send these readings to your application or an event hub on Azure. Anomaly Detector evaluates the streaming data points and determines whether a point is an anomaly.&lt;/p&gt;
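&lt;p&gt;A minimal sketch of this streaming idea, without the Azure service: keep a sliding window of recent readings and flag the newest one when it lies far outside the window's normal range. The window size and threshold below are illustrative choices, not the Anomaly Detector algorithm.&lt;/p&gt;

```python
from collections import deque
from statistics import mean, stdev

def make_detector(window_size=30, threshold=3.0):
    """Return a checker that flags a reading as anomalous when it lies
    more than `threshold` standard deviations from the recent window."""
    window = deque(maxlen=window_size)

    def check(reading):
        if len(window) >= 2:
            mu, sigma = mean(window), stdev(window)
            is_anomaly = sigma > 0 and abs(reading - mu) > threshold * sigma
        else:
            is_anomaly = False  # not enough history yet
        window.append(reading)
        return is_anomaly

    return check

# Simulated carbo-cooler temperature stream with one spike at the end.
check = make_detector(window_size=10, threshold=3.0)
readings = [4.0, 4.1, 3.9, 4.0, 4.2, 4.1, 4.0, 3.9, 4.1, 12.5]
flags = [check(r) for r in readings]
```

&lt;p&gt;Only the final reading, which jumps far outside the established range, is flagged; each call updates the window, mirroring the one-point-at-a-time service calls described above.&lt;/p&gt;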

</description>
    </item>
    <item>
      <title>Text Analytics</title>
      <dc:creator>Prazwal Ks</dc:creator>
      <pubDate>Wed, 20 Apr 2022 16:27:38 +0000</pubDate>
      <link>https://dev.to/prazwal/text-analytics-5818</link>
      <guid>https://dev.to/prazwal/text-analytics-5818</guid>
      <description>&lt;p&gt;&lt;strong&gt;&lt;u&gt;Text Analytics Techniques&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Text analytics is a process in which an artificial intelligence (AI) algorithm, running on a computer, evaluates attributes in text to determine specific insights. A person typically relies on their own experience and knowledge to achieve such insights; a computer must be provided with similar knowledge to perform the task. Some commonly used techniques for building software to analyze text include:&lt;/p&gt;

&lt;p&gt;Statistical analysis of terms used in the text. For example, removing common "stop words" (words like "the" or "a", which reveal little semantic information about the text), and performing frequency analysis of the remaining words (counting how often each word appears) can provide clues about the main subject of the text.&lt;br&gt;
Extending frequency analysis to multi-term phrases, commonly known as N-grams (a two-word phrase is a bi-gram, a three-word phrase is a tri-gram, and so on).&lt;br&gt;
Applying stemming or lemmatization algorithms to normalize words before counting them - for example, so that words like "power", "powered", and "powerful" are interpreted as being the same word.&lt;br&gt;
Applying linguistic structure rules to analyze sentences - for example, breaking down sentences into tree-like structures such as a noun phrase, which itself contains nouns, verbs, adjectives, and so on.&lt;br&gt;
Encoding words or terms as numeric features that can be used to train a machine learning model. For example, to classify a text document based on the terms it contains. This technique is often used to perform sentiment analysis, in which a document is classified as positive or negative.&lt;br&gt;
Creating vectorized models that capture semantic relationships between words by assigning them to locations in n-dimensional space. This modeling technique might, for example, assign values to the words "flower" and "plant" that locate them close to one another, while "skateboard" might be given a value that positions it much further away.&lt;br&gt;
While these techniques can be used to great effect, programming them can be complex. In Microsoft Azure, the Language cognitive service can help simplify application development by using pre-trained models that can:&lt;/p&gt;

&lt;p&gt;Determine the language of a document or text (for example, French or English).&lt;br&gt;
Perform sentiment analysis on text to determine a positive or negative sentiment.&lt;br&gt;
Extract key phrases from text that might indicate its main talking points.&lt;br&gt;
Identify and categorize entities in the text. Entities can be people, places, organizations, or even everyday items such as dates, times, quantities, and so on.&lt;br&gt;
In this module, you'll explore some of these capabilities and gain an understanding of how you might apply them to applications such as:&lt;/p&gt;

&lt;p&gt;A social media feed analyzer to detect sentiment around a political campaign or a product in market.&lt;br&gt;
A document search application that extracts key phrases to help summarize the main subject matter of documents in a catalog.&lt;br&gt;
A tool to extract brand information or company names from documents or other text for identification purposes.&lt;br&gt;
These examples are just a small sample of the many text analytics areas in which the Language service can help.&lt;/p&gt;
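&lt;p&gt;As a concrete illustration of the statistical techniques listed earlier, here is a sketch of stop-word removal, frequency analysis, and bi-grams in plain Python. The stop-word list is a tiny illustrative one; real systems use much larger lists.&lt;/p&gt;

```python
from collections import Counter

STOP_WORDS = {"the", "a", "an", "is", "of", "to", "and", "in"}  # tiny illustrative list

def tokenize(text):
    return [w.strip(".,!?").lower() for w in text.split()]

def word_frequencies(text):
    """Frequency analysis after removing stop words."""
    words = [w for w in tokenize(text) if w not in STOP_WORDS]
    return Counter(words)

def bigrams(text):
    """Extend frequency analysis to two-word phrases (bi-grams)."""
    words = [w for w in tokenize(text) if w not in STOP_WORDS]
    return Counter(zip(words, words[1:]))

text = "The quick brown fox jumps over the lazy dog. The dog sleeps."
top_words = word_frequencies(text).most_common(2)
```

&lt;p&gt;Here "dog" surfaces as the most frequent content word, hinting at the main subject of the text, exactly the clue frequency analysis is meant to provide.&lt;/p&gt;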

</description>
    </item>
    <item>
      <title>Regression</title>
      <dc:creator>Prazwal Ks</dc:creator>
      <pubDate>Tue, 19 Apr 2022 15:01:08 +0000</pubDate>
      <link>https://dev.to/prazwal/regression-fnb</link>
      <guid>https://dev.to/prazwal/regression-fnb</guid>
      <description>&lt;p&gt;Regression is a form of machine learning that is used to predict a numeric label based on an item's features. For example, an automobile sales company might use the characteristics of a car (such as engine size, number of seats, mileage, and so on) to predict its likely selling price. In this case, the characteristics of the car are the features, and the selling price is the label.&lt;/p&gt;

&lt;p&gt;Regression is an example of a supervised machine learning technique in which you train a model using data that includes both the features and known values for the label, so that the model learns to fit the feature combinations to the label. Then, after training has been completed, you can use the trained model to predict labels for new items for which the label is unknown.&lt;/p&gt;
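&lt;p&gt;The car-price example can be sketched as a one-feature linear regression. The engine sizes and prices below are made up purely for illustration.&lt;/p&gt;

```python
import numpy as np

# Hypothetical training data: engine size (litres) is the feature,
# selling price (in thousands) is the known label.
engine_size = np.array([1.0, 1.4, 1.6, 2.0, 2.5, 3.0])
price = np.array([8.0, 10.5, 11.8, 14.9, 18.6, 22.1])

# Train: fit price = slope * engine_size + intercept by least squares.
slope, intercept = np.polyfit(engine_size, price, deg=1)

def predict_price(size):
    """Use the trained model to predict the label for a new item."""
    return slope * size + intercept

estimate = predict_price(1.8)  # a car whose label is unknown
```

&lt;p&gt;Training fits the feature-to-label relationship from the known examples; prediction then applies that relationship to a new, unlabeled item.&lt;/p&gt;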

</description>
    </item>
    <item>
      <title>Databricks and Spark Summary</title>
      <dc:creator>Prazwal Ks</dc:creator>
      <pubDate>Thu, 14 Apr 2022 14:32:59 +0000</pubDate>
      <link>https://dev.to/prazwal/databricksspark-summary-4fd2</link>
      <guid>https://dev.to/prazwal/databricksspark-summary-4fd2</guid>
      <description>&lt;p&gt;Databricks was founded by the creators of Apache Spark, Delta Lake, and MLflow.&lt;/p&gt;

&lt;p&gt;Over 2,000 global companies use the Databricks platform across the big data &amp;amp; machine learning lifecycle.&lt;/p&gt;

&lt;p&gt;Databricks' vision is to accelerate innovation by unifying data science, data engineering, and business.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Databricks offers&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
1. Databricks Workspace - Interactive Data Science &amp;amp; Collaboration&lt;br&gt;
2. Databricks Workflows - Production Jobs &amp;amp; Workflow Automation&lt;br&gt;
3. Databricks Runtime&lt;br&gt;
4. Databricks I/O (DBIO) - Optimized Data Access Layer&lt;br&gt;
5. Databricks Serverless - Fully Managed Auto-Tuning Platform&lt;br&gt;
6. Databricks Enterprise Security (DBES) - End-To-End Security &amp;amp; Compliance&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Apache Spark&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Spark is a unified processing engine that can analyze big data using SQL, machine learning, graph processing, or real-time stream analysis.&lt;/p&gt;

&lt;p&gt;&lt;u&gt;&lt;strong&gt;Spark Engine&lt;/strong&gt;&lt;/u&gt;&lt;br&gt;
At its core is the Spark Engine.&lt;br&gt;
The DataFrames API provides an abstraction above Resilient Distributed Datasets (RDDs) while simultaneously improving performance 5-20x over traditional RDDs with its Catalyst Optimizer.&lt;br&gt;
Spark ML provides high quality and finely tuned machine learning algorithms for processing big data.&lt;br&gt;
The Graph processing API gives us an approachable way to model pairwise relationships between people, objects, or nodes in a network.&lt;br&gt;
The Streaming APIs give us end-to-end fault tolerance with exactly-once semantics and the possibility of sub-millisecond latency.&lt;br&gt;
And it all works together seamlessly!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Azure Databricks</title>
      <dc:creator>Prazwal Ks</dc:creator>
      <pubDate>Thu, 14 Apr 2022 14:26:36 +0000</pubDate>
      <link>https://dev.to/prazwal/azure-data-bricks-h06</link>
      <guid>https://dev.to/prazwal/azure-data-bricks-h06</guid>
      <description>&lt;p&gt;Azure Databricks is a fully managed, cloud-based Big Data and Machine Learning platform, which empowers developers to accelerate AI and innovation by simplifying the process of building enterprise-grade production data applications. Built as a joint effort by the team that started Apache Spark and Microsoft, Azure Databricks provides data science and engineering teams with a single platform for Big Data processing and Machine Learning.&lt;/p&gt;

&lt;p&gt;By combining the power of Databricks, an end-to-end, managed Apache Spark platform optimized for the cloud, with the enterprise scale and security of Microsoft's Azure platform, Azure Databricks makes it simple to run large-scale Spark workloads.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Optimized environment&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
To address the problems seen on other Big Data platforms, Azure Databricks was optimized from the ground up, with a focus on performance and cost-efficiency in the cloud. The Databricks Runtime adds several key capabilities to Apache Spark workloads that can increase performance and reduce costs by as much as 10-100x when running on Azure, including:&lt;/p&gt;

&lt;p&gt;High-speed connectors to Azure storage services, such as Azure Blob Store and Azure Data Lake&lt;br&gt;
Auto-scaling and auto-termination of Spark clusters to minimize costs&lt;br&gt;
Caching&lt;br&gt;
Indexing&lt;br&gt;
Advanced query optimization&lt;br&gt;
By providing an optimized environment that is easy to provision and configure, Azure Databricks gives developers a performant, cost-effective platform that enables them to spend more time building applications and less time managing clusters and infrastructure.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Reading data from images</title>
      <dc:creator>Prazwal Ks</dc:creator>
      <pubDate>Wed, 13 Apr 2022 02:20:57 +0000</pubDate>
      <link>https://dev.to/prazwal/reading-data-from-images-108g</link>
      <guid>https://dev.to/prazwal/reading-data-from-images-108g</guid>
      <description>&lt;p&gt;Suppose you are given thousands of images and asked to transfer the text on the images to a computer database. The scanned images have text organized in different formats and contain multiple languages. What are some ways you could complete the project in a reasonable time frame and make sure the data is entered with a high degree of accuracy?&lt;/p&gt;

&lt;p&gt;Companies around the world tackle similar scenarios every day. Without AI services, it would be challenging to complete such a project, especially if it were to grow in scale.&lt;/p&gt;

&lt;p&gt;Using AI services, we can treat this project as a computer vision scenario and apply Optical Character Recognition (OCR). OCR allows you to extract text from images, such as photos of street signs and products, as well as from documents—invoices, bills, financial reports, articles, and more.&lt;/p&gt;

&lt;p&gt;To build an automated AI solution, you need to train machine learning models to cover many use cases. Azure's Computer Vision service is a Cognitive Service that gives access to advanced algorithms for processing images and returns data to secure storage.&lt;/p&gt;

&lt;p&gt;The Computer Vision service offers two APIs that you can use to read text.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;OCR API:&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Use this API to read small to medium volumes of text from images.&lt;br&gt;
The API can read text in multiple languages.&lt;br&gt;
Results are returned immediately from a single function call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Read API:&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Use this API to read small to large volumes of text from images and PDF documents.&lt;br&gt;
This API uses a newer model than the OCR API, resulting in greater accuracy.&lt;br&gt;
The Read API can read printed text in multiple languages, and handwritten text in English.&lt;/p&gt;

&lt;p&gt;For the Read API, the initial function call returns an asynchronous operation ID, which must be used in a subsequent call to retrieve the results.&lt;/p&gt;
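&lt;p&gt;That submit-then-poll pattern can be sketched generically. The status strings and the get_status stub below are illustrative stand-ins; the real Read API returns its own response shapes over HTTP.&lt;/p&gt;

```python
import time

def poll_until_done(get_status, operation_id, interval=0.01, max_attempts=50):
    """Poll an asynchronous operation until it succeeds or fails.

    `get_status` stands in for the HTTP call that retrieves results
    using the operation ID returned by the initial request.
    """
    for _ in range(max_attempts):
        status = get_status(operation_id)
        if status in ("succeeded", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError("operation did not finish in time")

# Stub service: reports "running" twice, then "succeeded".
responses = iter(["running", "running", "succeeded"])
result = poll_until_done(lambda op_id: next(responses), "op-123")
```

&lt;p&gt;The caller keeps checking the operation's status with the ID it was given, and only reads the results once the service reports completion.&lt;/p&gt;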

&lt;p&gt;You can access both technologies via the REST API or a client library. In the next few units, we'll show you how to call the REST API and return a JSON response. Then for the exercise, you'll use a client library to return objects that abstract the JSON response.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>ML Model</title>
      <dc:creator>Prazwal Ks</dc:creator>
      <pubDate>Fri, 08 Apr 2022 17:37:45 +0000</pubDate>
      <link>https://dev.to/prazwal/ml-model-npl</link>
      <guid>https://dev.to/prazwal/ml-model-npl</guid>
      <description>&lt;p&gt;The model is the core component of machine learning, and ultimately what we are trying to build. A model might estimate how old a person is from a photo, predict what you might like to see on social media, or decide where a robotic arm should move to. In our scenario, we want to build a model that can estimate the best boot size for a dog based on their harness size.&lt;/p&gt;

&lt;p&gt;Models can be built in lots of ways. For example, a traditional model that simulates how an airplane flies is built by people, using knowledge of physics and engineering. Machine learning models are different: rather than being hand-edited by people so that they work well, they are shaped by the data they are trained on; in effect, they learn from experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;How to think about models&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
A model can be thought of as a function that accepts data as an input and produces an output. More specifically, a model uses input data to estimate something else. For example, in our scenario, we want to build a model that is given a harness size and estimates boot size.&lt;/p&gt;

&lt;p&gt;Note that harness size and dog boot size are data; they are not part of the model. Harness size is our input, and dog boot size is the output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Models are often simple code&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Models are often not meaningfully different from simple functions you are already familiar with. Like other code, they contain logic and parameters. For example, the logic might be “multiply the harness size by parameter_1”.&lt;/p&gt;

&lt;p&gt;If parameter_1 here were 2.5, our model would multiply the harness size by 2.5 and return the result:&lt;/p&gt;
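&lt;p&gt;That model really is just a simple function; 2.5 is the illustrative parameter value from the text, not a trained one.&lt;/p&gt;

```python
def estimate_boot_size(harness_size, parameter_1=2.5):
    """A model: input data in, estimate out.
    The logic is fixed; only the parameter value changes during training."""
    return harness_size * parameter_1

boot_size = estimate_boot_size(20)  # 20 * 2.5 = 50.0
```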

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Selecting a model&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
There are many model types, some simple and some complex.&lt;/p&gt;

&lt;p&gt;Like all code, simpler models are often the most reliable and easy to understand, whilst complex models can potentially perform impressive feats. Which kind of model should be chosen depends on your goal. For example, medical scientists often work with models that are relatively simple because they are reliable and intuitive. By contrast, AI-based robots typically rely on very complex models.&lt;/p&gt;

&lt;p&gt;The first step in machine learning is selecting the kind of model that you would like to use. This means we are choosing a model based on its internal logic. For example, we might select a two-parameter model to estimate dog boot size from harness size.&lt;/p&gt;

&lt;p&gt;Notice how we have selected a model based on how it works logically, but not based on its parameter values. In fact, at this point the parameters have not yet been set to any particular value.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Parameters are discovered during training&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Parameter values are not selected by the human designer. Instead, parameter values are set to an initial guess, and then adjusted during an automated learning process called training.&lt;/p&gt;

&lt;p&gt;Given our selection of a two-parameter model (above), we now provide random guesses for our parameters.&lt;/p&gt;

&lt;p&gt;These random parameters will mean the model isn’t good at estimating boot size, so we perform training. During training, these parameters are automatically changed to two new values that give better results.&lt;/p&gt;
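&lt;p&gt;Training can be sketched for a two-parameter model (boot size = parameter_1 * harness size + parameter_2) with a least-squares fit. The measurements below are made up, so the fitted parameter values are purely illustrative.&lt;/p&gt;

```python
import numpy as np

# Hypothetical measurements: (harness size, known best boot size).
harness = np.array([52.0, 55.0, 58.0, 61.0, 63.0])
boots = np.array([32.0, 34.0, 35.5, 37.0, 38.5])

# Start from random guesses: this model won't estimate boot size well.
rng = np.random.default_rng(0)
parameter_1, parameter_2 = rng.uniform(0, 1, size=2)

# "Training" automatically replaces the guesses with better values;
# here a closed-form least-squares fit stands in for the process.
parameter_1, parameter_2 = np.polyfit(harness, boots, deg=1)

def model(harness_size):
    return parameter_1 * harness_size + parameter_2

prediction = model(57.0)
```

&lt;p&gt;The model's logic never changes during training; only the two parameter values do, moving from random guesses to values that fit the data.&lt;/p&gt;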

&lt;p&gt;(Diagram: the trained model, with 1.5 and 4 as its parameter values.)&lt;/p&gt;

&lt;p&gt;Exactly how this process works is something we will progressively explain throughout your learning journey.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>NumPy and Pandas</title>
      <dc:creator>Prazwal Ks</dc:creator>
      <pubDate>Tue, 05 Apr 2022 15:53:50 +0000</pubDate>
      <link>https://dev.to/prazwal/numpy-and-pandas-2b4o</link>
      <guid>https://dev.to/prazwal/numpy-and-pandas-2b4o</guid>
      <description>&lt;p&gt;Data scientists can use various tools and techniques to explore, visualize, and manipulate data. One of the most common ways in which data scientists work with data is to use the Python language and some specific packages for data processing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;NumPy&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
NumPy is a Python library that gives functionality comparable to mathematical tools such as MATLAB and R. While NumPy significantly simplifies the user experience, it also offers comprehensive mathematical functions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Pandas&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Pandas is an extremely popular Python library for data analysis and manipulation. Pandas is like Excel for Python, providing easy-to-use functionality for data tables.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Explore data in a Jupyter notebook&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Jupyter notebooks are a popular way of running basic scripts using your web browser. Typically, these notebooks are a single webpage, broken up into text sections and code sections that are executed on the server rather than your local machine. This means you can get started quickly without needing to install Python or other tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;Testing hypotheses&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Data exploration and analysis is typically an iterative process, in which the data scientist takes a sample of data and performs the following kinds of tasks to analyze it and test hypotheses:&lt;/p&gt;

&lt;p&gt;Clean data to handle errors, missing values, and other issues.&lt;br&gt;
Apply statistical techniques to better understand the data, and how the sample might be expected to represent the real-world population of data, allowing for random variation.&lt;br&gt;
Visualize data to determine relationships between variables, and in the case of a machine learning project, identify features that are potentially predictive of the label.&lt;br&gt;
Revise the hypothesis and repeat the process.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AI</title>
      <dc:creator>Prazwal Ks</dc:creator>
      <pubDate>Sat, 02 Apr 2022 18:36:31 +0000</pubDate>
      <link>https://dev.to/prazwal/ai-2ac7</link>
      <guid>https://dev.to/prazwal/ai-2ac7</guid>
      <description>&lt;p&gt;Artificial Intelligence is the creation of software that imitates human behaviors and capabilities.&lt;br&gt;
The key elements of &lt;strong&gt;AI&lt;/strong&gt; are:&lt;/p&gt;

&lt;p&gt;1. &lt;strong&gt;&lt;u&gt;Machine learning&lt;/u&gt;&lt;/strong&gt; - The way we "teach" a computer model to make predictions and draw conclusions from data.&lt;/p&gt;

&lt;p&gt;2. &lt;strong&gt;&lt;u&gt;Anomaly detection&lt;/u&gt;&lt;/strong&gt; - The capability to automatically detect errors or unusual activity in a system.&lt;/p&gt;

&lt;p&gt;3. &lt;strong&gt;&lt;u&gt;Computer vision&lt;/u&gt;&lt;/strong&gt; - The capability of software to interpret the world visually through cameras, video, and images.&lt;/p&gt;

&lt;p&gt;4. &lt;strong&gt;&lt;u&gt;Natural language processing&lt;/u&gt;&lt;/strong&gt; - The capability for a computer to interpret written or spoken language, and respond in kind.&lt;/p&gt;

&lt;p&gt;5. &lt;strong&gt;&lt;u&gt;Conversational AI&lt;/u&gt;&lt;/strong&gt; - The capability of a software "agent" to participate in a conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;1. Machine Learning&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
A common question is: how do machines learn?&lt;br&gt;
The answer is: from data. In today's world, we create huge volumes of data as we go about our everyday lives.&lt;/p&gt;

&lt;p&gt;From the text messages, emails, and social media posts we send to the photographs and videos we take on our phones, we generate massive amounts of information. More data still is created by millions of sensors in our homes, cars, cities, public transport infrastructure, and factories.&lt;/p&gt;

&lt;p&gt;Data scientists can use all of that data to train machine learning models that can make predictions and inferences based on the relationships they find in the data.&lt;/p&gt;

&lt;p&gt;Machine learning is the foundation for most AI solutions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;2. Anomaly Detection&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
Anomaly detection is a machine learning-based technique that analyzes data over time and identifies unusual changes.&lt;/p&gt;

&lt;p&gt;Imagine you're creating a software system to monitor credit card transactions and detect unusual usage patterns that might indicate fraud. Or an application that tracks activity in an automated production line and identifies failures. Or a racing car telemetry system that uses sensors to proactively warn engineers about potential mechanical failures before they happen.&lt;br&gt;
These kinds of scenarios can be addressed by using anomaly detection.&lt;/p&gt;

&lt;p&gt;An anomaly detection model is trained to understand expected fluctuations in the telemetry measurements over time.&lt;br&gt;
If a measurement occurs outside of the normal expected range, the model reports an anomaly that can be used to alert the race engineer to call the driver in for a pit stop to fix the issue before it forces retirement from the race.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;3. Computer vision&lt;/u&gt;&lt;/strong&gt; solutions are based on machine learning models that can be applied to visual input from cameras, videos, or images.&lt;/p&gt;

&lt;p&gt;For example, object detection machine learning models are trained to classify individual objects within an image and identify their location with a bounding box.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9-kgeFnQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8e1sh30blq9ysg9r2av.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9-kgeFnQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o8e1sh30blq9ysg9r2av.png" alt="Image description" width="400" height="266"&gt;&lt;/a&gt;&lt;br&gt;
Face detection is a specialized form of object detection that locates human faces in an image. This can be combined with classification and facial geometry analysis techniques to infer details such as age and emotional state; and even recognize individuals based on their facial features.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;4. Natural language processing (NLP)&lt;/u&gt;&lt;/strong&gt; is the area of AI that deals with creating software that understands written and spoken language.&lt;br&gt;
NLP enables you to create software that can:&lt;br&gt;
    •Analyze and interpret text in documents, email messages, and other sources.&lt;br&gt;
    •Interpret spoken language, and synthesize speech responses.&lt;br&gt;
    •Automatically translate spoken or written phrases between languages.&lt;br&gt;
    •Interpret commands and determine appropriate actions.&lt;/p&gt;

&lt;p&gt;For example, Starship Commander is a virtual reality (VR) game from Human Interact that takes place in a science-fiction world. The game uses natural language processing to enable players to control the narrative and interact with in-game characters and starship systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;5. Conversational AI&lt;/u&gt;&lt;/strong&gt; is the term used to describe solutions where AI agents participate in conversations with humans. Most commonly, conversational AI solutions use bots to manage dialogs with users. These dialogs can take place through web site interfaces, email, social media platforms, messaging systems, phone calls, and other channels.&lt;/p&gt;

&lt;p&gt;Bots can be the basis of AI solutions for:&lt;br&gt;
     •Customer support for products or services.&lt;br&gt;
     •Reservation systems for restaurants, airlines, cinemas, and other appointment-based businesses.&lt;br&gt;
     •Health care consultations and self-diagnosis.&lt;br&gt;
     •Home automation and personal digital assistants.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
