<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kavish Sanghvi</title>
    <description>The latest articles on DEV Community by Kavish Sanghvi (@kavishsanghvi).</description>
    <link>https://dev.to/kavishsanghvi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F392886%2Fd0100cf4-ca62-46dd-ad3e-d1f76a12f125.PNG</url>
      <title>DEV Community: Kavish Sanghvi</title>
      <link>https://dev.to/kavishsanghvi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kavishsanghvi"/>
    <language>en</language>
    <item>
      <title>Deploying web application on CI/CD Pipeline using Heroku and GitHub Actions</title>
      <dc:creator>Kavish Sanghvi</dc:creator>
      <pubDate>Fri, 29 Apr 2022 00:12:57 +0000</pubDate>
      <link>https://dev.to/kavishsanghvi/deploying-web-application-on-cicd-pipeline-using-heroku-and-github-actions-714</link>
      <guid>https://dev.to/kavishsanghvi/deploying-web-application-on-cicd-pipeline-using-heroku-and-github-actions-714</guid>
      <description>&lt;p&gt;GitHub Actions provide us with a new way to deploy to Heroku and integrate Heroku into other aspects of our development workflows. Linting our Docker file and package configs, building and testing the package on multiple environments, compiling release notes, and publishing our app to Heroku could all be done in a single GitHub Actions workflow.&lt;/p&gt;

&lt;h2&gt;Why GitHub Actions and Heroku?&lt;/h2&gt;

&lt;p&gt;Deployment via GitHub Actions and Heroku is one of the simplest methods, which is why this platform was chosen. With Heroku, there is no need to learn server configuration, network management, or database tuning: Heroku removes those roadblocks, allowing developers to focus on creating applications. Users can easily select the subscription plan that best suits their needs, and pricing is pay-as-you-go, billed per second. Heroku is also extremely adaptable: if an application is experiencing high HTTP traffic, you simply scale up its dynos. GitHub Actions provides a continuous integration and continuous delivery platform that automates your build, test, and deployment pipelines, with support for many programming languages, allowing a diverse set of applications to be deployed.&lt;/p&gt;

&lt;h2&gt;What is CI/CD?&lt;/h2&gt;

&lt;p&gt;Continuous integration is a practice in which developers regularly merge their code changes into a central repository, after which automated builds and tests are run; a build is generated for every newly pushed commit. Continuous deployment is the practice of automatically deploying code to production. Because each deployment contains only a small set of changes, changes reach production more quickly than with manual deployment, and deployments tend to be safer: if a bug appears in the application, it is quick to determine which deployment introduced it. With continuous deployment, we do not wait for someone else to push a button to send changes to production, and we do not consolidate all our changes into a single release. Instead, every change we push to our main branch is deployed automatically, provided our automated checks pass.&lt;/p&gt;

&lt;h2&gt;What is GitHub Actions?&lt;/h2&gt;

&lt;p&gt;GitHub Actions allows you to build custom workflows in response to GitHub events. A push, for example, could trigger continuous integration (CI), a new issue could trigger a bot response, or a merged pull request could trigger a deployment. Workflows are constructed from jobs, which a runner executes in a virtual environment. Jobs are made up of individual steps, each either a script or an action run on the runner. Actions are modular workflow units that can be created in Docker containers, placed directly in your repository, or included via the GitHub Marketplace or a Docker registry.&lt;/p&gt;
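
&lt;p&gt;The anatomy described above (event, job, runner, steps, actions) can be sketched as a minimal workflow file; the file name, job name, and commands below are illustrative, not from an actual project:&lt;/p&gt;

```yaml
# .github/workflows/example.yml (illustrative skeleton)
name: Example workflow
on: [push]                # the GitHub event that triggers the workflow
jobs:
  my-job:                 # a job, executed by a runner in a virtual environment
    runs-on: ubuntu-latest
    steps:                # individual steps: scripts or reusable actions
      - uses: actions/checkout@v2        # an action from the GitHub Marketplace
      - run: echo "Hello from a step script"
```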

&lt;h2&gt;Setup Continuous Integration&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the Actions option from the toolbar in your project’s GitHub repository.&lt;/li&gt;
&lt;li&gt;Choose the New workflow option. If this is the first time you are creating an Action, you can skip this step and move on to step 3.&lt;/li&gt;
&lt;li&gt;Choose the Node.js template from the options for continuous integration workflows.&lt;/li&gt;
&lt;li&gt;GitHub will create a node.js.yml file in your repository. Choose the Start commit option and commit the new file to your repository. This starts the workflow run and completes the continuous integration setup.&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Node.js CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [14.x]

    steps:
    - uses: actions/checkout@v2
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v2
      with:
        node-version: ${{ matrix.node-version }}
        cache: 'npm'
    - run: npm ci
    - run: npm run build --if-present
    - run: npm run lint:check &amp;amp;&amp;amp; npm test
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Setup Continuous Delivery&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Choose the Account Settings option from your Heroku dashboard.&lt;/li&gt;
&lt;li&gt;Navigate to the API Key section and click on Reveal.&lt;/li&gt;
&lt;li&gt;Copy the revealed API Key.&lt;/li&gt;
&lt;li&gt;Return to your GitHub repository and choose the Settings option from the toolbar.&lt;/li&gt;
&lt;li&gt;Choose the Secrets option from the sidebar and add a new repository secret.&lt;/li&gt;
&lt;li&gt;To add a new repository secret, you must provide a name and a value. You can use HEROKU_API_KEY for the name field and paste the API Key you copied from Heroku into the value field.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The final step is to deploy the code to Heroku after the continuous delivery pipeline has been created.&lt;/p&gt;

&lt;h2&gt;Deploy code&lt;/h2&gt;

&lt;p&gt;The code can be deployed in a variety of ways. Connecting your GitHub repository to Heroku and enabling automatic deployments is normally the most common and straightforward method. However, following a security incident, Heroku suspended its OAuth-based GitHub integration at the time of writing, so this route is unavailable.&lt;/p&gt;

&lt;p&gt;An alternative method is to deploy your app to Heroku using the Heroku CLI. You must first install the Heroku CLI on your local machine. Once installed, you can use the following commands to deploy your app to Heroku:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;heroku login
heroku git:clone -a app-name
git push heroku main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After manually deploying the app, you must add a deploy job to the existing node.js.yml file so that changes are automatically deployed to Heroku whenever you push them to the main branch of your GitHub repository.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: akhileshns/heroku-deploy@v3.12.12
        with:
          heroku_api_key: ${{secrets.HEROKU_API_KEY}}
          heroku_app_name: "app-name"
          heroku_email: "example@emal.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These lines define the deploy job, which is responsible for automatically deploying the changes you push to your GitHub repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdipgvlny3n2faapjzey6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdipgvlny3n2faapjzey6.png" alt=" " width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzely1aop7m05siqa6zte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzely1aop7m05siqa6zte.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;When the continuous deployment pipeline is fully configured, any code that you push to the branch of your GitHub repository that is linked to the deployment will be automatically deployed and reflected on your web application. Each deployment appears as a separate entry under the Actions tab of your GitHub repository. Deploying this application provided first-hand experience in configuring the environment, pipelines, and branches, which made it possible to deploy the application successfully and to troubleshoot the errors that arose. To summarize, automating deployment is extremely beneficial: it eliminates error-prone manual steps and deploys as soon as a commit is merged, so application changes can be live in production in no time.&lt;/p&gt;

&lt;h2&gt;Contact&lt;/h2&gt;

&lt;p&gt;&lt;a href="mailto:sanghvi_kavish@yahoo.in"&gt;Email&lt;/a&gt; | &lt;a href="https://www.linkedin.com/in/kavishsanghvi" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; | &lt;a href="https://kavishsanghvi.tech" rel="noopener noreferrer"&gt;Website&lt;/a&gt; | &lt;a href="https://www.medium.com/@kavishsanghvi" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; | &lt;a href="https://kavishsanghviblog.wordpress.com" rel="noopener noreferrer"&gt;Blog&lt;/a&gt; | &lt;a href="https://twitter.com/kavishsanghvi25" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; | &lt;a href="https://www.facebook.com/kavish.sanghvi.5" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt; | &lt;a href="https://www.instagram.com/kavishsanghvi96" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Thank you for reading my article. Please like, share, and comment if you liked it or found it useful.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>heroku</category>
      <category>github</category>
      <category>deployment</category>
      <category>cicdpipeline</category>
    </item>
    <item>
      <title>Image Classification Techniques</title>
      <dc:creator>Kavish Sanghvi</dc:creator>
      <pubDate>Mon, 22 Jun 2020 09:23:10 +0000</pubDate>
      <link>https://dev.to/kavishsanghvi/image-classification-techniques-50ma</link>
      <guid>https://dev.to/kavishsanghvi/image-classification-techniques-50ma</guid>
      <description>&lt;p&gt;Image classification refers to a process in computer vision that can classify an image according to its visual content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Today, with the growing importance, necessity, and range of applications of artificial intelligence, fields like machine learning and its subsets, deep learning and neural networks, have gained immense momentum. Training these models requires software and tools like classifiers, which are fed huge amounts of data, analyze it, and extract useful features. The intent of the classification process is to categorize all pixels in a digital image into one of several classes. Normally, multi-spectral data are used to perform the classification and, indeed, the spectral pattern present within the data for each pixel is used as the numerical basis for categorization. The objective of image classification is to identify and portray, as a unique gray level (or color), the features occurring in an image in terms of the objects these features actually represent on the ground.&lt;/p&gt;

&lt;p&gt;Image classification is perhaps the most important part of digital image analysis. Distinguishing between objects is a complex task, which is why image classification has long been an important problem within computer vision. Image classification refers to the labelling of images into one of a number of predefined classes; there are potentially n classes into which a given image can be classified. Manually checking and classifying images is tedious, especially when the images are massive in number, so it is very useful to automate the entire process using computer vision.&lt;/p&gt;

&lt;p&gt;The advancements in the field of autonomous driving serve as a great example of image classification in the real world. Other applications include automated image organization, stock photography and video websites, visual search for improved product discoverability, large visual databases, and image and face recognition on social networks, which is why we need classifiers that achieve the maximum possible accuracy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Structure for performing Image Classification&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Image Pre-processing: The aim of this process is to improve the image data (features) by suppressing unwanted distortions and enhancing some important image features, so that computer vision models can benefit from the improved data. Steps for image pre-processing include reading the image, resizing it, and data augmentation (gray scaling, reflection, Gaussian blurring, histogram equalization, rotation, and translation).&lt;/li&gt;
&lt;li&gt;Detection of an object: Detection refers to the localization of an object, which means segmenting the image and identifying the position of the object of interest.&lt;/li&gt;
&lt;li&gt;Feature extraction and training: This is a crucial step wherein statistical or deep learning methods are used to identify the most interesting patterns of the image, features that might be unique to a particular class and that will, later on, help the model to differentiate between different classes. This process where the model learns the features from the dataset is called model training.&lt;/li&gt;
&lt;li&gt;Classification of the object: This step categorizes detected objects into predefined classes by using a suitable classification technique that compares the image patterns with the target patterns.&lt;/li&gt;
&lt;/ol&gt;
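
&lt;p&gt;As a concrete illustration of step 1, here is a minimal pure-Python sketch of one pre-processing operation, gray scaling, using the standard ITU-R BT.601 luminance weights; the tiny image is made up for illustration:&lt;/p&gt;

```python
# Gray scaling: collapse each RGB pixel to a single luminance value,
# a common way to reduce input dimensionality before training.
def to_grayscale(rgb_image):
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in row]
            for row in rgb_image]

# red, green / blue, white
image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
print(to_grayscale(image))   # [[76, 150], [29, 255]]
```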

&lt;p&gt;&lt;strong&gt;Supervised Classification&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Supervised classification is based on the idea that a user can select sample pixels in an image that are representative of specific classes and then direct the image processing software to use these training sites as references for the classification of all other pixels in the image. Training sites (also known as training sets or input classes) are selected based on the knowledge of the user. The user also sets the bounds for how similar other pixels must be to group them together; these bounds are often set based on the spectral characteristics of the training area. The user also designates the number of classes that the image is classified into. Once a statistical characterization has been achieved for each information class, the image is classified by examining the reflectance of each pixel and deciding which of the signatures it resembles most. Supervised classification uses classification algorithms and regression techniques to develop predictive models. The algorithms include linear regression, logistic regression, neural networks, decision trees, support vector machines, random forests, naive Bayes, and k-nearest neighbor.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unsupervised Classification&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unsupervised classification is where the outcomes (groupings of pixels with common characteristics) are based on the software analysis of an image without the user providing sample classes. The computer uses techniques to determine which pixels are related and groups them into classes. The user can specify which algorithm the software will use and the desired number of output classes but otherwise does not aid in the classification process. However, the user must have knowledge of the area being classified when the groupings of pixels with common characteristics produced by the computer have to be related to actual features on the ground. Some of the most common algorithms used in unsupervised learning include cluster analysis, anomaly detection, neural networks, and approaches for learning latent variable models.&lt;/p&gt;
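
&lt;p&gt;A minimal sketch of the unsupervised grouping idea using 1-D k-means, one of the cluster-analysis techniques mentioned above; the pixel intensities and starting centers are made up for illustration:&lt;/p&gt;

```python
# 1-D k-means: assign each point to the nearest center, move each center
# to the mean of its group, repeat. No labels are provided by the user.
def kmeans_1d(points, centers, iterations=10):
    for _ in range(iterations):
        groups = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) for g in groups.values() if g]
    return sorted(centers)

pixels = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]   # e.g. dark vs. bright intensities
print(kmeans_1d(pixels, centers=[0.0, 1.0]))   # converges near 0.15 and 0.85
```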

&lt;p&gt;&lt;strong&gt;Convolutional Neural Network&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Convolutional neural networks (CNNs, or ConvNets) are a special kind of multi-layer neural network, designed to recognize visual patterns directly from pixel images with minimal pre-processing. The architecture borrows ideas from the visual cortex, and convolutional neural networks have achieved state-of-the-art results in computer vision tasks. Convolutional neural networks are built from two very simple elements, namely convolutional layers and pooling layers. Although these elements are simple, there are near-infinite ways to arrange them for a given computer vision problem; the challenging part of using convolutional neural networks in practice is designing model architectures that best use these simple elements. A major reason convolutional neural networks are so popular is their architecture: there is no need for manual feature extraction, because the system learns to extract features itself. The core concept is that convolving the image with filters generates invariant features, which are passed on to the next layer; the features in the next layer are convolved with different filters to generate more invariant and abstract features, and the process continues until the final features/output are invariant to occlusions. The most commonly used convolutional neural network architectures are LeNet, AlexNet, ZFNet, GoogLeNet, VGGNet, and ResNet.&lt;/p&gt;
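
&lt;p&gt;To make the convolution idea concrete, here is a minimal pure-Python sketch of the operation a convolutional layer performs (as in most deep learning libraries, it is technically cross-correlation); the image and the edge filter are made up for illustration, whereas a real CNN learns its filter weights during training:&lt;/p&gt;

```python
def convolve2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and record
    the response (sum of elementwise products) at each position."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            row.append(s)
        out.append(row)
    return out

# A vertical-edge filter responds strongly where intensity changes left to right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_filter = [
    [-1, 1],
    [-1, 1],
]
print(convolve2d(image, edge_filter))   # each row: [0.0, 2.0, 0.0]
```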

&lt;p&gt;&lt;strong&gt;Artificial Neural Network&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Inspired by the properties of biological neural networks, artificial neural networks are statistical learning algorithms used for a variety of tasks, from relatively simple classification tasks to computer vision and speech recognition. Artificial neural networks are implemented as a system of interconnected processing elements, called nodes, which are functionally analogous to biological neurons. The connections between different nodes have numerical values, called weights, and by altering these values in a systematic way, the network eventually learns to approximate the desired function. The hidden layers can be thought of as individual feature detectors, recognizing more and more complex patterns in the data as it is propagated through the network. For example, if the network is given the task of recognizing a face, the first hidden layer might act as a line detector, the second hidden layer takes these lines as input and puts them together to form a nose, the third hidden layer takes the nose and matches it with an eye, and so on, until finally the whole face is constructed. This hierarchy enables the network to eventually recognize very complex objects. The different types of artificial neural networks are convolutional neural networks, feedforward neural networks, probabilistic neural networks, time delay neural networks, deep stacking networks, radial basis function networks, and recurrent neural networks.&lt;/p&gt;
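
&lt;p&gt;A minimal sketch of the forward pass through such a network, with made-up weights; a real network would learn these values during training:&lt;/p&gt;

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # each node: weighted sum of its inputs plus a bias, squashed by sigmoid
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# 2 inputs, 2 hidden nodes, 1 output node; all weights are illustrative
hidden = layer([0.5, -1.0], [[0.8, 0.2], [-0.4, 0.9]], [0.0, 0.1])
output = layer(hidden, [[1.5, -1.1]], [0.3])
print(output)   # a single activation between 0 and 1
```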

&lt;p&gt;&lt;strong&gt;Support Vector Machine&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Support vector machines (SVMs) are powerful yet flexible supervised machine learning algorithms used for both classification and regression. Support vector machines have their own unique way of implementation compared to other machine learning algorithms, and they are extremely popular because of their ability to handle multiple continuous and categorical variables. A support vector machine model is essentially a representation of different classes separated by a hyperplane in multidimensional space. The hyperplane is generated in an iterative manner so that the error is minimized; the goal is to divide the datasets into classes by finding a maximum marginal hyperplane. The algorithm builds a hyperplane, or a set of hyperplanes, in a high-dimensional space, and good separation between two classes is achieved by the hyperplane that has the largest distance to the nearest training data point of any class. The real power of this algorithm depends on the kernel function being used; the most commonly used kernels are the linear kernel, Gaussian kernel, and polynomial kernel.&lt;/p&gt;
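
&lt;p&gt;Once trained, a linear support vector machine reduces to a hyperplane (w, b), and classifying a point is just taking the sign of the decision function. A minimal sketch with hypothetical, hand-picked weights standing in for a real training run:&lt;/p&gt;

```python
def decision(w, b, x):
    # signed distance-like score: on which side of the hyperplane does x lie?
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(w, b, x):
    return 1 if decision(w, b, x) > 0 else -1

w, b = [2.0, -1.0], -0.5          # hypothetical separating hyperplane
print(classify(w, b, [2.0, 1.0])) # 2*2 - 1 - 0.5 = 2.5, so class 1
print(classify(w, b, [0.0, 1.0])) # -1 - 0.5 = -1.5, so class -1
```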

&lt;p&gt;&lt;strong&gt;K-Nearest Neighbor&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;K-nearest neighbor is a non-parametric method used for classification and regression; in both cases, the input consists of the k closest training examples in the feature space. It is by far the simplest algorithm: a non-parametric, lazy learning algorithm in which the function is only approximated locally and all computation is deferred until function evaluation. The algorithm simply relies on the distance between feature vectors and classifies unknown data points by finding the most common class among the k closest examples. To apply k-nearest neighbor classification, we need to define a distance metric or similarity function; common choices include the Euclidean distance and the Manhattan distance. The output is a class membership: an object is classified by a plurality vote of its neighbors and assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, the object is simply assigned to the class of its single nearest neighbor. Condensed nearest neighbor (CNN, the Hart algorithm) is an algorithm designed to reduce the data set for k-nearest neighbor classification.&lt;/p&gt;
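
&lt;p&gt;The whole algorithm fits in a few lines. A minimal pure-Python sketch using Euclidean distance and a plurality vote, with a made-up toy training set:&lt;/p&gt;

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (features, label); classify query by plurality vote
    among the k nearest points under Euclidean distance."""
    neighbors = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [([1.0, 1.0], "cat"), ([1.2, 0.8], "cat"),
         ([5.0, 5.0], "dog"), ([5.2, 4.9], "dog"), ([4.8, 5.1], "dog")]
print(knn_classify(train, [1.1, 0.9], k=3))   # "cat"
```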

&lt;p&gt;&lt;strong&gt;Naïve Bayes Algorithm&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Naive Bayes classifiers are a collection of classification algorithms based on Bayes’ theorem. It is not a single algorithm but a family of algorithms that share a common principle: every pair of features being classified is independent of each other, given the class variable. Naive Bayes is a simple technique for constructing classifiers: models that assign class labels, drawn from some finite set, to problem instances represented as vectors of feature values. Naive Bayes is a fast, highly scalable algorithm that can be used for binary and multi-class classification; it depends on doing a bunch of counts, is a popular choice for tasks such as text classification and spam email classification, and can easily be trained on small datasets. Its limitation is that it treats all features as unrelated: naive Bayes can learn the importance of individual features but cannot learn the relationships among them. The different types of naive Bayes algorithms are Gaussian naive Bayes, multinomial naive Bayes, and Bernoulli naive Bayes.&lt;/p&gt;
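
&lt;p&gt;A minimal multinomial naive Bayes sketch for text, built from word counts with Laplace smoothing; the tiny spam/ham corpus is invented for illustration:&lt;/p&gt;

```python
import math
from collections import Counter

# P(class | words) is proportional to P(class) times the product of
# P(word | class) over the words, assuming word independence per class.
docs = [("win money now", "spam"), ("free money offer", "spam"),
        ("meeting schedule today", "ham"), ("project status meeting", "ham")]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in docs:
    class_counts[label] += 1
    word_counts[label].update(text.split())

def log_posterior(text, label):
    total = sum(word_counts[label].values())
    vocab = len(set(w for c in word_counts.values() for w in c))
    score = math.log(class_counts[label] / sum(class_counts.values()))
    for w in text.split():
        # Laplace smoothing so an unseen word does not zero out the product
        score += math.log((word_counts[label][w] + 1) / (total + vocab))
    return score

def classify_text(text):
    return max(word_counts, key=lambda label: log_posterior(text, label))

print(classify_text("free money"))   # "spam"
```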

&lt;p&gt;&lt;strong&gt;Random Forest Algorithm&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Random forest is a supervised learning algorithm used for both classification and regression. Just as a forest is made up of trees, and more trees make a more robust forest, the random forest algorithm creates decision trees on data samples, gets a prediction from each of them, and finally selects the best solution by means of voting. It is an ensemble method that is better than a single decision tree because it reduces over-fitting by averaging the results. Random forest uses bagging and feature randomness when building each individual tree to try to create an uncorrelated forest of trees whose prediction by committee is more accurate than that of any individual tree.&lt;/p&gt;
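
&lt;p&gt;A minimal sketch of the bagging-plus-voting idea, using one-rule decision stumps in place of full decision trees; the data, the midpoint thresholding rule, and the stump count are all made up for illustration:&lt;/p&gt;

```python
import random
from collections import Counter

# Train many weak learners on bootstrap samples with a randomly chosen
# feature, then classify by majority vote, as a random forest does.
def train_stump(sample):
    f = random.randrange(len(sample[0][0]))   # feature randomness
    def mean(lbl):
        n = max(1, sum(1 for _, y in sample if y == lbl))
        return sum(x[f] for x, y in sample if y == lbl) / n
    thresh = (mean(0) + mean(1)) / 2          # split between class means
    return lambda x: 1 if x[f] > thresh else 0

def forest_predict(stumps, x):
    votes = Counter(stump(x) for stump in stumps)
    return votes.most_common(1)[0][0]

random.seed(0)
data = [([1.0, 1.2], 0), ([0.9, 1.0], 0), ([3.0, 3.1], 1), ([3.2, 2.9], 1)]
# bagging: each stump sees a bootstrap sample drawn with replacement
stumps = [train_stump([random.choice(data) for _ in data]) for _ in range(25)]
print(forest_predict(stumps, [3.1, 3.0]))   # 1
```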

&lt;p&gt;&lt;strong&gt;Contact&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="mailto:info@kavishsanghvi.tech"&gt;Email&lt;/a&gt; | &lt;a href="https://www.linkedin.com/in/kavishsanghvi" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; | &lt;a href="https://kavishsanghvi.tech" rel="noopener noreferrer"&gt;Website&lt;/a&gt; | &lt;a href="https://www.medium.com/@kavishsanghvi" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; | &lt;a href="https://kavishsanghviblog.wordpress.com" rel="noopener noreferrer"&gt;Blog&lt;/a&gt; | &lt;a href="https://twitter.com/kavishsanghvi25" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; | &lt;a href="https://www.facebook.com/kavish.sanghvi.5" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt; | &lt;a href="https://www.instagram.com/kavishsanghvi96" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you for reading my article. Please like, share, and comment if you liked it or found it useful.&lt;/p&gt;

</description>
      <category>imageclassification</category>
      <category>classifiers</category>
      <category>convolutionalneuralnetwork</category>
      <category>classificationtechniques</category>
    </item>
    <item>
      <title>Artificial Intelligence in Healthcare</title>
      <dc:creator>Kavish Sanghvi</dc:creator>
      <pubDate>Mon, 22 Jun 2020 08:13:47 +0000</pubDate>
      <link>https://dev.to/kavishsanghvi/artificial-intelligence-in-healthcare-13ln</link>
      <guid>https://dev.to/kavishsanghvi/artificial-intelligence-in-healthcare-13ln</guid>
      <description>&lt;p&gt;Artificial Intelligence is getting increasingly sophisticated at doing what humans do, but more efficiently, more quickly and at a lower cost. The potential for artificial intelligence in healthcare is vast, and are increasingly a part of our healthcare eco-system. When many of us hear the term “artificial intelligence”, we imagine robots doing our jobs, rendering people obsolete, and since artificial intelligence-driven computers are programmed to make decisions with little human intervention, some wonder if machines will soon make the difficult decisions we now entrust to our doctors. Artificial intelligence in healthcare mainly refers to doctors and hospitals accessing vast data sets of potentially life-saving information, this includes treatment methods and their outcomes, survival rates, and speed of care gathered across millions of patients, geographical locations, and innumerable and sometimes interconnected health conditions. New computing power can detect and analyze large and small trends from the data and even make predictions through machine learning that’s designed to identify potential health outcomes.&lt;/p&gt;

&lt;p&gt;Artificial intelligence in healthcare is the use of complex algorithms and software to emulate human cognition in the analysis, interpretation, and comprehension of complicated medical and healthcare data. Artificial intelligence is the ability of computer algorithms to approximate conclusions without direct human input. What distinguishes artificial intelligence technology from traditional technologies in health care is the ability to gain information, process it, and give a well-defined output to the end-user. Artificial intelligence does this through machine learning algorithms and deep learning; these algorithms can recognize patterns in behavior and create their own logic. To reduce the error margin, artificial intelligence algorithms need to be tested repeatedly. Artificial intelligence algorithms behave differently from humans in two ways: (1) algorithms are literal: if you set a goal, the algorithm cannot adjust itself and only understands what it has been told explicitly; (2) some deep learning algorithms are black boxes: they can make extremely precise predictions, but cannot explain the cause or the why. The primary aim of health-related artificial intelligence applications is to analyze relationships between prevention or treatment techniques and patient outcomes. AI programs have been developed and applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. Large technology companies such as IBM and Google have developed artificial intelligence algorithms for healthcare. Additionally, hospitals are looking to artificial intelligence software to support operational initiatives that increase cost savings, improve patient satisfaction, and satisfy their staffing and workforce needs. Companies are developing predictive analytics solutions that help healthcare managers improve business operations by increasing utilization, decreasing patient boarding, reducing the length of stay, and optimizing staffing levels.&lt;/p&gt;

&lt;p&gt;Artificial intelligence simplifies the lives of patients, doctors, and hospital administrators by performing tasks that are typically done by humans, but in less time and at a fraction of the cost. One of the world’s highest-growth industries, the artificial intelligence sector was valued at about $600 million in 2014 and is projected to reach $150 billion by 2026. Undoubtedly, no other industry has been touched by artificial intelligence as heavily as healthcare. It all comes down to letting medical practitioners do what they are best at, creativity, and letting machines do what they do best: precision and routine tasks. This creates a win-win situation for everyone involved. According to Accenture analysis, when combined, key clinical health artificial intelligence applications can potentially create $150 billion in annual savings for the American healthcare economy by 2026. Artificial intelligence has countless applications in healthcare: whether it is being used to discover links between genetic codes, to power surgical robots, or to maximize hospital efficiency, artificial intelligence has been a boon to the healthcare industry. One in three misdiagnoses results in serious injury or death. An estimated 40,000 to 80,000 deaths occur each year in American hospitals related to misdiagnosis, and an estimated 12 million Americans suffer a diagnostic error each year in a primary care setting — 33% of which result in serious or permanent damage or death. In light of that, the promise of improving the diagnostic process is one of artificial intelligence’s most exciting healthcare applications. Incomplete medical histories and large caseloads can lead to deadly human errors; immune to those variables, artificial intelligence can predict and diagnose disease at a faster rate than most medical professionals. Let’s understand how artificial intelligence is changing the healthcare industry.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developing new medicines with Artificial Intelligence&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The drug development industry is bogged down by skyrocketing development costs and research that takes thousands of human hours. It costs about $2.6 billion to put each drug through clinical trials, and only 10% of those drugs are successfully brought to market. Due to breakthroughs in technology, biopharmaceutical companies are quickly taking notice of the efficiency, accuracy, and knowledge that artificial intelligence can provide. One of the biggest artificial intelligence breakthroughs in drug development came in 2007, when researchers tasked a robot named Adam with researching the functions of yeast. Adam scoured billions of data points in public databases to hypothesize about the functions of 19 genes within yeast, predicting 9 new and accurate hypotheses. Adam’s robot counterpart, Eve, discovered that triclosan, a common ingredient in toothpaste, can combat malaria-causing parasites.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streamlining Patient Experience with Artificial Intelligence&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the healthcare industry, time is money. Efficiently providing a seamless patient experience allows hospitals, clinics, and physicians to treat more patients each day. American hospitals saw more than 35 million patients in 2016, each with different ailments, insurance coverage, and conditions that factor into providing service. A 2016 study of 35,000 physician reviews revealed that 96% of patient complaints concern lack of customer service, confusion over paperwork, and negative front desk experiences. New innovations in artificial intelligence healthcare technology are streamlining the patient experience, helping hospital staff process millions, if not billions, of data points faster and more efficiently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mining and Managing Medical Data with Artificial Intelligence&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Healthcare is widely considered one of the next big data frontiers to tame. Highly valuable information can sometimes get lost among the forest of trillions of data points, losing the industry around $100 billion a year. Additionally, the inability to connect important data points is slowing the development of new drugs, preventative medicine, and proper diagnosis. Many in healthcare are turning to artificial intelligence as a way to stop the data hemorrhaging. The technology breaks down data silos and connects in minutes information that used to take years to process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Artificial Intelligence Robot-Assisted Surgery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The popularity of robot-assisted surgery is skyrocketing. Hospitals are using robots to help with everything from minimally invasive procedures to open-heart surgery. Robots help doctors perform complex procedures with precision, flexibility, and control beyond human capabilities. Robots equipped with cameras, mechanical arms, and surgical instruments augment the experience, skill, and knowledge of doctors to create a new kind of surgery. Seated at a computer console, the surgeon controls the mechanical arms while the robot provides a three-dimensional, magnified view of the surgical site that surgeons could not get from relying on their eyes alone. The surgeon then leads the other team members, who work closely with the robot through the entire operation. Robot-assisted surgeries have led to fewer surgery-related complications, less pain, and quicker recovery times.&lt;/p&gt;

&lt;p&gt;In conclusion, the best opportunities for artificial intelligence in healthcare over the next few years are hybrid models, in which clinicians are supported in diagnosis, treatment planning, and identifying risk factors but retain ultimate responsibility for the patient’s care. By mitigating perceived risk, this approach will speed adoption by healthcare providers and begin to deliver measurable improvements in patient outcomes and operational efficiency at scale. Artificial intelligence in healthcare is already changing the patient experience, how clinicians practice medicine, and how the pharmaceutical industry operates; and this is just the beginning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Contact&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="mailto:info@kavishsanghvi.tech"&gt;Email&lt;/a&gt; | &lt;a href="https://www.linkedin.com/in/kavishsanghvi" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; | &lt;a href="https://kavishsanghvi.tech" rel="noopener noreferrer"&gt;Website&lt;/a&gt; | &lt;a href="https://www.medium.com/@kavishsanghvi" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; | &lt;a href="https://kavishsanghviblog.wordpress.com" rel="noopener noreferrer"&gt;Blog&lt;/a&gt; | &lt;a href="https://twitter.com/kavishsanghvi25" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; | &lt;a href="https://www.facebook.com/kavish.sanghvi.5" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt; | &lt;a href="https://www.instagram.com/kavishsanghvi96" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you for reading my article. Please like, share, and comment if you liked it or found it useful.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>healthcare</category>
      <category>aiinhealthcare</category>
      <category>healthcaretechnology</category>
    </item>
    <item>
      <title>Fauna Image Classification using Convolutional Neural Network</title>
      <dc:creator>Kavish Sanghvi</dc:creator>
      <pubDate>Fri, 22 May 2020 14:06:42 +0000</pubDate>
      <link>https://dev.to/kavishsanghvi/fauna-image-classification-using-convolutional-neural-network-45cp</link>
      <guid>https://dev.to/kavishsanghvi/fauna-image-classification-using-convolutional-neural-network-45cp</guid>
      <description>&lt;h2&gt;
  
  
  Aim
&lt;/h2&gt;

&lt;p&gt;The aim of the project was to develop an animal image classifier for dense forest environments that achieves the desired accuracy, and to aid ecologists and researchers in the neural network, artificial intelligence, and zoological domains in further studying and/or improving habitat, environmental, and extinction patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  How I built it
&lt;/h2&gt;

&lt;p&gt;We present a methodology for the classification of fauna images that will help ecologists and scientists further study and/or improve habitat, environmental, and extinction patterns. We used a Convolutional Neural Network with the Leaky ReLU activation function and the VGG16 architecture for our model. The initial step creates features with the VGG16 model. Applying image processing while loading, training, testing, and validating the dataset before the training step helps remove noise, obstacles, distortion, and dirt from the images. The next step trains the Convolutional Neural Network with Leaky ReLU to accurately and precisely classify the animal classes. Leaky ReLU avoids the problem of Dying ReLU, in which some ReLU neurons essentially die, remaining inactive no matter what input is supplied; no gradient flows through them, and if a neural network contains a large number of dead neurons, its performance suffers. Leaky ReLU resolves this by changing the slope to a small nonzero value left of x=0, causing a "leak" that extends the range of ReLU. After training the model, we plot the training and validation accuracy and loss to gain insight into how well the model is trained; generally, the lower the loss, the higher the accuracy. The next step is to generate a classification report and a confusion matrix for exact details about how correctly the model classifies, since we cannot rely on accuracy alone. Lastly, we tested our model with sample data and found that it classified the samples accurately.&lt;/p&gt;
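&lt;p&gt;The difference between ReLU and Leaky ReLU described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the project’s actual model code, and the slope value of 0.01 is an assumed default:&lt;/p&gt;

```python
import numpy as np

def relu(x):
    # Standard ReLU: negative inputs are clamped to 0, so both the
    # output and the gradient are zero there (the "Dying ReLU" risk).
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: a small slope alpha left of x=0 keeps a nonzero
    # gradient flowing, so neurons cannot permanently "die".
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))        # [0.    0.   0.   0.5  2.  ]
print(leaky_relu(x))  # [-0.02  -0.005  0.  0.5  2.  ]
```

&lt;p&gt;For negative inputs, ReLU returns exactly zero while Leaky ReLU returns a small scaled value, which is precisely what prevents dead neurons during backpropagation.&lt;/p&gt;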

&lt;h2&gt;
  
  
  Dataset
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://www.kaggle.com/alessiocorrado99/animals10" rel="noopener noreferrer"&gt;Animal-10 dataset&lt;/a&gt; contains 26,179 hand-picked images of animals such as Butterfly, Cat, Chicken, Cow, Dog, Elephant, Horse, Sheep, Spider, and Squirrel. The image count for each category varies from 2,000 to 5,000 images. The dataset was made available open-source through Kaggle.&lt;/p&gt;

&lt;h2&gt;
  
  
  Research Publication
&lt;/h2&gt;

&lt;p&gt;The research paper &lt;a href="http://sersc.org/journals/index.php/IJFGCN/article/view/17733" rel="noopener noreferrer"&gt;"Fauna Image Classification using Convolutional Neural Network"&lt;/a&gt; is published by Science and Engineering Research Support soCiety in the International Journal of Future Generation Communication and Networking, indexed by Web of Science, J-Gate, Directory of Open Access Journals, and ProQuest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Link to Code
&lt;/h2&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/kavishsanghvi" rel="noopener noreferrer"&gt;
        kavishsanghvi
      &lt;/a&gt; / &lt;a href="https://github.com/kavishsanghvi/fauna-image-classification-using-convolutional-neural-network" rel="noopener noreferrer"&gt;
        fauna-image-classification-using-convolutional-neural-network
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Convolutional neural network for classification of animal images on Animal-10 dataset
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Fauna Image Classification using Convolutional Neural Network&lt;/h1&gt;
&lt;/div&gt;
&lt;p&gt;Convolutional neural network for classification of animal images from Animal-10 dataset&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Aim&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;Aim of the project was to develop an animal image classifier in dense forest environments to achieve desired accuracy, and aid ecologists and researchers in neural network/Artificial Intelligence &amp;amp; zoological domains to further study and/or improve habitat, environmental and extinction patterns.&lt;/p&gt;
&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Method&lt;/h2&gt;
&lt;/div&gt;
&lt;p&gt;We present a methodology for the classification of fauna images, which will help ecologist and scientists to further study and/or improve habitat, environmental and extinction patterns. We have used Convolutional Neural Network with Leaky ReLU activation function and VGG16 architecture for our model. The initial step taken by the system aims at creation of features with VGG16 model. Application of Image Processing along with Loading, Testing, Training, and Validating the dataset before the training step helps to remove the noise, obstacles, distortion and dirt from the images. The next…&lt;/p&gt;
&lt;/div&gt;
  &lt;/div&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/kavishsanghvi/fauna-image-classification-using-convolutional-neural-network" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;


&lt;h2&gt;
  
  
  Contact
&lt;/h2&gt;

&lt;p&gt;&lt;a href="mailto:info@kavishsanghvi.tech"&gt;Email&lt;/a&gt; | &lt;a href="https://www.linkedin.com/in/kavishsanghvi" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; | &lt;a href="https://kavishsanghvi.tech" rel="noopener noreferrer"&gt;Website&lt;/a&gt; | &lt;a href="https://www.medium.com/@kavishsanghvi" rel="noopener noreferrer"&gt;Medium&lt;/a&gt; | &lt;a href="https://twitter.com/kavishsanghvi25" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; | &lt;a href="https://www.facebook.com/kavish.sanghvi.5" rel="noopener noreferrer"&gt;Facebook&lt;/a&gt; | &lt;a href="https://www.instagram.com/kavishsanghvi96" rel="noopener noreferrer"&gt;Instagram&lt;/a&gt;&lt;/p&gt;

</description>
      <category>octograd2020</category>
    </item>
  </channel>
</rss>
