<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Victor Uwaje</title>
    <description>The latest articles on DEV Community by Victor Uwaje (@victolonet).</description>
    <link>https://dev.to/victolonet</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F769483%2F7cdbce88-b981-4330-8bc8-b48bd83f96d8.jpg</url>
      <title>DEV Community: Victor Uwaje</title>
      <link>https://dev.to/victolonet</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/victolonet"/>
    <language>en</language>
    <item>
      <title>Introduction to Amazon Rekognition</title>
      <dc:creator>Victor Uwaje</dc:creator>
      <pubDate>Fri, 28 Jan 2022 19:12:31 +0000</pubDate>
      <link>https://dev.to/victolonet/introduction-to-amazon-rekognition-59e5</link>
      <guid>https://dev.to/victolonet/introduction-to-amazon-rekognition-59e5</guid>
      <description>&lt;p&gt;In this article, I would be discussing yet another wonderful feature of AWS which is Amazon Rekognition which has come to reduce the burden in computer vision tasks like object detection, image classification, etc.&lt;/p&gt;

&lt;p&gt;What really is Amazon Rekognition?&lt;/p&gt;

&lt;p&gt;Amazon Rekognition is one of the services introduced by Amazon that lets you add image and video analysis to your applications.&lt;/p&gt;

&lt;p&gt;You submit images and videos to the service, which analyzes them to identify objects, people, and other content in both the images and the videos.&lt;/p&gt;

&lt;p&gt;Amazon Rekognition is built on deep learning: convolutional neural networks trained on a large amount of labeled ground-truth images and videos. Training such a model from scratch is computationally expensive and requires a lot of computing resources and time; Amazon Rekognition takes these problems off you so that you can focus solely on the application side.&lt;/p&gt;

&lt;p&gt;The Amazon Rekognition API is divided into two main APIs:&lt;br&gt;
1) Image API&lt;br&gt;
2) Video API&lt;/p&gt;

&lt;p&gt;Amazon Rekognition operations can also be divided into two categories based on storage:&lt;br&gt;
1) Storage-based API operations&lt;br&gt;
2) Non-storage API operations&lt;/p&gt;

&lt;p&gt;Let us now discuss in detail the two APIs that we have in the Amazon Rekognition service.&lt;/p&gt;

&lt;p&gt;1) &lt;strong&gt;The Image Processing API&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The image processing APIs accept the input image either from an Amazon S3 bucket or as Base64-encoded image bytes, and they comprise the following operations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Facial Analysis:&lt;/strong&gt;&lt;br&gt;
The facial analysis API detects up to 100 of the largest faces in an input image and returns each face's composition and attributes. The returned list includes the emotional state of the face (for example, whether the person is smiling or angry), whether the individual is wearing glasses, and the estimated age range of the individual, each with its respective confidence score.&lt;/p&gt;
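&lt;p&gt;As a rough sketch of what calling this looks like with boto3 (the AWS SDK for Python): the helper below only builds the DetectFaces request parameters; the bucket and object key are placeholders, and the live call is shown in comments because it needs AWS credentials.&lt;/p&gt;

```python
def detect_faces_request(bucket, key):
    """Build DetectFaces parameters for an image stored in S3."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "Attributes": ["ALL"],  # also return emotions, glasses, age range, etc.
    }

# With boto3 (requires AWS credentials):
# import boto3
# rek = boto3.client("rekognition")
# faces = rek.detect_faces(**detect_faces_request("my-bucket", "portrait.jpg"))["FaceDetails"]
# for face in faces:
#     print(face["AgeRange"], face["Smile"], face["Confidence"])
```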

&lt;p&gt;&lt;strong&gt;Face Comparison:&lt;/strong&gt;&lt;br&gt;
The face comparison API determines whether the person in a source image is the same person found in a target image. It returns an ordered list of up to 100 of the largest faces detected in the target image that match the source face. The bounding boxes of the source and target faces are returned along with a similarity score for each match.&lt;/p&gt;
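&lt;p&gt;A minimal boto3 sketch of this operation, with placeholder bucket and key names (the live call is commented out since it needs AWS credentials):&lt;/p&gt;

```python
def compare_faces_request(src_bucket, src_key, tgt_bucket, tgt_key, threshold=90):
    """Build CompareFaces parameters; only matches at or above threshold are returned."""
    return {
        "SourceImage": {"S3Object": {"Bucket": src_bucket, "Name": src_key}},
        "TargetImage": {"S3Object": {"Bucket": tgt_bucket, "Name": tgt_key}},
        "SimilarityThreshold": threshold,
    }

# With boto3 (requires AWS credentials):
# import boto3
# rek = boto3.client("rekognition")
# resp = rek.compare_faces(**compare_faces_request("b", "id-photo.jpg", "b", "selfie.jpg"))
# for match in resp["FaceMatches"]:
#     print(match["Similarity"], match["Face"]["BoundingBox"])
```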

&lt;p&gt;&lt;strong&gt;Celebrity Detection:&lt;/strong&gt;&lt;br&gt;
The celebrity recognition API detects which celebrities appear in an input image. It returns the bounding box of each identified celebrity's face, the ID of the celebrity, the confidence score of the match, and URLs with extra information about the celebrity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Text Extraction:&lt;/strong&gt;&lt;br&gt;
The text extraction API lets you extract text from an image; for example, it can extract a person's information from an identity card or the details from a train ticket. It returns the detected text together with the geometry of the bounding box where the text was located in the image and the confidence score of each detection.&lt;/p&gt;
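&lt;p&gt;To illustrate the response shape, here is a small sketch that pulls the line-level detections out of a DetectText response; the sample response below is trimmed and made up for illustration.&lt;/p&gt;

```python
def extract_lines(detect_text_response):
    """Return (text, confidence) pairs for LINE-level detections only."""
    return [
        (d["DetectedText"], d["Confidence"])
        for d in detect_text_response["TextDetections"]
        if d["Type"] == "LINE"  # DetectText also returns WORD-level detections
    ]

# A trimmed, hypothetical response for illustration:
sample = {"TextDetections": [
    {"DetectedText": "TRAIN TICKET", "Type": "LINE", "Confidence": 99.1},
    {"DetectedText": "TRAIN", "Type": "WORD", "Confidence": 99.3},
]}
print(extract_lines(sample))  # [('TRAIN TICKET', 99.1)]
```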

&lt;p&gt;&lt;strong&gt;Content Moderation:&lt;/strong&gt;&lt;br&gt;
The content moderation API identifies whether an image contains explicit or inappropriate content, such as explicit nudity or suggestive material. The API returns the moderation labels found in the input image along with their confidence scores.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Feature Extraction:&lt;/strong&gt;&lt;br&gt;
The feature extraction API extracts the objects contained in an input image. The different objects are returned with their names and the confidence score of each identified object.&lt;/p&gt;
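&lt;p&gt;A boto3 sketch of this operation (bucket and key are placeholders; the live call, commented out, needs AWS credentials):&lt;/p&gt;

```python
def detect_labels_request(bucket, key, max_labels=10, min_confidence=75):
    """Build DetectLabels parameters for an image stored in S3."""
    return {
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "MaxLabels": max_labels,          # cap the number of objects returned
        "MinConfidence": min_confidence,  # drop low-confidence detections
    }

# With boto3 (requires AWS credentials):
# import boto3
# rek = boto3.client("rekognition")
# resp = rek.detect_labels(**detect_labels_request("my-bucket", "street.jpg"))
# for label in resp["Labels"]:
#     print(label["Name"], label["Confidence"])
```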

&lt;p&gt;2) &lt;strong&gt;Video Processing API&lt;/strong&gt;&lt;br&gt;
The video processing API processes videos stored in an S3 bucket. Because of the computing resources involved in processing video, it works asynchronously: you start a job and retrieve the results once the job completes. Below are the various operations involved in processing videos with Amazon Rekognition:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Person Tracking:&lt;/strong&gt;&lt;br&gt;
The person tracking API tracks people through a video. The asynchronous flow first calls StartPersonTracking, which initiates the job and sends a notification when it is done; GetPersonTracking then uses the job ID to retrieve the tracked people, including the facial features of each individual, the confidence scores, and the bounding box of each detected person at each point in time within the video.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Face Detection:&lt;/strong&gt;&lt;br&gt;
The face detection API detects faces in a video. The asynchronous flow first calls StartFaceDetection, which initiates the job and sends a notification when it is done; GetFaceDetection then uses the job ID to retrieve the detected faces, including the facial features of each individual, the confidence scores, and the bounding box of each detected face at each point in time within the video.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Label Detection:&lt;/strong&gt;&lt;br&gt;
The label detection API detects objects in a video. The asynchronous flow first calls StartLabelDetection, which initiates the job and sends a notification when it is done; GetLabelDetection then uses the job ID to retrieve the detected objects, including the name of each object, its bounding boxes, and the confidence score at each point in time within the video.&lt;/p&gt;
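&lt;p&gt;The start-then-get pattern shared by all of these video operations can be sketched in boto3 as below; the polling loop is a simplification (production code would normally wait for the SNS completion notification instead), and the bucket and file names are placeholders.&lt;/p&gt;

```python
import time

def wait_for_labels(rek, job_id, delay=5):
    """Poll GetLabelDetection until the asynchronous job leaves IN_PROGRESS."""
    while True:
        resp = rek.get_label_detection(JobId=job_id)
        if resp["JobStatus"] != "IN_PROGRESS":
            return resp
        time.sleep(delay)

# With boto3 (requires AWS credentials):
# import boto3
# rek = boto3.client("rekognition")
# job = rek.start_label_detection(
#     Video={"S3Object": {"Bucket": "my-bucket", "Name": "clip.mp4"}})
# resp = wait_for_labels(rek, job["JobId"])
# for item in resp["Labels"]:
#     print(item["Timestamp"], item["Label"]["Name"], item["Label"]["Confidence"])
```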

&lt;p&gt;&lt;strong&gt;Celebrity Detection:&lt;/strong&gt;&lt;br&gt;
The celebrity recognition API detects celebrities in a video. The asynchronous flow first calls StartCelebrityRecognition, which initiates the job and sends a notification when it is done; GetCelebrityRecognition then uses the job ID to retrieve the detected celebrities, along with the confidence score and bounding box at each point in time within the video.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Moderation Detection:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The moderation detection API detects explicit or inappropriate content in an input video. The asynchronous flow first calls StartContentModeration, which initiates the job and sends a notification when it is done; GetContentModeration then uses the job ID to retrieve the inappropriate content found, along with the confidence score at each point in time within the video.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>programming</category>
    </item>
    <item>
      <title>Understanding AWS Identity and Access Management (IAM)</title>
      <dc:creator>Victor Uwaje</dc:creator>
      <pubDate>Thu, 30 Dec 2021 14:40:23 +0000</pubDate>
      <link>https://dev.to/victolonet/understanding-aws-identity-and-access-management-iam-4apn</link>
      <guid>https://dev.to/victolonet/understanding-aws-identity-and-access-management-iam-4apn</guid>
      <description>&lt;p&gt;In this article, I would be writing about what the AWS IAM really is. The IAM stands for Identity and access management and consist of two parts:&lt;/p&gt;

&lt;p&gt;1) Identity Management: this is the process by which a user presents authenticated credentials to access the AWS account; in summary, you can call it the AWS account login details. It usually comprises a username and password, multi-factor authentication (MFA), or federated access to the AWS account.&lt;/p&gt;

&lt;p&gt;2) Access Management: this is the authorization that determines which AWS resources, such as S3 buckets, EC2 instances, RDS, etc., a user who has been authenticated and logged into the AWS account may access.&lt;/p&gt;

&lt;p&gt;The components of Identity and Access Management are:&lt;br&gt;
1) Users: the different people in your organization who will access your AWS services&lt;br&gt;
2) Groups: collections of users defined by similar job specifications or roles&lt;br&gt;
3) Roles: objects that grant an AWS resource specific temporary permissions&lt;br&gt;
4) Policies: sets of control instructions, usually in JSON format, that determine whether a user, group, or role is allowed to use a resource.&lt;/p&gt;

&lt;p&gt;It should be duly noted that it is best practice to assign permissions to groups rather than to individual users, as this makes work easier and reduces the level of administrative complexity. So the steps should be as below:&lt;/p&gt;

&lt;p&gt;1) Create a group&lt;br&gt;
2) Attach permission to the group&lt;br&gt;
3) Create a new user&lt;br&gt;
4) Assign the user to the group (the user automatically inherits the permissions from the group)&lt;br&gt;
5) Set up a new service role&lt;br&gt;
6) Apply the created service role to the AWS resource so the permissions can be utilized by the resource&lt;/p&gt;
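&lt;p&gt;Steps 1 through 4 above can be sketched with boto3; the group name, user name, and policy ARN below are placeholders, and the live loop is commented out because it needs AWS credentials.&lt;/p&gt;

```python
def group_setup_calls(group, user, policy_arn):
    """Return the ordered (operation, parameters) pairs for the group-based setup."""
    return [
        ("create_group", {"GroupName": group}),
        ("attach_group_policy", {"GroupName": group, "PolicyArn": policy_arn}),
        ("create_user", {"UserName": user}),
        ("add_user_to_group", {"GroupName": group, "UserName": user}),
    ]

# With boto3 (requires AWS credentials):
# import boto3
# iam = boto3.client("iam")
# for op, params in group_setup_calls(
#         "data-team", "alice", "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"):
#     getattr(iam, op)(**params)
```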

&lt;p&gt;I would also like to discuss and expatiate further on the policy part of IAM. There are two types of policies:&lt;/p&gt;

&lt;p&gt;1) Managed policies: these can be subdivided into two parts:&lt;/p&gt;

&lt;p&gt;a) AWS managed policies: policies that have been predefined by AWS&lt;br&gt;
b) Customer managed policies: policies created by the user, either by editing an existing AWS policy, using the policy generator, or writing a customized policy in JSON format&lt;/p&gt;

&lt;p&gt;2) Inline policies: policies that can only be embedded in a specific user, group, or role.&lt;/p&gt;

&lt;p&gt;Note that the difference between managed and inline policies is that managed policies can be attached to multiple users, groups, or roles, whereas an inline policy is attached to only a single user, group, or role.&lt;/p&gt;
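&lt;p&gt;To make the distinction concrete, here is a sketch of a minimal JSON policy document and how it could be embedded as an inline policy with boto3; the user, policy, and bucket names are placeholders.&lt;/p&gt;

```python
import json

def s3_read_policy(bucket):
    """A minimal policy document allowing read access to one S3 bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        }],
    }

# Embed it as an inline policy on a single user (a managed policy would use
# create_policy plus attach_user_policy instead). Requires AWS credentials:
# import boto3
# boto3.client("iam").put_user_policy(
#     UserName="alice", PolicyName="s3-read",
#     PolicyDocument=json.dumps(s3_read_policy("my-bucket")))
```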

&lt;p&gt;I made mention of MFA above, but what really is MFA? It is an acronym for Multi-Factor Authentication, and it is usually required as a security measure for specific users who have access to substantial AWS resources.&lt;/p&gt;

&lt;p&gt;Lastly, before I bring this article to a conclusion, I would like to discuss what Identity Federation is. Identity Federation allows users to access AWS resources without having IAM credentials. To use this functionality, a trust relationship is established between the identity provider (which performs the authentication of the user) and the AWS account. Some notable identity providers are Microsoft Active Directory, OpenID, etc. Identity Federation has the advantage of reducing the number of IAM users that need to be created.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>machinelearning</category>
      <category>programming</category>
    </item>
    <item>
      <title>Introduction to Amazon Lex</title>
      <dc:creator>Victor Uwaje</dc:creator>
      <pubDate>Wed, 29 Dec 2021 15:30:48 +0000</pubDate>
      <link>https://dev.to/victolonet/introduction-to-amazon-lex-1hm0</link>
      <guid>https://dev.to/victolonet/introduction-to-amazon-lex-1hm0</guid>
      <description>&lt;p&gt;Amazon Lex is a wonderful addition by Amazon, which helps developers to build high standard conversational interfaces which can be integrated into either a new or existing application in a seamless manner.&lt;/p&gt;

&lt;p&gt;With Amazon Lex, your customers can interact seamlessly with your applications using either voice or text commands, for example through a chatbot.&lt;/p&gt;

&lt;p&gt;Amazon Lex uses the same deep learning technology that powers Amazon Alexa: automatic speech recognition and natural language understanding.&lt;/p&gt;

&lt;p&gt;Before going further into what Amazon Lex is, I would like to briefly explain what a chatbot is. A chatbot is a program that simulates conversation like a human.&lt;/p&gt;

&lt;p&gt;Why use Amazon Lex rather than a self-trained natural language algorithm?&lt;br&gt;
1) It is simple to integrate into either new or existing applications&lt;br&gt;
2) The cost-to-effort ratio is quite reasonable, as it is a pay-as-you-go model&lt;br&gt;
3) It processes both text and speech seamlessly&lt;br&gt;
4) It is easy to deploy to your applications and use&lt;br&gt;
5) It is highly scalable&lt;br&gt;
6) You can easily integrate it with other services on the AWS platform&lt;/p&gt;
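&lt;p&gt;As a sketch of point 1, sending a text utterance to a bot through the Lex (V1) runtime with boto3 looks roughly like this; the bot name, alias, and user ID are placeholders, and the live call needs AWS credentials and a deployed bot.&lt;/p&gt;

```python
def post_text_request(bot, alias, user_id, text):
    """Build PostText parameters for the Lex V1 runtime."""
    return {
        "botName": bot,
        "botAlias": alias,
        "userId": user_id,  # keeps a per-user conversation session
        "inputText": text,
    }

# With boto3 (requires AWS credentials and a deployed bot):
# import boto3
# lex = boto3.client("lex-runtime")
# resp = lex.post_text(**post_text_request(
#     "OrderFlowers", "prod", "user-42", "I would like to order flowers"))
# print(resp["message"])  # the bot's reply
```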

&lt;p&gt;It is worth noting that all network connections to Amazon Lex use HTTPS only, which ensures your conversations are encrypted in transit.&lt;/p&gt;

&lt;p&gt;I mentioned above that Amazon Lex is affordable in terms of cost; let's look at the Amazon Lex pricing model below:&lt;/p&gt;

&lt;p&gt;-- Each voice request costs $0.004: so if your chatbot processes about 5,000 voice requests per month, you should expect to pay 5,000 * 0.004 = $20 monthly&lt;br&gt;
-- Each text request costs $0.00075: so if your chatbot processes about 5,000 text requests per month, you should expect to pay 5,000 * 0.00075 = $3.75 monthly&lt;/p&gt;
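&lt;p&gt;The arithmetic above can be captured in a few lines of Python (prices as quoted; your actual bill may differ):&lt;/p&gt;

```python
VOICE_PRICE = 0.004    # USD per voice request
TEXT_PRICE = 0.00075   # USD per text request

def monthly_cost(voice_requests, text_requests):
    """Estimated monthly Amazon Lex bill in USD at the prices quoted above."""
    return voice_requests * VOICE_PRICE + text_requests * TEXT_PRICE

print(round(monthly_cost(5000, 0), 2))  # 20.0, the voice example above
print(round(monthly_cost(0, 5000), 2))  # 3.75, the text example above
```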

&lt;p&gt;With Amazon Lex, your monthly bill is calculated from the total number of requests your application processes each month.&lt;/p&gt;

&lt;p&gt;There is also a free tier that allows you to process 10,000 text requests and 5,000 voice requests per month free for the first year, so you can test its functionality within your application.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>datascience</category>
      <category>cloud</category>
      <category>programming</category>
    </item>
    <item>
      <title>Introduction to Amazon SageMaker</title>
      <dc:creator>Victor Uwaje</dc:creator>
      <pubDate>Tue, 28 Dec 2021 11:31:07 +0000</pubDate>
      <link>https://dev.to/victolonet/introduction-to-amazon-sagemaker-do7</link>
      <guid>https://dev.to/victolonet/introduction-to-amazon-sagemaker-do7</guid>
      <description>&lt;p&gt;In this post, I am going to be writing about the Aws Sagemaker as a general overview. I have found Sagemaker to be very useful in my day-to-day machine learning life cycle, that is why I decided to write about it. &lt;br&gt;
AWS Sagemaker is an end-to-end fully managed service that helps machine learning engineers or data scientists to build, train and deploy machine learning models on the cloud with less stress.&lt;/p&gt;

&lt;p&gt;AWS SageMaker is part of the AWS platform services and is a form of platform as a service (PaaS). It provides the tools to carry out an end-to-end machine learning lifecycle, but it still requires you to have enough knowledge about machine learning to obtain good results through model fine-tuning.&lt;/p&gt;

&lt;p&gt;As a machine learning practitioner, you know that the machine learning lifecycle consists of the following:&lt;br&gt;
1) Data Preparation: extracting, transforming, and loading your input data&lt;br&gt;
2) Model Building: using cross-validation methods to select the right model for your use case&lt;br&gt;
3) Model Training and Fine-Tuning: training, debugging, and fine-tuning your models for better results&lt;br&gt;
4) Model Deployment and Monitoring: deploying your trained models to production and monitoring how the model performs in production.&lt;/p&gt;

&lt;p&gt;So how does AWS SageMaker fit into the machine learning lifecycle above?&lt;br&gt;
&lt;strong&gt;Data Preparation&lt;/strong&gt;: you can use S3 buckets for collecting and storing the data, and the ETL process can be applied to the data stored in the bucket. The &lt;strong&gt;Ground Truth&lt;/strong&gt; section of SageMaker lets you annotate your input dataset using labeling workflows, and it can also provide a workforce for labeling data. It offers ground-truth labeling for a variety of input data, which can speed up the data preparation stage, often presumed to be the most tedious stage in the machine learning process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model Building and Training&lt;/strong&gt;: the SageMaker notebooks and the training functionality are both used for model building. The choice of which to use depends on the size of the input data. (I will write a more detailed, practical article on using either the notebooks or the training functionality to build and train your ML models.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model Deployment and Monitoring&lt;/strong&gt;: the Inference section of SageMaker is responsible for the deployment of your trained models. First, you create an endpoint that can be used to call the model over HTTP or HTTPS.&lt;/p&gt;
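&lt;p&gt;Calling a deployed endpoint can be sketched with boto3 as below; the endpoint name and CSV payload are placeholders, and the live call needs AWS credentials and a deployed model.&lt;/p&gt;

```python
def invoke_request(endpoint, csv_row):
    """Build InvokeEndpoint parameters for a CSV-serialized feature row."""
    return {
        "EndpointName": endpoint,
        "ContentType": "text/csv",
        "Body": csv_row,  # e.g. one row of features: "5.1,3.5,1.4,0.2"
    }

# With boto3 (requires AWS credentials and a deployed endpoint):
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# resp = runtime.invoke_endpoint(**invoke_request("my-model-endpoint", "5.1,3.5,1.4,0.2"))
# print(resp["Body"].read())  # the model's prediction
```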

</description>
      <category>aws</category>
      <category>machinelearning</category>
      <category>datascience</category>
      <category>cloud</category>
    </item>
  </channel>
</rss>
