<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Karim Elkobrossy</title>
    <description>The latest articles on DEV Community by Karim Elkobrossy (@complex).</description>
    <link>https://dev.to/complex</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F720855%2Fa17ab76e-709a-4e73-b95c-67470a474ae6.jpg</url>
      <title>DEV Community: Karim Elkobrossy</title>
      <link>https://dev.to/complex</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/complex"/>
    <language>en</language>
    <item>
      <title>Amazon SageMaker GroundTruth (Object Detection)</title>
      <dc:creator>Karim Elkobrossy</dc:creator>
      <pubDate>Sat, 23 Apr 2022 14:32:03 +0000</pubDate>
      <link>https://dev.to/aws-builders/amazon-sagemaker-groundtruth-object-detection-2am5</link>
      <guid>https://dev.to/aws-builders/amazon-sagemaker-groundtruth-object-detection-2am5</guid>
      <description>&lt;p&gt;Now let’s talk more about the object detection algorithm and we will begin with the bees dataset. Some of the images in the dataset, that we will be working on, is licensed under the CC0 license and are provided on this website. &lt;/p&gt;

&lt;p&gt;Object detection is a &lt;strong&gt;supervised learning algorithm&lt;/strong&gt;: it expects images and their annotations as input. This dataset contains images of bees, but we still need to provide annotations for these images in the form of bounding boxes. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Annotated image Example:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--BzGbA1wK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/um8r2vc9vrnq9vektlk9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--BzGbA1wK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/um8r2vc9vrnq9vektlk9.png" alt="Image description" width="872" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Private labeling workforce:&lt;/strong&gt;&lt;br&gt;
We will take an example of how to use a private labeling workforce to label your dataset.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Search for Amazon SageMaker in the search bar.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aBuC4VSV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3k4a5dqa6fnp0iw4l8b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aBuC4VSV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3k4a5dqa6fnp0iw4l8b.png" alt="Image description" width="880" height="510"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Choose Labelling jobs under Amazon SageMaker’s Ground Truth, then press &lt;strong&gt;Create labelling job&lt;/strong&gt;.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--dpXhEko_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z1min4br6c4o9ngy2wi6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--dpXhEko_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z1min4br6c4o9ngy2wi6.png" alt="Image description" width="880" height="427"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; You need to &lt;strong&gt;create a new S3 bucket&lt;/strong&gt;; we will call ours “bees-dataset-bucket”.&lt;/p&gt;
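
&lt;p&gt;If you prefer, the bucket can also be created from code instead of the console. A small sketch (the bucket name is the one used in this walkthrough; the actual boto3 call is left commented out so the snippet has no side effects):&lt;/p&gt;

```python
import re

def is_valid_bucket_name(name):
    """Rough check against the basic S3 bucket naming rules:
    3 to 63 characters, lowercase letters, digits, dots and hyphens,
    starting and ending with a letter or digit."""
    if len(name) not in range(3, 64):
        return False
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name) is not None

bucket = "bees-dataset-bucket"
assert is_valid_bucket_name(bucket)

# With AWS credentials configured, the bucket could then be created with boto3:
# import boto3
# boto3.client("s3").create_bucket(Bucket=bucket)
```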




&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Name the labelling job and choose either Automated data setup (if your images sit in an S3 folder, SageMaker creates the input manifest file for you) or Manual data setup (you provide the input manifest file yourself). We are going to use the &lt;strong&gt;Automated data setup&lt;/strong&gt; option; our data resides in bees-dataset-bucket/input_dataset/ . Specify the &lt;strong&gt;output location&lt;/strong&gt; and choose the data type “Image”.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TOvQr3Sb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/brssxtgm2vhpo89lnnpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TOvQr3Sb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/brssxtgm2vhpo89lnnpq.png" alt="Image description" width="880" height="701"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;Example of the input manifest file created automatically by SageMaker:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2j6TdsIg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/boe8ie69rh2fqztj3knp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2j6TdsIg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/boe8ie69rh2fqztj3knp.png" alt="Image description" width="811" height="135"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; We will create a new IAM role that grants the labelling job access to our “bees-dataset-bucket”.&lt;br&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2K_Mo32m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/skj28mo6sk832irwqz7a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2K_Mo32m--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/skj28mo6sk832irwqz7a.png" alt="Image description" width="880" height="706"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 6:&lt;/strong&gt; Press Complete data setup to create the input manifest file from the dataset. For illustration, we will choose &lt;strong&gt;Random sample&lt;/strong&gt; to randomly sample 1% of the data; since the dataset contains 500 images, only 5 images will be sampled. &lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qOrd3x8i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x39i0bgvh7qau4vgo7ut.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qOrd3x8i--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x39i0bgvh7qau4vgo7ut.png" alt="Image description" width="880" height="961"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 7:&lt;/strong&gt; Choose the &lt;strong&gt;Bounding box&lt;/strong&gt; task type.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2RtpZkeb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xish0opocxgat59gf0dc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2RtpZkeb--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xish0opocxgat59gf0dc.png" alt="Image description" width="880" height="921"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; Choose &lt;strong&gt;Private&lt;/strong&gt; under Worker types, then fill in the team name, your employees’ email addresses, the organization name, and a contact email.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cfjWvN2q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0q7y3lfajgz67nzkhvad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cfjWvN2q--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0q7y3lfajgz67nzkhvad.png" alt="Image description" width="880" height="846"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Cb-DSOWD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/064pqfpi17tsdmxhq625.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Cb-DSOWD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/064pqfpi17tsdmxhq625.png" alt="Image description" width="880" height="924"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 9:&lt;/strong&gt; Describe the labelling task in detail and add a good and a bad example for clarification, then click Create.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--XLuOie7w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zqzla3ukhgwaz07qcars.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--XLuOie7w--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zqzla3ukhgwaz07qcars.png" alt="Image description" width="880" height="784"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 10:&lt;/strong&gt; Your employees will then receive this email:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TsaGMZsD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oy15yczl5lllopnxtimt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TsaGMZsD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oy15yczl5lllopnxtimt.png" alt="Image description" width="880" height="461"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step 11:&lt;/strong&gt; Each employee will then perform this task:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7A_mzwrE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ufotqtijv2cgquo2k0jp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7A_mzwrE--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ufotqtijv2cgquo2k0jp.png" alt="Image description" width="800" height="453"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Resources:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon SageMaker Ground Truth:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management-public.html"&gt;https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management-public.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management-vendor.html"&gt;https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-management-vendor.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-private.html"&gt;https://docs.aws.amazon.com/sagemaker/latest/dg/sms-workforce-private.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/machine-learning/create-high-quality-instructions-for-amazon-sagemaker-ground-truth-labeling-jobs/"&gt;https://aws.amazon.com/blogs/machine-learning/create-high-quality-instructions-for-amazon-sagemaker-ground-truth-labeling-jobs/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Amazon SageMaker GroundTruth</title>
      <dc:creator>Karim Elkobrossy</dc:creator>
      <pubDate>Wed, 20 Apr 2022 22:04:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/amazon-sagemaker-groundtruth-ff7</link>
      <guid>https://dev.to/aws-builders/amazon-sagemaker-groundtruth-ff7</guid>
      <description>&lt;p&gt;We will review possible methods provided by &lt;strong&gt;Amazon GroundTruth&lt;/strong&gt; to label our data. &lt;strong&gt;Amazon GroundTruth&lt;/strong&gt; is a service within Amazon Sagemaker that labels datasets for further use in building machine learning models. Three options are available when using this service: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Mechanical Turk&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Private labelling workforce&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Vendor&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The &lt;strong&gt;Mechanical Turk workforce&lt;/strong&gt; is a team of global, on-demand workers from Amazon who work around the clock on labelling and human review tasks. Your data should be free of any personally identifiable information (PII), as this is a public workforce. Use this workforce when you want to save time on labelling work that anyone could do and your data contains no PII.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;private labelling workforce&lt;/strong&gt; is a team of workers whom you choose yourself. They could be employees of your company or a group of subject matter experts; for example, you might have a dataset of X-ray images and want specialists to classify whether each image shows a certain disease. A private workforce is also the right choice when your data contains PII.&lt;/p&gt;
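
&lt;p&gt;A private work team can also be created through the API. A sketch of the request that boto3’s create_workteam expects when the team is backed by an Amazon Cognito user pool (every identifier below is a placeholder, not a real resource):&lt;/p&gt;

```python
# Request shape for sagemaker.create_workteam with a Cognito-backed team.
# Every identifier below is a placeholder; substitute your own pool, client
# and group identifiers.
workteam_request = {
    "WorkteamName": "bees-labeling-team",
    "Description": "Private workforce for the bees dataset",
    "MemberDefinitions": [
        {
            "CognitoMemberDefinition": {
                "UserPool": "us-east-1_EXAMPLE",
                "UserGroup": "bees-labelers",
                "ClientId": "EXAMPLECLIENTID",
            }
        }
    ],
}

# With AWS credentials configured:
# import boto3
# boto3.client("sagemaker").create_workteam(**workteam_request)
```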

&lt;p&gt;The &lt;strong&gt;vendor workforce&lt;/strong&gt; is a selection of experienced vendors who specialize in providing data labelling services. They can be found in the AWS Marketplace.&lt;/p&gt;

&lt;p&gt;Let us now take a look at the different types of labelling jobs available for the &lt;strong&gt;image&lt;/strong&gt; data type:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Images&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Image classification (Single label)
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F7D77OGp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/svfdfs8y2uk27akyobyp.png" alt="Image description" width="311" height="207"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this task, the employees are categorising images into individual classes (1 class per image).&lt;br&gt;
In this example we are either choosing Basketball &lt;strong&gt;OR&lt;/strong&gt; Soccer as a label for this image.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Image classification (Multi-label)
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--etE44iGp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pxrdknnldiy619fuk5ns.png" alt="Image description" width="880" height="592"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this task, the employees are categorising images into one or more classes.&lt;br&gt;
In this example we are choosing &lt;strong&gt;ALL&lt;/strong&gt; labels present within the image.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Bounding box
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ujcHqqJQ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbyq3dfg7paqyr78i4jq.png" alt="Image description" width="311" height="207"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this task, the employees should draw bounding boxes around specified objects in the images.&lt;br&gt;
In this example we want to specify the location of the birds within the image by drawing bounding boxes that surround them.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Semantic segmentation
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vvXLIwXz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmo8pjuycofsq4h3pac8.png" alt="Image description" width="839" height="272"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this task, the employees should draw pixel-level labels around specific objects and segments in the image.&lt;br&gt;
In this example we are classifying &lt;strong&gt;EACH PIXEL&lt;/strong&gt; within the image. So you can see that the pixels of the plane are coloured in red and the rest are in black.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Label verification
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--PPCr1H_r--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k53hzod8penvujycckzu.png" alt="Image description" width="311" height="207"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;In this task, the employees should verify existing labels in the dataset. This could be used to check prior work by human workers or automated labeling jobs.&lt;br&gt;
In this example we want to verify the car's label as being correct or incorrect.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Computer Vision Examples</title>
      <dc:creator>Karim Elkobrossy</dc:creator>
      <pubDate>Wed, 20 Apr 2022 02:49:06 +0000</pubDate>
      <link>https://dev.to/aws-builders/computer-vision-examples-fid</link>
      <guid>https://dev.to/aws-builders/computer-vision-examples-fid</guid>
      <description>&lt;p&gt;&lt;strong&gt;What is Computer Vision?&lt;/strong&gt;&lt;br&gt;
“Artificial Intelligence” is the umbrella term covering “Machine Learning” and “Deep Learning”. Computer vision is a subset of machine learning that aims to understand features in an image and derive useful insights from it. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Comparison&lt;/strong&gt;&lt;br&gt;
This is where most people get confused, especially when computer vision technologies such as object detection, image classification and semantic segmentation are introduced. &lt;strong&gt;Image classification&lt;/strong&gt; is used when we would like to label the whole image with a single class or multiple classes. So, we could have classes such as “dog”, “cat” and “mouse”, and the goal is to look at an image and classify it as containing a dog, a cat, a mouse, or a mixture of those (multiple classes per image). &lt;strong&gt;Object detection&lt;/strong&gt; is slightly more difficult than image classification in that it must analyze an image, predict all the different classes present with a confidence score for each class, and draw a bounding box around each detected object. On the other hand, &lt;strong&gt;semantic segmentation&lt;/strong&gt; provides a much deeper view of the image than object detection: it classifies each pixel in the image into its corresponding class rather than just framing the object, so you learn more about the dimensions of the object.&lt;/p&gt;

&lt;p&gt;To clearly distinguish between the three algorithms, follow the flowchart below:&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G9RJ3qKv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ukhm84rgkc3z7ixkvah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G9RJ3qKv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ukhm84rgkc3z7ixkvah.png" alt="Image description" width="784" height="732"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
