<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tina</title>
    <description>The latest articles on DEV Community by Tina (@tinazhouhui).</description>
    <link>https://dev.to/tinazhouhui</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F429838%2Fc596f081-90fa-4692-873e-6b92e30fcc6a.png</url>
      <title>DEV Community: Tina</title>
      <link>https://dev.to/tinazhouhui</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tinazhouhui"/>
    <language>en</language>
    <item>
      <title>How to set up a custom email domain through SES and SNS using (mostly) CloudFormation</title>
      <dc:creator>Tina</dc:creator>
      <pubDate>Wed, 11 Jan 2023 19:18:30 +0000</pubDate>
      <link>https://dev.to/tinazhouhui/how-to-set-up-a-custom-email-domain-through-ses-and-sns-using-mostly-cloudformation-4ol1</link>
      <guid>https://dev.to/tinazhouhui/how-to-set-up-a-custom-email-domain-through-ses-and-sns-using-mostly-cloudformation-4ol1</guid>
      <description>&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have an AWS account&lt;/li&gt;
&lt;li&gt;Have a domain&lt;/li&gt;
&lt;li&gt;Some familiarity with CloudFormation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For one of my pet projects, &lt;a href="//www.startbite.me"&gt;BiteMe (an online multiplayer snake game)&lt;/a&gt;, I wanted a custom email domain for receiving emails without setting one up through a paid service (I checked: Google Workspace charges &lt;a href="https://workspace.google.com/intl/en/pricing.html?v&amp;amp;gclid=CjwKCAiAqt-dBhBcEiwATw-ggOK4kcHj0hSrU2uSZ4corfGWbx6txcgiEFmUziaMZpBw96sTpq-KjBoCKuIQAvD_BwE&amp;amp;gclsrc=aw.ds" rel="noopener noreferrer"&gt;$6/month&lt;/a&gt;). Since that is more than my whole AWS bill, I knew there had to be another option. &lt;/p&gt;

&lt;p&gt;This article shows you how to set up a proof of concept with AWS Simple Email Service and AWS Simple Notification Service, provisioned through CloudFormation (apart from two manual steps). The result is a custom email domain that is virtually free, and you get to learn SES and SNS along the way. &lt;/p&gt;

&lt;h3&gt;
  
  
  Some terminology
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Simple Email Service (SES)&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/ses/" rel="noopener noreferrer"&gt;SES&lt;/a&gt; is mostly known for sending emails but for 3 regions (us-west-2, us-east-1, eu-west-1) it can also receive emails. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhotg3ifplwns06aaa91m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhotg3ifplwns06aaa91m.png" alt="SES email receiving regions"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simple Notification Service (SNS)&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/sns/" rel="noopener noreferrer"&gt;SNS&lt;/a&gt; is a service that can send notifications either to other applications or to end user through email, sms or a push notification.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CloudFormation (CF)&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/cloudformation/" rel="noopener noreferrer"&gt;CloudFormation &lt;/a&gt; is an IaaC service provided by AWS. It allows for resource provisioning through a yaml or json file. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tip💡: Use the CloudFormation service in AWS console to check your deployment status. Deploy to AWS step by step to isolate errors. Always check that the resource has been created in the service.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The steps are:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create an email identity&lt;/li&gt;
&lt;li&gt;Verify that the domain we want to use is indeed ours&lt;/li&gt;
&lt;li&gt;Set up an MX record to receive emails &lt;/li&gt;
&lt;li&gt;Create an SNS Topic&lt;/li&gt;
&lt;li&gt;Create a rule that says what to do with an email once it's received&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To see how to do it in a console, &lt;a href="https://docs.aws.amazon.com/ses/latest/dg/receiving-email.html" rel="noopener noreferrer"&gt;follow this link&lt;/a&gt;. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Yes, using the console might be easier, but you lose so many benefits: versioning, code reviews, CI/CD integration, the ability to share the setup with someone, and so on.&lt;/p&gt;
&lt;/blockquote&gt;
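&lt;p&gt;For orientation, here is a sketch of how the final template fits together. The resource names match the snippets used throughout this article; the &lt;code&gt;Properties&lt;/code&gt; are omitted here and filled in step by step, so this outline is not deployable as-is:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;Resources:
  EmailSES:       # step 1: the email identity (SES)
    Type: AWS::SES::EmailIdentity
  WebsiteDNS:     # steps 2-3: DKIM CNAME records plus the MX record (Route53)
    Type: AWS::Route53::RecordSetGroup
  EmailSNS:       # step 4: notification topic with an email subscription (SNS)
    Type: AWS::SNS::Topic
  EmailRuleSet:   # step 5: receipt rule set (SES)
    Type: AWS::SES::ReceiptRuleSet
  EmailRule:      # step 5: rule that publishes received emails to the topic (SES)
    Type: AWS::SES::ReceiptRule
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;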

&lt;h3&gt;
  
  
  1. Create an email Identity (SES)
&lt;/h3&gt;

&lt;p&gt;First of all, we need to create the Email Identity, which is essentially the @yourdomain.com part of your addresses:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;EmailSES&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::SES::EmailIdentity&lt;/span&gt;
 &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;EmailIdentity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;yourdomain.com'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once deployed, check that an email identity was created in SES.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Verify the domain (DNS provider, Route53)
&lt;/h3&gt;

&lt;p&gt;To verify the domain, we need to add DNS records at our DNS provider. If your DNS provider is Route53, you can use &lt;code&gt;Fn::GetAtt&lt;/code&gt; to retrieve the three records and add them directly to Route53:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Note that the &lt;code&gt;EmailSES&lt;/code&gt; is how we named our Email Identity in the previous step.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;WebsiteDNS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Route53::RecordSetGroup&lt;/span&gt;
 &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;HostedZoneId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;YOURHOSTEDZONEID&lt;/span&gt;
   &lt;span class="na"&gt;RecordSets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;Fn::GetAtt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;EmailSES&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;DkimDNSTokenName1&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
       &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CNAME&lt;/span&gt;
       &lt;span class="na"&gt;TTL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;600&lt;/span&gt;
       &lt;span class="na"&gt;ResourceRecords&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Fn::GetAtt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;EmailSES&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;DkimDNSTokenValue1&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;Fn::GetAtt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;EmailSES&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;DkimDNSTokenName2&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
       &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CNAME&lt;/span&gt;
       &lt;span class="na"&gt;TTL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;600&lt;/span&gt;
       &lt;span class="na"&gt;ResourceRecords&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Fn::GetAtt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;EmailSES&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;DkimDNSTokenValue2&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;Fn::GetAtt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;EmailSES&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;DkimDNSTokenName3&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;
       &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CNAME&lt;/span&gt;
       &lt;span class="na"&gt;TTL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;600&lt;/span&gt;
       &lt;span class="na"&gt;ResourceRecords&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Fn::GetAtt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt; &lt;span class="nv"&gt;EmailSES&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;DkimDNSTokenValue3&lt;/span&gt; &lt;span class="pi"&gt;]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once deployed, check that 3 CNAME records were created in Route53.&lt;/p&gt;

&lt;p&gt;If your provider is not Route53, copy the records from the created Email Identity and paste them into your provider's DNS settings.&lt;/p&gt;
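&lt;p&gt;For reference, the three DKIM records follow the pattern below (the token values here are made-up placeholders; copy the real names and values from the Email Identity page in the SES console):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;abc123token1._domainkey.yourdomain.com  CNAME  abc123token1.dkim.amazonses.com
abc123token2._domainkey.yourdomain.com  CNAME  abc123token2.dkim.amazonses.com
abc123token3._domainkey.yourdomain.com  CNAME  abc123token3.dkim.amazonses.com
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;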

&lt;p&gt;Wait until the Identity status turns to Verified (you should receive an email).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahe6c7tin4j9chpljftn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahe6c7tin4j9chpljftn.png" alt="verified email identities in SES"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Set up MX record (DNS Provider, Route53)
&lt;/h3&gt;

&lt;p&gt;This step sets up the email-receiving endpoint with your DNS provider. Since we use Route53, we can simply add another record after the three we created in step two. Note that the endpoint depends on the region you are in; &lt;a href="https://docs.aws.amazon.com/general/latest/gr/ses.html" rel="noopener noreferrer"&gt;see the docs for more info&lt;/a&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yourdomain.com&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MX&lt;/span&gt;
  &lt;span class="na"&gt;TTL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;600&lt;/span&gt;
  &lt;span class="na"&gt;ResourceRecords&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;10&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;inbound-smtp.{REGION}.amazonaws.com'&lt;/span&gt;  &lt;span class="c1"&gt;# this will depend on your region&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once deployed, check that the MX record was created in Route53 / your DNS provider.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Create an SNS Topic (SNS)
&lt;/h3&gt;

&lt;p&gt;So now SES knows that the domain is yours and that you can receive emails. However, we also need to tell it what should happen to the received emails. SES can trigger these actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Save email to an S3 bucket&lt;/li&gt;
&lt;li&gt;Invoke a lambda function&lt;/li&gt;
&lt;li&gt;Publish to an SNS topic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We want to publish to an SNS topic so that we get an email whenever a message arrives, so we need to create the topic. We also have to create a &lt;code&gt;subscription&lt;/code&gt; (in this case an email address) through which we will receive the notifications:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;EmailSNS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::SNS::Topic&lt;/span&gt;
 &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;Subscription&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Endpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;youremail@gmail.com'&lt;/span&gt; &lt;span class="c1"&gt;# the email to receive the notifications&lt;/span&gt;
       &lt;span class="na"&gt;Protocol&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;email&lt;/span&gt;
   &lt;span class="na"&gt;TopicName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;my-sns-topic-name'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once deployed, you can check that the SNS topic has been created. You will also see that the status of your subscription (SNS -&amp;gt; created topic -&amp;gt; Subscriptions) is pending confirmation. Check the email address that you provided as an endpoint and confirm the subscription. ❗️Check the spam folder❗️&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbxbsg7mfnxzyxjtts7v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbxbsg7mfnxzyxjtts7v.png" alt="SNS pending subscription"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you want to use an SNS topic in a different AWS account, add a step that gives SES permission to publish to it; see the &lt;a href="https://docs.aws.amazon.com/ses/latest/dg/receiving-email-permissions.html#receiving-email-permissions-kms" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. Since the SNS topic that will be processing these events is part of the same account, we do not need to create this permission.&lt;/p&gt;
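&lt;p&gt;For the cross-account case, the permission takes the form of a topic policy in the account that owns the topic. A minimal sketch (the account ID is a placeholder, and the exact condition keys should be checked against the linked documentation):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;EmailSNSPolicy:
 Type: AWS::SNS::TopicPolicy
 Properties:
   Topics:
     - Ref: EmailSNS
   PolicyDocument:
     Version: '2012-10-17'
     Statement:
       - Effect: Allow
         Principal:
           Service: ses.amazonaws.com   # let SES publish to the topic
         Action: 'sns:Publish'
         Resource:
           Ref: EmailSNS                # Ref on an SNS topic returns its ARN
         Condition:
           StringEquals:
             AWS:SourceAccount: '111122223333'  # placeholder: account owning the SES identity
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;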

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dk8isn6zj74mcx7x6m4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dk8isn6zj74mcx7x6m4.png" alt="SNS topics"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Create a RuleSet with a Rule (SES)
&lt;/h3&gt;

&lt;p&gt;To tell SES to publish to an SNS topic, we need to create a Rule. However, a rule has to be part of a RuleSet, therefore we need to create that first:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;EmailRuleSet&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::SES::ReceiptRuleSet&lt;/span&gt;
 &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;RuleSetName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;my-ruleset-name'&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once deployed, you can check that the rule set was created in SES -&amp;gt; Email Receiving -&amp;gt; All rule sets. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgf1lnxqggcer7vykw72y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgf1lnxqggcer7vykw72y.png" alt="SES Rule set"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will notice that the status is Inactive. Unfortunately, the &lt;code&gt;ReceiptRuleSet&lt;/code&gt; type does not have a property to set the rule set active (&lt;a href="https://github.com/aws/aws-cdk/issues/10321" rel="noopener noreferrer"&gt;see the issue here&lt;/a&gt;), so we need to activate it manually. It is also important to note that there can only be &lt;strong&gt;one active rule set per region&lt;/strong&gt;. So, if you were setting up staging and production rules, they would both have to live in this rule set. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv1oycdiuv5rg0ynqsb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv1oycdiuv5rg0ynqsb7.png" alt="Confirm active rule set"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Side note: deleting an active rule set through CloudFormation will throw an error, so deactivate it manually before removing it from your CloudFormation template.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Finally, we need to create the rule that takes all the emails and publishes them to the created SNS topic. The rule can be defined at various levels of granularity; in this example, we only take emails sent to the address &lt;a href="mailto:info@yourdomain.com"&gt;info@yourdomain.com&lt;/a&gt;. To set a different granularity, check the &lt;a href="https://docs.aws.amazon.com/ses/latest/dg/receiving-email-receipt-rules-console-walkthrough.html" rel="noopener noreferrer"&gt;documentation here&lt;/a&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;

&lt;span class="na"&gt;EmailRule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
 &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::SES::ReceiptRule&lt;/span&gt;
 &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;Rule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;Actions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;SNSAction&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="na"&gt;TopicArn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
             &lt;span class="na"&gt;Ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EmailSNS&lt;/span&gt;  &lt;span class="c1"&gt;# name of your SNS resource&lt;/span&gt;
     &lt;span class="na"&gt;Enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
     &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;your-rule-name'&lt;/span&gt;
     &lt;span class="na"&gt;ScanEnabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
     &lt;span class="na"&gt;Recipients&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;info@yourdomain.com'&lt;/span&gt;
   &lt;span class="na"&gt;RuleSetName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;Ref&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;EmailRuleSet&lt;/span&gt;  &lt;span class="c1"&gt;# name of your rule set resource&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once deployed, you can check that the rule has been created in SES -&amp;gt; Email Receiving -&amp;gt; your rule set. The status should be Enabled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wnt9cjw4bzaswfw5gmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wnt9cjw4bzaswfw5gmr.png" alt="SES rules"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Final checks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Domain is verified (CNAME records are present)&lt;/li&gt;
&lt;li&gt;Subscription is confirmed (check spam folder of provided email endpoint)&lt;/li&gt;
&lt;li&gt;RuleSet is Active (manually activate)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  And that’s it!
&lt;/h2&gt;

&lt;p&gt;Sending a test email will look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8zqpnssork0h78oo25x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8zqpnssork0h78oo25x.png" alt="final email"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Unfortunately, these emails cannot be &lt;a href="https://docs.aws.amazon.com/sns/latest/dg/sns-email-notifications.html" rel="noopener noreferrer"&gt;customized&lt;/a&gt;, as the email delivery feature is intended for internal system alerts, not marketing messages. But as a proof of concept without any backend logic, it's not bad 🙂&lt;/p&gt;




&lt;h2&gt;
  
  
  Where next?
&lt;/h2&gt;

&lt;p&gt;Since SES can invoke a Lambda function, you can create one to process the received emails and then use SES to send a proper email with the received content. That way, you won't need SNS at all.&lt;/p&gt;
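&lt;p&gt;As a sketch of that direction: the &lt;code&gt;SNSAction&lt;/code&gt; in the receipt rule would be replaced by a &lt;code&gt;LambdaAction&lt;/code&gt;. The function resource name below is hypothetical, and note that SES also needs an &lt;code&gt;AWS::Lambda::Permission&lt;/code&gt; allowing &lt;code&gt;ses.amazonaws.com&lt;/code&gt; to invoke the function:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;EmailRule:
 Type: AWS::SES::ReceiptRule
 Properties:
   Rule:
     Actions:
       - LambdaAction:
           FunctionArn:
             Fn::GetAtt: [ EmailProcessorLambda, Arn ]  # hypothetical AWS::Lambda::Function resource
           InvocationType: Event   # asynchronous invocation
     Enabled: true
     Name: 'your-rule-name'
     Recipients:
       - 'info@yourdomain.com'
   RuleSetName:
     Ref: EmailRuleSet
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;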

&lt;p&gt;&lt;strong&gt;Some useful links:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/ses/latest/dg/receiving-email.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/ses/latest/dg/receiving-email.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sns/latest/dg/sns-create-topic.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/sns/latest/dg/sns-create-topic.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ses-emailidentity.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ses-emailidentity.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/sns/latest/dg/sns-email-notifications.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/sns/latest/dg/sns-email-notifications.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/sns-topic-email-notifications/" rel="noopener noreferrer"&gt;https://aws.amazon.com/premiumsupport/knowledge-center/sns-topic-email-notifications/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>email</category>
      <category>tutorial</category>
      <category>programming</category>
    </item>
    <item>
      <title>Introduction to Object-relational mapping: the what, why, when and how of ORM</title>
      <dc:creator>Tina</dc:creator>
      <pubDate>Thu, 19 Nov 2020 21:44:56 +0000</pubDate>
      <link>https://dev.to/tinazhouhui/introduction-to-object-relational-mapping-the-what-why-when-and-how-of-orm-nb2</link>
      <guid>https://dev.to/tinazhouhui/introduction-to-object-relational-mapping-the-what-why-when-and-how-of-orm-nb2</guid>
      <description>&lt;p&gt;If you have ever used a relational database for persisting your data and an object-oriented programming language for your application, then Object-relational mapping paradigm is definitely something you should be familiar with. If not then read on as this article hopes to provide an introduction to the concept of ORM by answering the basic questions like why is there a need for an ORM, what an ORM is, the benefits and drawbacks of ORM, first steps needed to set up an ORM and the patterns of setting it up. &lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the need for ORM
&lt;/h2&gt;

&lt;p&gt;Object-oriented programming languages (OOP) are great in combining variables and functions into classes and objects. Imagine this as objects in the real world (cars) each with their properties (wheels) and behaviours (drive). On the other hand, relational databases (also called Relational Database Management System or RDBMS) are powerful in their relationships between individual tables through the use of foreign keys. Imagine this as relations in the real world between one entity (passengers) and another (plane seats) linked together by a unique identifier (plane ticket ID).  Today, the majority of applications are written in an OOP language and at the same time persist their data in the relational databases. Consequently, a need for a more harmonic way to communicate between them arose.&lt;/p&gt;

&lt;p&gt;The clash between objects and relations is a highly complex problem as these two represent fundamentally different paradigms. The differences vary from the basic data structures, through differences in manipulations and transactions to conceptual differences. A more umbrella term that covers both specific and philosophical contrasts between them is &lt;strong&gt;&lt;a href="https://hibernate.org/orm/what-is-an-orm/"&gt;Object-relational impedance mismatch&lt;/a&gt;&lt;/strong&gt; (also Paradigm Mismatch). &lt;/p&gt;

&lt;p&gt;It is important to state that the &lt;strong&gt;difference between the object-oriented and relational concept is intentional&lt;/strong&gt; as they are optimised to do what they are best at. The object-oriented programming language is powerful in its ability to describe the real world using objects as the fundamental data type and the &lt;a href="https://www.freecodecamp.org/news/object-oriented-programming-concepts-21bb035f7260/"&gt;four principles of OOP&lt;/a&gt; - encapsulation,  abstraction, inheritance and polymorphism. On the other hand, relational databases are exceptional in persisting data and thanks to the nature of the relationship between entities (most commonly represented by tables) through primary and foreign keys, they ensure data integrity through &lt;a href="https://database.guide/what-is-referential-integrity/"&gt;referential integrity&lt;/a&gt;. They also use Structured Query Language (SQL) designed for fast data retrieving and manipulation. Since both concepts are widely used, especially together, Object Relational Mapping tools can blur the line between the OOP language and RDBMS a little bit and allow your application to make the best of both sides.&lt;/p&gt;

&lt;h2&gt;
  
  
  Once upon a time, there was Object Relational Mapping (ORM)...
&lt;/h2&gt;

&lt;p&gt;ORM is a concept that lets you manipulate data from a database through an object-oriented paradigm. In other words, &lt;strong&gt;it allows you to query and manipulate data from the database using your language of choice instead of SQL&lt;/strong&gt;. The process of converting data into objects is called &lt;a href="https://ocramius.github.io/blog/doctrine-orm-optimization-hydration/"&gt;hydration&lt;/a&gt; and usually involves converting column values into object properties. That is why ORM libraries are language-specific (&lt;a href="https://en.wikipedia.org/wiki/List_of_object-relational_mapping_software"&gt;here is a list&lt;/a&gt; of ORM libraries). That is only the basic concept; ORM libraries are much more powerful, especially as your application and database grow more complex. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fiaqbt7xhkmiol22fpnot.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fiaqbt7xhkmiol22fpnot.jpg" alt="ORM" width="800" height="876"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Here are the benefits that ORM can bring...
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Greatly supplements raw SQL: the library encapsulates SQL queries in simpler methods, allowing you to interact with objects directly. This saves time otherwise spent on repetitive SQL queries. It also means you do not have to know SQL particularly well, though understanding how a relational database works will help you understand the magic happening under the hood. &lt;/li&gt;
&lt;li&gt;Interacts with your database in your favourite OOP language.&lt;/li&gt;
&lt;li&gt;Allows for the use of your database of choice without the need to worry about the different SQL dialects.&lt;/li&gt;
&lt;li&gt;Built-in features that can save you a great deal of time (for example, optimistic and pessimistic locking).&lt;/li&gt;
&lt;li&gt;Improves the maintainability of your code by giving a clear overview of your data structures in classes and objects, and enables checks on data types.&lt;/li&gt;
&lt;li&gt;Using SQLite in development but MySQL in production? No problem: the link between database and application is loose, so changes on either side are easier to implement. 
&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxny1q1cg3jewk7ng648g.jpg" alt="benefits_and_drawbacks_of_ORM" width="800" height="638"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  ...and here are some drawbacks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;ORMs take a bit of time to get used to, these libraries are definitely on the more complex end so getting the hang of them might take a bit of time. Consequently setting them up correctly to take advantage of their full potential can also be time-consuming.&lt;/li&gt;
&lt;li&gt;There is a lot of magic happening under the hood, which makes it hard to understand what is actually going on. To keep control, double-check the documentation and do the research to make sure the ORM is doing what you want.&lt;/li&gt;
&lt;li&gt;Without a deeper understanding and correctly called methods, the generated SQL can be less performant than hand-written SQL, especially when more data leads to a larger number of queries (the so-called &lt;a href="https://stackoverflow.com/questions/97197/what-is-the-n1-selects-problem-in-orm-object-relational-mapping"&gt;N+1 problem&lt;/a&gt;). &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That being said, &lt;strong&gt;you do not have to use an ORM for OOP and an RDBMS to coexist&lt;/strong&gt;. There is a heated debate between those for and against ORM (I won’t go into the details here, but &lt;a href="https://medium.com/@mithunsasidharan/should-i-or-should-i-not-use-orm-4c3742a639ce"&gt;here&lt;/a&gt; is a cool article looking at the dispute through values, and &lt;a href="https://martinfowler.com/bliki/OrmHate.html"&gt;here&lt;/a&gt; is Martin Fowler on why ORMs are so hated). You need to decide for yourself whether using an ORM will improve your application or instead bring unnecessary complexity. One way to avoid the problem altogether is to use a &lt;a href="https://www.pluralsight.com/blog/software-development/relational-vs-non-relational-databases"&gt;non-relational database&lt;/a&gt; and/or switch to a &lt;a href="https://medium.com/@shaistha24/functional-programming-vs-object-oriented-programming-oop-which-is-better-82172e53a526#:~:text=Both%20Functional%20programming%20and%20object,data%20is%20stored%20in%20objects."&gt;functional programming language&lt;/a&gt;...&lt;/p&gt;

&lt;h2&gt;
  
  
  So how do I set up ORM? Easy as 1, 2, 3...
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;The first step is to select and install a library that implements the object-relational mapping paradigm for the OOP language that you are using.&lt;/li&gt;
&lt;li&gt;The second step is to create a connection (in ORM terms a session) between the ORM and your database. This step can be found in the documentation of the ORM library. &lt;a href="https://docs.sqlalchemy.org/en/13/orm/session_basics.html"&gt;Here&lt;/a&gt; is an example of this step documented for SQLAlchemy, which is an ORM library for Python. &lt;/li&gt;
&lt;li&gt;The third step is to set up the mapping itself. By that I mean creating the entity classes that are linked to your relational tables and connecting them to each other. This process should again be well documented in the ORM library’s documentation; here is the SQLAlchemy &lt;a href="https://docs.sqlalchemy.org/en/13/orm/tutorial.html"&gt;documentation&lt;/a&gt; to inspire you. Unsurprisingly, there are a few ways to approach the mapping itself; these are called &lt;strong&gt;ORM Patterns&lt;/strong&gt;. The two most common ones are Active Record and Data Mapper.&lt;/li&gt;
&lt;/ol&gt;
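&lt;p&gt;To make these steps concrete without tying them to one library, here is a hand-rolled sketch of roughly what steps 2 and 3 amount to under the hood, using only Python’s built-in sqlite3 module. The User entity and users table are made up for illustration; a real ORM generates this plumbing for you.&lt;/p&gt;

```python
import sqlite3

# step 2: open a connection (the ORM's "session"), here to an in-memory database
connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# step 3: a hypothetical entity class mapped to the users table
class User:
    def __init__(self, name, user_id=None):
        self.id = user_id
        self.name = name

# the kind of plumbing an ORM would generate for you:
def save(user):
    cursor = connection.execute("INSERT INTO users (name) VALUES (?)", (user.name,))
    user.id = cursor.lastrowid

def find(user_id):
    row = connection.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return User(name=row[1], user_id=row[0])

tina = User("Tina")
save(tina)
print(find(tina.id).name)  # Tina
```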

&lt;h3&gt;
  
  
  ORM Patterns: Active Record vs Data Mapper
&lt;/h3&gt;

&lt;p&gt;ORM Patterns can be viewed as philosophies on how to map data between tables and objects. Think of them as describing how the ORM layer between the tables and the objects behaves. &lt;/p&gt;

&lt;p&gt;With Active Record, a table is represented by a class (usually called an entity) whose properties correspond more or less directly to the columns of the table. An object instance is therefore tied to a single row in the table. The biggest difference from the Data Mapper paradigm is that &lt;em&gt;in addition to the data, the entities also contain the methods that operate on them&lt;/em&gt; (save, delete, insert...), allowing for a much closer binding between data and objects. The major benefit of Active Record is its simplicity and quick setup, as what you see in one is directly represented in the other. However, because of the tighter coupling between data and methods, the object contradicts the single responsibility principle (&lt;a href="https://sites.google.com/site/unclebobconsultingllc/active-record-vs-objects"&gt;here&lt;/a&gt; is a nice summary by Uncle Bob). Moreover, testing these bound objects is difficult, and with increasing complexity, changes on one side can have an unwanted impact on the other. That is why Active Record is most suitable for &lt;a href="https://www.codecademy.com/articles/what-is-crud"&gt;CRUD applications&lt;/a&gt;. ORM libraries that implement Active Record include Ruby on Rails’ Active Record (Ruby), Laravel’s Eloquent (PHP), Symfony’s Propel (PHP) and Django’s ORM (Python).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F18480kb51j5vbf1qhma4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F18480kb51j5vbf1qhma4.jpg" alt="activerRecord_vs_dataMapper" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As for Data Mapper, the setup is similar to Active Record; however, &lt;em&gt;the objects do not contain data manipulation methods&lt;/em&gt; (we cannot call a save method on the object to persist the data). Instead, the objects go through the Data Mapper layer, which transfers the information to the persistent database and vice versa (in Java’s Hibernate this is called the &lt;a href="https://docs.jboss.org/hibernate/entitymanager/3.6/reference/en/html_single/#d0e61"&gt;Entity Manager&lt;/a&gt;). Thanks to this separation, the objects do not need to know how the data is saved into the database, and they do not inherit the ORM methods, thus following the single responsibility principle. This detachment also enforces a more formal interaction with the database and stricter control over database access. ORM libraries that implement Data Mapper include Hibernate (Java), Doctrine 2 (PHP), SQLAlchemy (Python), Entity Framework (MS .NET) and Prisma (TypeScript). &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;While Active Record is trying to blur the line between them as much as possible and creates a direct link, the Data Mapper creates a true middle layer that isolates the persistent database from the app’s business logic.&lt;/p&gt;
&lt;/blockquote&gt;
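&lt;p&gt;The contrast between the two patterns can be sketched in a few lines of plain Python. This is a toy illustration with an in-memory dictionary standing in for the database; no real ORM library is involved, and the class names are made up.&lt;/p&gt;

```python
# A toy in-memory "table" standing in for the database.
TABLE = {}

# Active Record: the entity carries its own persistence methods.
class ActiveRecordUser:
    def __init__(self, user_id, name):
        self.user_id, self.name = user_id, name

    def save(self):
        TABLE[self.user_id] = self.name

# Data Mapper: the entity is plain data; a separate mapper class
# moves it to and from the "database".
class User:
    def __init__(self, user_id, name):
        self.user_id, self.name = user_id, name

class UserMapper:
    def save(self, user):
        TABLE[user.user_id] = user.name

ActiveRecordUser(1, "Tina").save()   # the entity persists itself
UserMapper().save(User(2, "Bob"))    # the mapper persists the entity
print(TABLE)  # {1: 'Tina', 2: 'Bob'}
```

&lt;p&gt;Note how only the Active Record entity knows about the table at all; the Data Mapper entity could be tested without any database in sight.&lt;/p&gt;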

&lt;h2&gt;
  
  
  ...and the OOP and RDBMS lived happily ever after
&lt;/h2&gt;

&lt;p&gt;At the end of the day, ORM is just a tool. A powerful, magical tool that can bridge the differences between two very different worlds and allow you, the user, to interact with relations as if they were objects in the language of your choice. However, just as with any tool, it requires practice and understanding, as relying only on the magic can lead to dangerous performance issues. Whether you decide to implement ORM or not, the decision is entirely up to you, so make sure that you understand what your application needs, especially in the future, to keep OOP, RDBMS and yourself happy ever after. The end.&lt;/p&gt;

</description>
      <category>database</category>
      <category>objectrelationalmapping</category>
      <category>orm</category>
      <category>oop</category>
    </item>
    <item>
      <title>Discovering OpenCV with Python: Coin Amount Calculation</title>
      <dc:creator>Tina</dc:creator>
      <pubDate>Mon, 10 Aug 2020 22:12:01 +0000</pubDate>
      <link>https://dev.to/tinazhouhui/coin-amount-calculation-discovering-opencv-with-python-52gn</link>
      <guid>https://dev.to/tinazhouhui/coin-amount-calculation-discovering-opencv-with-python-52gn</guid>
      <description>&lt;p&gt;Since I already explored coin detection, I decided to take the real-life application of OpenCV one step further. Now that I can find the coins, naturally, the next step would be to correctly identify the coins and subsequently calculate their amount. &lt;/p&gt;

&lt;h2&gt;
  
  
  Detect coins
&lt;/h2&gt;

&lt;p&gt;As previously explored in a separate article &lt;a href="https://dev.to/tinazhouhui/coin-detection-discovering-opencv-with-python-1ka1"&gt;here&lt;/a&gt;, detecting the coins is the first step. I used the Hough Circle Transformation to find them, which gave me the radius of each coin and the coordinates of its center. Because we only have visual information, the value identification relies on the radii of the coins, so the circles drawn around them needed to be precise.&lt;/p&gt;

&lt;h2&gt;
  
  
  Identify coins
&lt;/h2&gt;

&lt;p&gt;Since each picture can be taken from a different height, we cannot directly translate the number of pixels to millimetres. Therefore, the identification of coins had to be relative, based on their radii. &lt;/p&gt;

&lt;p&gt;For example, the smallest coin (1 CZK) has a radius of 20 mm and the second smallest coin (2 CZK) has a radius of 21.5 mm. The 2 CZK coin’s radius is therefore 1.075 times larger than that of the 1 CZK coin, and the number of pixels representing the radii in the picture must follow the same ratio. This logic works with any coin, but you have to tell the program which coin is the base coin that the ratios are derived from. In my case, it was easiest to say that the smallest coin in the picture represents the smallest coin in real life, so each analysed image had to contain at least one 1 CZK coin. &lt;/p&gt;

&lt;p&gt;From there, I just created a dictionary of all the important information related to each coin (name, value, count, radius, ratio to the smallest coin) and ran a for loop over each of the coins, or to be precise, each of the circles found by the Hough Circle Transformation.&lt;/p&gt;
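&lt;p&gt;The ratio logic can be sketched as follows. This is a simplified stand-in for the dictionary and loop described above: the 1 CZK and 2 CZK sizes come from the text, while the remaining coin sizes are my assumptions for illustration, and the count tracking is omitted.&lt;/p&gt;

```python
# Coin sizes in mm relative to the smallest coin; 1 and 2 CZK are from the
# text above, the remaining sizes are illustrative assumptions.
COIN_RATIOS = {
    1: 20.0 / 20.0,
    2: 21.5 / 20.0,
    5: 23.0 / 20.0,
    10: 24.5 / 20.0,
    20: 26.0 / 20.0,
    50: 27.5 / 20.0,
}

def identify(radii_px):
    # assume the smallest detected circle is the 1 CZK base coin
    base = min(radii_px)
    total = 0
    for r in radii_px:
        ratio = r / base
        # pick the coin whose size ratio is closest to the measured one
        value = min(COIN_RATIOS, key=lambda v: abs(COIN_RATIOS[v] - ratio))
        total += value
    return total

print(identify([100, 107.5, 115]))  # 1 + 2 + 5 = 8 CZK
```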

&lt;p&gt;All that was left was to write the value at the center of each identified coin and add it to the total value variable. After running through all the coins, the total amount is calculated. As usual, all the work can be &lt;a href="https://github.com/tinazhouhui/computer_vision/blob/master/image_analysis/coin_amount_calculate.py" rel="noopener noreferrer"&gt;found here, on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let’s test it!
&lt;/h2&gt;

&lt;p&gt;First I tested the program on the most nicely scanned coins that the internet could offer and found this picture. After some tweaking with the parameters in Hough Transformation, I got this result:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnkjkwlgqnst8uxem38f4.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fnkjkwlgqnst8uxem38f4.jpg" alt="coint_amount"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

The total amount is 88 CZK
1 CZK = 1x
2 CZK = 1x
5 CZK = 1x
10 CZK = 1x
20 CZK = 1x
50 CZK = 1x


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This gave me the confidence to test it on a picture of coins that I took at home. I thought, just as with the picture above, that a few tries with the parameters would result in the correct amount. &lt;/p&gt;

&lt;p&gt;Unfortunately, that was not the case. See, the biggest problem was that Hough could not detect the circles precisely enough to fit the ratios. It either drew a circle too small (so the coin was taken for a lower value), too large, or did not detect the coin at all. I tried changing the background from white to black, taking the picture from different distances and in different lighting, and still nothing. It took me days of trying almost every combination of parameters to realise that this was not the way. The secret lies in preprocessing the image. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The most important realization was that the output could be only as good as the input data that we are providing. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Logically, by increasing the quality of the input data, the quality of the output (here, the precision of the circles drawn) would also increase. Making sure that the picture quality was high enough and &lt;a href="https://dev.to/tinazhouhui/discovering-opencv-with-python-gamma-correction-3cnh"&gt;stretching the gamma&lt;/a&gt; to bring out the contrast helped Hough detect the edges better. To some degree, the earlier improvements, like the black background, helped as well. Then suddenly, a few tweaks later, success!&lt;/p&gt;

&lt;p&gt;Here is the final output:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv80lk9gv1m8iguvcrxuh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fv80lk9gv1m8iguvcrxuh.jpg" alt="coin amount"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

The total amount is 151 CZK
1 CZK = 4x
2 CZK = 1x
5 CZK = 5x
10 CZK = 2x
20 CZK = 5x
50 CZK = 0x


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This project again represents the accumulation of previous knowledge, but it is more valuable to me as it has a tangible real-life application that I could test at home. The biggest lesson learnt was to think about why the program was not performing well (bad preprocessing of the image) rather than just changing various parameters of the Hough Circle Transformation and hoping for the best. As always, may the Python be with you.&lt;/p&gt;

</description>
      <category>opencv</category>
      <category>python</category>
      <category>imageanalysis</category>
      <category>computervision</category>
    </item>
    <item>
      <title>Discovering OpenCV with Python: Coin Detection</title>
      <dc:creator>Tina</dc:creator>
      <pubDate>Sat, 01 Aug 2020 13:42:36 +0000</pubDate>
      <link>https://dev.to/tinazhouhui/coin-detection-discovering-opencv-with-python-1ka1</link>
      <guid>https://dev.to/tinazhouhui/coin-detection-discovering-opencv-with-python-1ka1</guid>
      <description>&lt;p&gt;This time, I would like to demonstrate the combined knowledge of edge detection and convolution and use these powers for image analysis rather than just processing. I shall use the force to detect coins in a picture and draw a circle around them!&lt;/p&gt;

&lt;p&gt;Those of you that are more familiar with computer vision might have an “I see” moment thinking of the existing function that would save me a lot of time...but since I am a promising padawan and wanted to apply all the hard-earned knowledge to practice, I went the hard way, taking the “Never tell me the odds” (Solo, Ep V) sort of approach. Haha.&lt;/p&gt;

&lt;h2&gt;
  
  
  The key logic rests upon &lt;strong&gt;five loops&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If we go backwards, to draw a circle around the coins, I need to know their radius and coordinates of their centres. To obtain that, I used convolution, &lt;a href="https://dev.to/tinazhouhui/discovering-open-cv-using-python-2iak"&gt;that I am explaining here&lt;/a&gt;, to compare the coins to reference circles of increasing sizes until more than a certain percentage of the edge pixels were aligned with the circle. Lastly, to be able to convolute the circles, I needed to find the edge of the coins, which I have explored in &lt;a href="https://dev.to/tinazhouhui/discovering-opencv-using-python-edge-detection-185g"&gt;this article&lt;/a&gt;. Sounds simple enough? Nevertheless, allow me to break down the approach to you in more detail, in case you would like to try it yourself, this time in the correct chronological order. As always, everything is &lt;a href="https://github.com/tinazhouhui/computer_vision/blob/master/image_analysis/coin_detection.py" rel="noopener noreferrer"&gt;available on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 1:&lt;/strong&gt; Find the edges
&lt;/h3&gt;

&lt;p&gt;The aim is to outline the coins and find their edge. I used Gaussian blur with 5x5 kernel and Canny edge detection to find the edges. Here are the results.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyvvg6fh6wsdu9rz0mffv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fyvvg6fh6wsdu9rz0mffv.jpg" alt="Edge Detection"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2:&lt;/strong&gt; Generate the reference circles (first two loops)
&lt;/h3&gt;

&lt;p&gt;Draw circles on a separate image from the smallest to largest (I checked the radius of the smallest and largest coin in the image to establish the boundaries). With each iteration increase the radius by one pixel. Using another loop, save the coordinates of the edge pixels to a list.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzo1a5umwuwyjbq14lmh9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzo1a5umwuwyjbq14lmh9.jpg" alt="Reference circles"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2.1:&lt;/strong&gt; Move the reference circle through coin image (third loop)
&lt;/h3&gt;

&lt;p&gt;For each reference circle generated before, move the center of the reference circle through the coin image. Optimize for the coin image boundaries, as there is no point in aligning the center of the reference circle with the very first corner pixel of the coin image.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbk9wim1vyml0awvs0zg9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fbk9wim1vyml0awvs0zg9.jpg" alt="Optimise for image borders"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 2.1.1:&lt;/strong&gt; Aligning the reference circles to the coin edges (fourth loop)
&lt;/h3&gt;

&lt;p&gt;For each reference circle position in the original coin image, iterate through the list of circle coordinates and align them with the original image. Here, we need to establish two threshold values. The first is the edge threshold, i.e. how many edge pixels need to match the reference circle to count as a match. The second is the intensity threshold, i.e. the minimal whiteness of a pixel for it to still be considered part of an edge. If both thresholds are passed, consider the alignment an identified coin.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmfk1tcklq7ilx4ew0gzr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmfk1tcklq7ilx4ew0gzr.jpg" alt="Align reference circles"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Step 3:&lt;/strong&gt; Draw circles around the coins (fifth loop)
&lt;/h3&gt;

&lt;p&gt;For all the identified coins’ radii and center coordinates, draw a circle in the original image.&lt;/p&gt;
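&lt;p&gt;The first four loops above can be sketched as a toy version in pure Python (the fifth loop, drawing, is omitted). This is not the real implementation from GitHub, which works on actual image arrays: here the edge image is a simple set of coordinates, so edge pixels are binary and the intensity threshold folds into membership testing.&lt;/p&gt;

```python
import math

def circle_points(radius, samples=64):
    # loop 2: save the edge-pixel offsets of one reference circle
    points = set()
    for i in range(samples):
        angle = 2 * math.pi * i / samples
        points.add((round(radius * math.cos(angle)), round(radius * math.sin(angle))))
    return points

def detect_circles(edges, width, height, min_r, max_r, threshold=0.9):
    # edges: set of (x, y) coordinates of edge pixels
    found = []
    for r in range(min_r, max_r + 1):        # loop 1: reference circles, radius +1 each time
        reference = circle_points(r)
        for cx in range(r, width - r):       # loop 3: move the center through the image,
            for cy in range(r, height - r):  #         skipping positions too close to the border
                # loop 4: align the reference coordinates and count matching edge pixels
                hits = sum((cx + dx, cy + dy) in edges for dx, dy in reference)
                if hits / len(reference) >= threshold:
                    found.append((cx, cy, r))
    return found

# demo: a synthetic 21x21 edge image containing one circle of radius 5 at (10, 10)
edges = {(10 + dx, 10 + dy) for dx, dy in circle_points(5)}
print(detect_circles(edges, 21, 21, 4, 6))  # includes (10, 10, 5)
```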

&lt;h2&gt;
  
  
  Let’s run it! ~squeal of excitement!~
&lt;/h2&gt;

&lt;p&gt;The first time I ran the code, I set the parameters for the min and max radius to 40 and 57 and had a constant stream of coordinates and radii printing out so I could watch the progress. It was late, so I went to bed thinking that I would see beautiful circles around the coins in the morning. When I got up to check, the program was just passing radius 44. I expected the function to run slowly, Python being an &lt;a href="https://www.freecodecamp.org/news/compiled-versus-interpreted-languages/#:~:text=In%20a%20compiled%20language%2C%20the,reads%20and%20executes%20the%20code." rel="noopener noreferrer"&gt;interpreted language&lt;/a&gt;, but this was a bit extreme. &lt;/p&gt;

&lt;p&gt;So I optimised. Some variables were moved out of the loops and pre-computed rather than recalculated on every iteration (e.g. the circumference of the reference circle). Another drastic optimisation was resizing the image to half its size, which made the run about four times faster, as the loops had fewer pixels to go through!&lt;/p&gt;

&lt;h2&gt;
  
  
  Yes, there is an easier way
&lt;/h2&gt;

&lt;p&gt;As I hinted at the beginning, there is, of course, an easier and much more accurate way of detecting circles in OpenCV, the so-called &lt;a href="https://docs.opencv.org/master/da/d53/tutorial_py_houghcircles.html" rel="noopener noreferrer"&gt;Hough Circle Transformation&lt;/a&gt; (here is a link to an &lt;a href="https://www.pyimagesearch.com/2014/07/21/detecting-circles-images-using-opencv-hough-circles/" rel="noopener noreferrer"&gt;alternative explanation&lt;/a&gt;). Not only does the function give one result per circle (unlike the manual way, as you can see below), but because OpenCV implements it in C++, a compiled language, it also runs much faster.&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;

&lt;p&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;hough_circle_detection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;coins&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;min_r&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_r&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;&lt;br&gt;
    &lt;span class="c1"&gt;# turn original image to grayscale&lt;br&gt;
&lt;/span&gt;    &lt;span class="n"&gt;gray&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;cvtColor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;coins&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;COLOR_BGR2GRAY&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;br&gt;&lt;br&gt;
    &lt;span class="c1"&gt;# blur grayscale image&lt;br&gt;
&lt;/span&gt;    &lt;span class="n"&gt;blurred&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;medianBlur&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;gray&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;br&gt;&lt;br&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;HoughCircles&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;br&gt;
        &lt;span class="n"&gt;blurred&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# source image (blurred and grayscaled)&lt;br&gt;
&lt;/span&gt;        &lt;span class="n"&gt;cv2&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HOUGH_GRADIENT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# type of detection&lt;br&gt;
&lt;/span&gt;        &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# inverse ratio of accumulator res. to image res.&lt;br&gt;
&lt;/span&gt;        &lt;span class="mi"&gt;40&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# minimum distance between the centers of circles&lt;br&gt;
&lt;/span&gt;        &lt;span class="n"&gt;param1&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;50&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Gradient value passed to edge detection&lt;br&gt;
&lt;/span&gt;        &lt;span class="n"&gt;param2&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# accumulator threshold for the circle centers&lt;br&gt;
&lt;/span&gt;        &lt;span class="n"&gt;minRadius&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;min_r&lt;/span&gt;&lt;span class="o"&gt;&lt;em&gt;&lt;/em&gt;&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# min circle radius&lt;br&gt;
&lt;/span&gt;        &lt;span class="n"&gt;maxRadius&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;max_r&lt;/span&gt;&lt;span class="o"&gt;&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# max circle radius&lt;br&gt;
&lt;/span&gt;    &lt;span class="p"&gt;)&lt;/span&gt;&lt;/p&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Here is the final comparison
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Finrj1m7g4k7w50qokib2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Finrj1m7g4k7w50qokib2.jpg" alt="Manual vs Hough"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The biggest challenge for me was to wrap my head around all the nested loops. What helped most was to literally draw the loops out to see the logic visually. Figuring out how exactly to align the comparison circle with the original image was another toughie. Of course, there is tons of room for improvement, but overall I am very pleased with the results. Hope you enjoyed the post and, as usual, I would love some feedback :) May the Python be with you.&lt;/p&gt;

</description>
      <category>python</category>
      <category>opencv</category>
      <category>imageanalysis</category>
      <category>computervision</category>
    </item>
    <item>
      <title>Discovering OpenCV with Python: Gamma correction</title>
      <dc:creator>Tina</dc:creator>
      <pubDate>Fri, 24 Jul 2020 20:06:06 +0000</pubDate>
      <link>https://dev.to/tinazhouhui/discovering-opencv-with-python-gamma-correction-3cnh</link>
      <guid>https://dev.to/tinazhouhui/discovering-opencv-with-python-gamma-correction-3cnh</guid>
      <description>&lt;p&gt;Another practical exercise that I had quite some fun with was gamma correction. This concept is mainly used when we want to adjust the brightness of an image. We could also use it to restore the faded pictures to their previous depth of colour. Since I am just a Python Padawan, we will be demonstrating this on grayscale pictures but I promise, the concept works on coloured images as well.&lt;/p&gt;

&lt;p&gt;In this short article, I will focus on the restoration of faded pictures. &lt;/p&gt;

&lt;h2&gt;
  
  
  A bit of (mathematical) background
&lt;/h2&gt;

&lt;p&gt;The logic behind it is based on a concept called &lt;a href="http://spatial-analyst.net/ILWIS/htm/ilwisapp/stretch_algorithm.htm" rel="noopener noreferrer"&gt;linear stretching&lt;/a&gt;. A faded picture simply means that the pixel values are compressed into a narrow range and therefore do not use the full range of values (in grayscale, from 0 to 255). For example, in the faded picture below, the pixel values range from 101 to 160. What linear stretching does is re-scale the values to the full range from 0 to 255.&lt;/p&gt;

&lt;p&gt;Here is the mathematical formula showing how this is achieved for each value:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzxptif95gyvuxz2y50ps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fzxptif95gyvuxz2y50ps.png" alt="Linear stretching"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Python implementation
&lt;/h2&gt;

&lt;p&gt;Just like in convolution, the necessary step is to loop through every pixel and apply this mathematical formula to each of them. Be sure to check out my &lt;a href="https://github.com/tinazhouhui/computer_vision/blob/master/image_processing/gamma.py" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; to see how it can be done.&lt;/p&gt;
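&lt;p&gt;As a minimal sketch of that loop, here is the stretching formula applied to a flat list of grayscale values. This is a simplification of the real version on GitHub, which works on two-dimensional image arrays.&lt;/p&gt;

```python
def stretch(pixels):
    # re-scale pixel values so the smallest maps to 0 and the largest to 255
    lo, hi = min(pixels), max(pixels)
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

print(stretch([101, 130, 160]))  # [0, 125, 255]
```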

&lt;p&gt;And voila, below is the result, look closely at how the histogram of pixel values stretched out from the original narrow range:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxoo8ju6nzmcor2jmokwj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fxoo8ju6nzmcor2jmokwj.jpg" alt="Gamma correction"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://pinetools.com/image-histogram" rel="noopener noreferrer"&gt;Tool used to create histogram&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope you enjoyed the post, I recommend having a go at this and playing around with different images. Again, would appreciate if you let me know your thoughts. May the Python be with you.&lt;/p&gt;

</description>
      <category>python</category>
      <category>opencv</category>
      <category>imageprocessing</category>
      <category>gamma</category>
    </item>
    <item>
      <title>Discovering OpenCV using Python: Edge detection</title>
      <dc:creator>Tina</dc:creator>
      <pubDate>Sun, 19 Jul 2020 18:15:07 +0000</pubDate>
      <link>https://dev.to/tinazhouhui/discovering-opencv-using-python-edge-detection-185g</link>
      <guid>https://dev.to/tinazhouhui/discovering-opencv-using-python-edge-detection-185g</guid>
      <description>&lt;p&gt;During my learning of the ways of the force in OpenCV, it was a matter of time before encountering edge detection. This concept, that also falls under image processing, uses the same principle of convolution that &lt;a href="https://dev.to/tinazhouhui/discovering-open-cv-using-python-2iak"&gt;I wrote about before&lt;/a&gt;. The aim of the edge detection is to highlight the outline of the most important information of an image. This is one of the essential steps in object detection and directly affects machines’ understanding of the world. &lt;/p&gt;

&lt;h1&gt;
  
  
  So what is an edge?
&lt;/h1&gt;

&lt;p&gt;First of all, it is important to understand that the real world is continuous whereas a picture of the real world is made up of discrete (individual) pixels. Now let’s zoom in on the pixels for one second:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmbn1y4f1jvzv2o0pd63q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fmbn1y4f1jvzv2o0pd63q.jpg" alt="Edge identification" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It is obvious to all of us that the edge is where the red line indicates; however, it is not possible to draw the edge between the two smallest units (not entirely true, there are methods, like super-sampling). Therefore, the computer checks whether a pixel is on an edge by comparing the pixel’s adjacent neighbors. Here, if we were to go from left to right, pixel number 2 is an edge, as there is a significant difference between pixels 1 and 3. The same applies to pixel number 3. The result would look something like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fa7915aqtdn24l5aaq47d.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fa7915aqtdn24l5aaq47d.jpg" alt="Edge identification result" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nevertheless, in real life, pictures also include gray pixels of varying intensity (values from 1 to 254). Therefore, to see how much the gradient changed (see also &lt;a href="https://en.wikipedia.org/wiki/Image_gradient"&gt;gradient magnitude&lt;/a&gt; if you are interested), we use the first derivative to indicate the magnitude of change. It is then up to us to establish a threshold that determines whether the magnitude of change is large enough to be considered an edge. To take it one step further, we could also use the second derivative and look for zero-crossing values, which capture the local maxima in the picture gradient (see image below). &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkjlsuwegbxd3uwa4iync.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fkjlsuwegbxd3uwa4iync.jpg" alt="First and second order derivative" width="800" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In addition, due to the nature of edge detection, edge detectors are quite sensitive to noise (think of poor image quality); therefore, it is highly recommended to blur the image first in order to smooth the noise out and increase the accuracy of the detected edges. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Said in simpler words, first order derivatives are best for identifying strong edges by establishing a threshold, whereas second order derivatives are best for locating where the edge is. Both types are noise sensitive, so blur the images first to obtain the best results.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Examples of edge detection
&lt;/h1&gt;

&lt;p&gt;I will be applying various types of edge detection methods on this original image below. Be sure to check out the code on my &lt;a href="https://github.com/tinazhouhui/computer_vision/blob/master/image_processing/edge_detection.py"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe1vun57sm0hmqvk9k1tr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fe1vun57sm0hmqvk9k1tr.jpg" alt="Original image" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://envato-shoebox-0.imgix.net/5c82/65c1-5009-4fb0-9df8-19d302da34bc/021718+%285%29.jpg?auto=compress%2Cformat&amp;amp;fit=max&amp;amp;mark=https%3A%2F%2Felements-assets.envato.com%2Fstatic%2Fwatermark2.png&amp;amp;markalign=center%2Cmiddle&amp;amp;markalpha=18&amp;amp;w=1600&amp;amp;s=d3c05b54ebb94a03b503b04eeb65e2b8"&gt;Original image source&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  First order derivative - Sobel operator
&lt;/h3&gt;

&lt;p&gt;The Sobel edge detector is a first order derivative edge detection method. It calculates the gradients separately along the X axis and the Y axis, and its kernels already incorporate a smoothing effect. There are many other types of kernels, like &lt;a href="https://theailearner.com/2019/05/24/first-order-derivative-kernels-for-edge-detection/"&gt;Scharr or Prewitt&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faez8oighy7eou7vre8hh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Faez8oighy7eou7vre8hh.jpg" alt="Sobel kernel" width="800" height="138"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frbj2miwjff29z9ncr7u0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frbj2miwjff29z9ncr7u0.jpg" alt="Sobel edge detection" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Second order derivative - Laplacian operator
&lt;/h3&gt;

&lt;p&gt;Unlike the Sobel operator, the Laplacian uses only one kernel; nevertheless, it is more commonly combined with a Gaussian blur in order to reduce noise. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcxe17lw1caeqb1kar7hg.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fcxe17lw1caeqb1kar7hg.jpg" alt="Laplacian kernel" width="800" height="167"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj2dywdq8gx4x5719nou2.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj2dywdq8gx4x5719nou2.jpg" alt="Laplacian edge detection" width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Canny edge detection
&lt;/h3&gt;

&lt;p&gt;The Canny edge detector is probably the most widely used edge detection method. The reason it is superior to the methods mentioned so far is its use of &lt;a href="https://en.wikipedia.org/wiki/Canny_edge_detector#Non-maximum_suppression"&gt;“non-maximum suppression”&lt;/a&gt; to produce a one-pixel-thick edge. The detection consists of many steps; check them out &lt;a href="http://justin-liang.com/tutorials/canny/"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3wrbv4sv88e9bdmwjc2n.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F3wrbv4sv88e9bdmwjc2n.jpg" alt="Canny edge detection" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Real life application
&lt;/h1&gt;

&lt;p&gt;Edge detection is just a first step, necessary for further image analysis like shape detection or object identification. Apart from its most obvious use in machine vision (e.g. self-driving vehicles), it is also widely used in the analysis of medical images to help with diagnostics or even identify pathological objects like tumors. &lt;/p&gt;

&lt;p&gt;All in all, this was yet again a fun way to play around with OpenCV. Let me know what you think, and may the Python be with you.&lt;/p&gt;

</description>
      <category>python</category>
      <category>edgedetection</category>
      <category>opencv</category>
      <category>imageprocessing</category>
    </item>
    <item>
      <title>Discovering OpenCV using Python: Convolution</title>
      <dc:creator>Tina</dc:creator>
      <pubDate>Sun, 12 Jul 2020 20:28:44 +0000</pubDate>
      <link>https://dev.to/tinazhouhui/discovering-open-cv-using-python-2iak</link>
      <guid>https://dev.to/tinazhouhui/discovering-open-cv-using-python-2iak</guid>
      <description>&lt;p&gt;It has been three months since I started my Python development journey, and under the watchful eye of my snakey Jedi master (read: boyfriend working as a developer), I have started to explore the mighty &lt;a href="https://opencv.org/about/#:~:text=OpenCV%20(Open%20Source%20Computer%20Vision,perception%20in%20the%20commercial%20products.)"&gt;OpenCV library&lt;/a&gt;. I first heard about computer vision in connection with self-driving cars and how they identify objects; however, computer vision is much more powerful than just allowing computers to "see" the real world.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The goal of this lesson was to grasp the principle of convolution that acts as a building stone of most image processing functions.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As master Kenobi said, "the Force is what gives a Jedi his/her power" and naturally, as I am a smart padawan, I immediately grasped the meaning of these wise words in these technological times: use Python.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's first learn the basics of OpenCV
&lt;/h2&gt;

&lt;p&gt;My first interactions with OpenCV were quite harmonious: I explored some basic image manipulation functions using useful resources on the internet and of course the &lt;a href="https://docs.opencv.org/2.4/index.html"&gt;documentation&lt;/a&gt; itself (read an image, draw a line, change a color, blend two images...). After that, when I was deemed ready, it was time to get serious.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is convolution?
&lt;/h2&gt;

&lt;p&gt;Convolution is a mathematical way of combining two signals to form a third signal (&lt;a href="https://www.analog.com/media/en/technical-documentation/dsp-book/dsp_book_Ch6.pdf"&gt;Digital Signal Processing&lt;/a&gt;). To really understand this I-still-don't-get-it definition, I manually went through the whole process by implementing a simple 3x3 matrix.&lt;/p&gt;

&lt;p&gt;To put it in simple words, imagine a picture, which consists of many pixels. For simplicity, let's say the image is in gray-scale. The process starts with taking a pixel (which really is just a value between 0 and 255, 0 being black and 255 being white) and considering it as the "center". Now, take the convolution matrix (also called a kernel), align it to the center pixel and identify the center's local neighbors up to the size of the convolution matrix. Multiply this newly identified matrix with the convolution matrix (both are of the same size) and add up the products (for Excel geeks, SUMPRODUCT). Save the sumproduct as the value of the transformed pixel in the same location as the center pixel, but in a new image (not the original, as that would disrupt the values of the neighboring pixels). Doing so for every pixel of the original image is convolution (see image below).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsyzp8it6oxl6yut4ayjt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fsyzp8it6oxl6yut4ayjt.png" alt="Convolution explained" width="800" height="174"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="http://peltarion.com/knowledge-center/documentation/modeling-view/build-an-ai-model/blocks/2d-convolution-block"&gt;image source&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's get back to image processing
&lt;/h2&gt;

&lt;p&gt;If you have ever edited an image to increase blur or to sharpen, then you have experienced the practical use of convolution in image processing. The transformation depends on the values and shape of the convolution matrix and thanks to the smart people who are willing to share, there are many useful sources that already provide the required matrices for various transformations. Here are the results from my application of some matrices:&lt;/p&gt;

&lt;h3&gt;
  
  
  Image blurring
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8hyhj21w08flh71d4lb6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2F8hyhj21w08flh71d4lb6.jpg" alt="Blur using low filter and Gaussian blur 5x5" width="800" height="179"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://pxhere.com/en/photo/640846"&gt;original image source&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Image sharpening
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj6ohyl3tzub5s55kzaw0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fj6ohyl3tzub5s55kzaw0.jpg" alt="Image sharpening" width="800" height="283"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://cdn.vox-cdn.com/thumbor/UEgUJSxW4zcD8SfUCCK8YXRxwtg=/0x0:4987x3740/2120x1413/filters:focal(0x0:4987x3740):format(webp)/cdn.vox-cdn.com/uploads/chorus_image/image/45503430/453801468.0.0.jpg"&gt;original image source&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  High pass filter
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frjlthc9n7gi9pupj75zh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Frjlthc9n7gi9pupj75zh.jpg" alt="Alt Text" width="800" height="267"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://s3.amazonaws.com/cdn-origin-etr.akc.org/wp-content/uploads/2017/11/12231413/Labrador-Retriever-MP.jpg"&gt;original image source&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All work is of course available and documented on &lt;a href="https://github.com/tinazhouhui/computer_vision/blob/master/image_processing/convo.py"&gt;GitHub&lt;/a&gt;, so check it out!&lt;/p&gt;

&lt;p&gt;It is important to mention that convolution is not used only in image processing; it is a powerful method applied in various fields (mathematics, digital signal processing, audio processing, machine learning, ...). Though the explanation provided above is closely related to image processing, the principle behind it is the same for every application.&lt;/p&gt;

&lt;h2&gt;
  
  
  In conclusion
&lt;/h2&gt;

&lt;p&gt;For those who are also at the beginning of the journey, I wholeheartedly recommend getting your hands dirty by playing around with these matrices and exploring OpenCV. Fair word of warning: it is quite math-intensive, so brush up on your derivatives and matrix operations. May the Python be with you.&lt;/p&gt;

</description>
      <category>beginners</category>
      <category>python</category>
      <category>opencv</category>
      <category>imageprocessing</category>
    </item>
  </channel>
</rss>
