<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mohammad Quanit</title>
    <description>The latest articles on DEV Community by Mohammad Quanit (@mquanit).</description>
    <link>https://dev.to/mquanit</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F198325%2F44e873d9-b52a-43e0-8c05-c69f426c22cc.jpg</url>
      <title>DEV Community: Mohammad Quanit</title>
      <link>https://dev.to/mquanit</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mquanit"/>
    <language>en</language>
    <item>
      <title>Asteroids Game with Amazon Q: My Journey with AI-Assisted Game Development</title>
      <dc:creator>Mohammad Quanit</dc:creator>
      <pubDate>Wed, 28 May 2025 10:27:53 +0000</pubDate>
      <link>https://dev.to/aws-builders/asteroids-game-with-amazon-q-my-journey-with-ai-assisted-game-development-2pge</link>
      <guid>https://dev.to/aws-builders/asteroids-game-with-amazon-q-my-journey-with-ai-assisted-game-development-2pge</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Recently, I participated in the Amazon Q challenge by creating a classic Asteroids game using Python and Pygame. What made this project unique was leveraging Amazon Q's AI capabilities to assist with the development process.&lt;/p&gt;

&lt;p&gt;For the organizers and anyone who wants to play this game:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the GitHub &lt;a href="https://github.com/mohammad-quanit/aws-q-game-challenge" rel="noopener noreferrer"&gt;repository&lt;/a&gt; containing the game codebase&lt;/li&gt;
&lt;li&gt;Change into the &lt;code&gt;aws-q-game-challenge&lt;/code&gt; folder&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;source venv/bin/activate&lt;/code&gt; in bash&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;python asteroids.py&lt;/code&gt; in bash.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy9iwrhqn8pd9a6noyto.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy9iwrhqn8pd9a6noyto.png" alt="Game Screenshot" width="800" height="627"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this blog post, I'll share how I set up the Amazon Q CLI, the prompts I used to create the game, and how I enhanced it with custom assets.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Amazon Q CLI
&lt;/h2&gt;

&lt;p&gt;Before diving into game development, I needed to set up Amazon Q CLI on my macOS system. Here's how I did it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Download the Amazon Q CLI (if not already installed) from this link: &lt;a href="https://desktop-release.q.us-east-1.amazonaws.com/latest/Amazon%20Q.dmg" rel="noopener noreferrer"&gt;Amazon Q CLI for macOS&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start a chat session:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;q chat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With Amazon Q CLI set up, I was ready to build my game.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Development Process
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Initial Game Creation
&lt;/h3&gt;

&lt;p&gt;I started by asking Amazon Q to help me create a basic Asteroids game. My first prompt was:&lt;/p&gt;

&lt;p&gt;"I want to create a game using pygame, named asteroid_game. Can you help me with the basic structure?"&lt;/p&gt;

&lt;p&gt;Amazon Q provided me with a complete initial implementation that included:&lt;br&gt;
• Game window setup&lt;br&gt;
• Player spaceship controls&lt;br&gt;
• Asteroid generation&lt;br&gt;
• Collision detection&lt;br&gt;
• Scoring system&lt;br&gt;
• Game loop structure&lt;/p&gt;

&lt;p&gt;The initial version used simple geometric shapes for the spaceship and asteroids, which worked well but lacked visual appeal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhancing with Real Images
&lt;/h3&gt;

&lt;p&gt;To make the game more visually appealing, I asked Amazon Q:&lt;/p&gt;

&lt;p&gt;"I've created a game using pygame, named asteroid_game. I want to use real stone images and a real spaceship for shooting."&lt;/p&gt;

&lt;p&gt;Amazon Q then:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Created an assets directory structure&lt;/li&gt;
&lt;li&gt;Modified the code to load and use external images&lt;/li&gt;
&lt;li&gt;Added fallback mechanisms if images weren't available&lt;/li&gt;
&lt;li&gt;Provided guidance on where to find free game assets&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Customizing the Spaceship
&lt;/h3&gt;

&lt;p&gt;I wanted to make the spaceship more prominent, so I used this prompt:&lt;/p&gt;

&lt;p&gt;"Make the spaceship a bit bigger and move it to more bottom side of the game screen."&lt;/p&gt;

&lt;p&gt;Amazon Q adjusted the code to:&lt;br&gt;
• Increase the spaceship size from 40x50 to 60x75 pixels&lt;br&gt;
• Position the ship closer to the bottom of the screen&lt;br&gt;
• Adjust the bullet spawn position to match the new ship size&lt;/p&gt;

&lt;h2&gt;
  
  
  Game Assets
&lt;/h2&gt;

&lt;p&gt;For the visual elements of the game, I used the following assets:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Spaceship: A top-down view spaceship image (spaceship.png) placed in the assets folder&lt;/li&gt;
&lt;li&gt;Asteroids: Three different asteroid images for different sizes:
• asteroid_large.png
• asteroid_medium.png
• asteroid_small.png&lt;/li&gt;
&lt;li&gt;Sound Effects: A laser.wav file for the shooting sound&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I sourced these assets from free game asset websites like:&lt;br&gt;
• OpenGameArt.org&lt;br&gt;
• Kenney.nl (which has excellent space game assets)&lt;br&gt;
• Itch.io (free assets section)&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features of the Game
&lt;/h2&gt;

&lt;p&gt;The final game includes several features that make it engaging:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Progressive Difficulty: As your score increases, the game levels up, and asteroids appear more frequently&lt;/li&gt;
&lt;li&gt;Asteroid Splitting: When shot, larger asteroids split into smaller ones&lt;/li&gt;
&lt;li&gt;Lives System: Players have multiple lives before game over&lt;/li&gt;
&lt;li&gt;Invulnerability Period: Brief invulnerability after being hit&lt;/li&gt;
&lt;li&gt;Visual Effects: Stars moving in the background for a space atmosphere&lt;/li&gt;
&lt;li&gt;Score Tracking: Points awarded based on asteroid size&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;Working with Amazon Q to develop this game taught me several valuable lessons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AI-Assisted Development: Amazon Q can significantly speed up development by providing complete, working code examples.&lt;/li&gt;
&lt;li&gt;Iterative Improvement: Starting with a basic version and enhancing it step by step worked well.&lt;/li&gt;
&lt;li&gt;Asset Integration: Adding real images greatly improved the visual appeal with minimal code changes.&lt;/li&gt;
&lt;li&gt;Prompt Engineering: Being specific in my requests to Amazon Q yielded better results.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Creating an Asteroids game with Amazon Q was a fascinating experience that demonstrated how AI can assist in game development. A combination of Amazon Q's code generation capabilities and my creative direction resulted in a fun, playable game that I'm proud to submit for the challenge.&lt;/p&gt;

&lt;p&gt;The most impressive aspect was how quickly I could iterate on the game design, from basic shapes to a visually appealing game with real assets, all through conversational prompts with Amazon Q.&lt;/p&gt;

&lt;p&gt;If you're interested in game development but find coding challenging, or if you're an experienced developer looking to speed up your workflow, I highly recommend giving Amazon Q a try. It's like having a knowledgeable programming partner who's always ready to help.&lt;/p&gt;

&lt;p&gt;This project demonstrates the potential of AI-assisted development tools like Amazon Q to democratize game creation and make coding more accessible to everyone.&lt;/p&gt;

</description>
      <category>awschallenge</category>
      <category>ai</category>
      <category>llm</category>
      <category>gamedev</category>
    </item>
    <item>
      <title>Email Verifier using Go</title>
      <dc:creator>Mohammad Quanit</dc:creator>
      <pubDate>Sun, 12 Jan 2025 08:23:20 +0000</pubDate>
      <link>https://dev.to/mquanit/email-verifier-using-go-56g5</link>
      <guid>https://dev.to/mquanit/email-verifier-using-go-56g5</guid>
      <description>&lt;p&gt;Hello everyone, it's been quite some time since I wrote a tech blog, so I thought I should share something that I've done in my company. There was a requirement where I had to do some verification checks on email and I was using Go on that project, so in this blog, I'll be sharing how I did that and also for you guys to know how email verification and its internals work.&lt;br&gt;
I am sharing here it as a mini project so you can follow what I am doing and share feedback if you like.&lt;/p&gt;

&lt;p&gt;Building an email verifier tool in Go involves several components and considerations. I won't dive deep into all of them, but I'll cover enough to give you an understanding of the email verification process.&lt;/p&gt;

&lt;p&gt;The first thing we need to verify is the domain we get as input, e.g. &lt;code&gt;google.com&lt;/code&gt;.&lt;/p&gt;
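In practice the input is often a full email address rather than a bare domain. As a minimal sketch of that prior step (the helper name extractDomain is my own, not part of the article's code), the domain can be pulled out and lightly sanity-checked like this:

```go
package main

import (
	"fmt"
	"strings"
)

// extractDomain pulls the domain part out of an email address.
// It returns an empty string when the input does not look like
// a plausible "local@domain" pair. (Hypothetical helper; the
// article itself reads a bare domain from stdin.)
func extractDomain(email string) string {
	at := strings.LastIndex(email, "@")
	if at == -1 || at == 0 || at == len(email)-1 {
		return ""
	}
	domain := email[at+1:]
	// Require at least one dot so bare hostnames are rejected.
	if !strings.Contains(domain, ".") {
		return ""
	}
	return strings.ToLower(domain)
}

func main() {
	fmt.Println(extractDomain("user@google.com"))
	fmt.Println(extractDomain("not-an-email"))
}
```

The lowercased result can then be fed straight into the domain checks below.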

&lt;p&gt;To start the project, run the following in your terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;1. go mod init github.com/username/email-verifier

2. &lt;span class="nb"&gt;touch &lt;/span&gt;main.go
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Starting the implementation in our &lt;code&gt;main.go&lt;/code&gt; file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"bufio"&lt;/span&gt;
    &lt;span class="s"&gt;"log"&lt;/span&gt;
    &lt;span class="s"&gt;"net"&lt;/span&gt;
    &lt;span class="s"&gt;"os"&lt;/span&gt;
    &lt;span class="s"&gt;"strings"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;scanner&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;bufio&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewScanner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stdin&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;scanner&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Scan&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;verifyDomain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scanner&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;scanner&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error: could not read from input %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the snippet above, &lt;code&gt;bufio.NewScanner&lt;/code&gt; reads the domain from the terminal as user input. The &lt;code&gt;for&lt;/code&gt; loop keeps scanning for the next line of input, invoking &lt;code&gt;verifyDomain&lt;/code&gt; on each one, and any scanner error is logged to the console. &lt;/p&gt;

&lt;p&gt;Now, the fun part happens in the &lt;code&gt;verifyDomain&lt;/code&gt; function, where we run DNS lookups against the given domain.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;verifyDomain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;domain&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;hasMX&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hasSPF&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hasDMARC&lt;/span&gt; &lt;span class="kt"&gt;bool&lt;/span&gt;
   &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;spfRecord&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dmarcRecord&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To verify an email domain, we need to check several components. Those components are DNS records and protocols used to manage and secure email delivery. Here's a breakdown:&lt;/p&gt;

&lt;h2&gt;
  
  
  MX Record
&lt;/h2&gt;

&lt;p&gt;An MX (mail exchange) record specifies the mail servers responsible for receiving emails on behalf of a domain. When someone sends an email to &lt;code&gt;user@example.com&lt;/code&gt;, the sender's mail server queries DNS for the MX records of example.com.&lt;/p&gt;

&lt;p&gt;Here's how to look up MX records in Go:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;      &lt;span class="c"&gt;// MX record&lt;/span&gt;
    &lt;span class="n"&gt;mxRecords&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LookupMX&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error: could not find MX record for %s due to %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mxRecords&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;hasMX&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  SPF
&lt;/h2&gt;

&lt;p&gt;SPF (Sender Policy Framework) is an email authentication method that specifies which mail servers are authorized to send emails on behalf of a domain. When an email is received, the recipient's server checks the sender's IP against the domain's SPF record to verify the email is legitimate. Checking SPF can help identify spoofed emails.&lt;/p&gt;

&lt;p&gt;SPF example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;v&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;spf1 ip4:192.0.2.0/24 include:_spf.google.com &lt;span class="nt"&gt;-all&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;v=spf1&lt;/code&gt;: Indicates the SPF version.&lt;br&gt;
&lt;code&gt;ip4:192.0.2.0/24&lt;/code&gt;: Specifies allowed IP ranges.&lt;br&gt;
&lt;code&gt;include:_spf.google.com&lt;/code&gt;: Includes Google's SPF records.&lt;br&gt;
&lt;code&gt;-all&lt;/code&gt;: Rejects emails from unauthorized sources.&lt;/p&gt;
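To make the record's structure concrete, here is a minimal Go sketch (my own illustration, not the article's code) that splits an SPF record into its mechanisms after checking the version tag. Real SPF evaluation handles macros, modifiers, and recursive includes, which this deliberately skips:

```go
package main

import (
	"fmt"
	"strings"
)

// spfMechanisms splits a raw SPF TXT record into its mechanisms,
// dropping the leading version tag. Returns nil when the record
// does not start with "v=spf1".
func spfMechanisms(record string) []string {
	fields := strings.Fields(record)
	if len(fields) == 0 || fields[0] != "v=spf1" {
		return nil
	}
	return fields[1:]
}

func main() {
	rec := "v=spf1 ip4:192.0.2.0/24 include:_spf.google.com -all"
	// Print each mechanism on its own line.
	for _, m := range spfMechanisms(rec) {
		fmt.Println(m)
	}
}
```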

&lt;p&gt;Here's how to look up the SPF record in Go (note that SPF is published as a TXT record on the domain itself):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt; &lt;span class="c"&gt;// SPF record&lt;/span&gt;
    &lt;span class="n"&gt;txtRecords&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LookupTXT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"spf."&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error: could not find SPF record for %s due to %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;txtRecords&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;strings&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HasPrefix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"v=spf1"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;hasSPF&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
            &lt;span class="n"&gt;spfRecord&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  DMARC
&lt;/h2&gt;

&lt;p&gt;DMARC (Domain-based Message Authentication, Reporting, and Conformance) builds on SPF and DKIM (DomainKeys Identified Mail) to provide additional email authentication and reporting capabilities. It specifies how to handle emails that fail SPF or DKIM checks (e.g., reject, quarantine, or do nothing). The domain owner publishes a DMARC policy as a DNS TXT record. DMARC helps assess the security posture of a domain.&lt;/p&gt;

&lt;p&gt;DMARC example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;v&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;DMARC1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;reject&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;rua&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mailto:dmarc-reports@example.com&lt;span class="p"&gt;;&lt;/span&gt;  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;v=DMARC1&lt;/code&gt;: Indicates the DMARC version.&lt;br&gt;
&lt;code&gt;p=reject&lt;/code&gt;: Rejects emails that fail authentication.&lt;br&gt;
&lt;code&gt;rua&lt;/code&gt;: Email address for aggregate reports.&lt;/p&gt;

&lt;p&gt;Here's how to look up the DMARC record in Go:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// DMARC record&lt;/span&gt;
    &lt;span class="n"&gt;dmarcRecords&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LookupTXT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"_dmarc."&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error: could not find DMARC record for %s due to %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;dmarcRecords&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;strings&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HasPrefix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"v=DMARC1"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;hasDMARC&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
            &lt;span class="n"&gt;dmarcRecord&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is the complete code for verifying email in Go.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"bufio"&lt;/span&gt;
    &lt;span class="s"&gt;"log"&lt;/span&gt;
    &lt;span class="s"&gt;"net"&lt;/span&gt;
    &lt;span class="s"&gt;"os"&lt;/span&gt;
    &lt;span class="s"&gt;"strings"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;verifyDomain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;domain&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;hasMX&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hasSPF&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hasDMARC&lt;/span&gt; &lt;span class="kt"&gt;bool&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;spfRecord&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dmarcRecord&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;

    &lt;span class="c"&gt;// MX record&lt;/span&gt;
    &lt;span class="n"&gt;mxRecords&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LookupMX&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error: could not find MX record for %s due to %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nb"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mxRecords&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;hasMX&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c"&gt;// SPF record&lt;/span&gt;
    &lt;span class="n"&gt;txtRecords&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LookupTXT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"spf."&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error: could not find SPF record for %s due to %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;txtRecords&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;strings&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HasPrefix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"v=spf1"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;hasSPF&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
            &lt;span class="n"&gt;spfRecord&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c"&gt;// DMARC record&lt;/span&gt;
    &lt;span class="n"&gt;dmarcRecords&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LookupTXT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"_dmarc."&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error: could not find DMARC record for %s due to %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;dmarcRecords&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;strings&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;HasPrefix&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;record&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"v=DMARC1"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;hasDMARC&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
            &lt;span class="n"&gt;dmarcRecord&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;record&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Printf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Domain: %v,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt; MX: %v,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt; SPF:  %v,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt; DMARC:  %v,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt; SPF Rec: %v,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt; DMARC Rec %v,&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;domain&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hasMX&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hasSPF&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hasDMARC&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;spfRecord&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dmarcRecord&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;scanner&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;bufio&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewScanner&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stdin&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;scanner&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Scan&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;verifyDomain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scanner&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Text&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;scanner&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Fatalf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Error: could not read from input %v&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you run &lt;code&gt;go run main.go&lt;/code&gt; in the terminal and provide a domain name, e.g. &lt;code&gt;google.com&lt;/code&gt; or &lt;code&gt;resend.com&lt;/code&gt;, the program looks up the records using the DNS functions from Go's standard &lt;code&gt;net&lt;/code&gt; package and prints the results.&lt;/p&gt;
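&lt;p&gt;The SPF and DMARC checks above boil down to prefix matching on TXT records. A minimal standalone sketch of that logic (the helper names are mine, not from the article's code):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// isSPF reports whether a TXT record is an SPF policy record.
func isSPF(record string) bool {
	return strings.HasPrefix(record, "v=spf1")
}

// isDMARC reports whether a TXT record is a DMARC policy record.
func isDMARC(record string) bool {
	return strings.HasPrefix(record, "v=DMARC1")
}

// firstDMARC returns the first DMARC record in a slice of TXT records,
// mirroring the loop in verifyDomain above.
func firstDMARC(records []string) (string, bool) {
	for _, r := range records {
		if isDMARC(r) {
			return r, true
		}
	}
	return "", false
}

func main() {
	records := []string{"some-verification=abc", "v=DMARC1; p=reject"}
	rec, ok := firstDMARC(records)
	fmt.Println(ok, rec) // prints: true v=DMARC1; p=reject
}
```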

&lt;p&gt;Example Response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go run main.go
google.com
2025/01/12 13:15:26 Error: could not find SPF record &lt;span class="k"&gt;for &lt;/span&gt;google.com due to lookup spf.google.com on 192.168.1.1:53: no such host
2025/01/12 13:15:26 Domain: google.com,
 MX: &lt;span class="nb"&gt;true&lt;/span&gt;,
 SPF:  &lt;span class="nb"&gt;false&lt;/span&gt;,
 DMARC:  &lt;span class="nb"&gt;true&lt;/span&gt;,
 SPF Rec: ,
 DMARC Rec &lt;span class="nv"&gt;v&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;DMARC1&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;p&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;reject&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nv"&gt;rua&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mailto:mailauth-reports@google.com,
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you liked this article, please share it with your Gopher friends and follow me on &lt;a href="https://www.linkedin.com/in/mquanit/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;, &lt;a href="https://github.com/Mohammad-Quanit" rel="noopener noreferrer"&gt;Github&lt;/a&gt;, and &lt;a href="https://x.com/mquanit" rel="noopener noreferrer"&gt;Twitter/X&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Peace ✌🏻&lt;/p&gt;

</description>
      <category>go</category>
      <category>dns</category>
      <category>google</category>
      <category>backenddevelopment</category>
    </item>
    <item>
      <title>AWS Local Zones: Enabling Low Latency Infrastructure Workloads</title>
      <dc:creator>Mohammad Quanit</dc:creator>
      <pubDate>Sat, 20 Jul 2024 13:15:36 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-local-zones-enabling-low-latency-infrastructure-workloads-3g9</link>
      <guid>https://dev.to/aws-builders/aws-local-zones-enabling-low-latency-infrastructure-workloads-3g9</guid>
      <description>&lt;p&gt;Many cloud customers have applications that require single-digit millisecond latency for end users, such as real-time gaming, financial trading platforms, healthcare diagnostics, live media broadcasting, and AR/VR. When an AWS Region isn't close enough to meet these latency requirements, Customers must provision their infrastructure and use different APIs and tools to build their applications for low latency. &lt;/p&gt;

&lt;p&gt;That is where &lt;a href="https://aws.amazon.com/about-aws/global-infrastructure/localzones/" rel="noopener noreferrer"&gt;AWS Local Zones&lt;/a&gt; comes into play. This article explores the concept of AWS Local Zones, their benefits, and real-world use cases that demonstrate their value in modern cloud architecture.&lt;/p&gt;




&lt;h2&gt;
  
  
  AWS Local Zone
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/about-aws/global-infrastructure/localzones/" rel="noopener noreferrer"&gt;AWS Local Zones&lt;/a&gt; are a specialized infrastructure deployment option offered by Amazon Web Services (AWS). These zones are strategically located close to end users, allowing for ultra-low latency and the ability to support workloads that require rapid response times. By placing core services such as compute, storage, networking, and other selected services in these geographically dispersed locations, AWS enables customers to deploy applications that demand single-digit millisecond latencies for seamless user experiences, as well as to support on-premises data centers where proximity is crucial.&lt;/p&gt;

&lt;p&gt;You might be wondering how Local Zones fit in with AWS Regions and Availability Zones. AWS Local Zones extend an AWS Region (the parent region) and are located close to large population, industry, and IT centers. Each Local Zone connects to its parent region over Amazon’s redundant, high-bandwidth network, giving applications running in the Local Zone access to the rest of AWS services.&lt;/p&gt;

&lt;p&gt;Below is the Local Zone Workload visualization from official AWS documentation.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feavpfipab2iwckdiqntf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feavpfipab2iwckdiqntf.png" alt="AWS Local Zone Workload" width="800" height="290"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  How Local Zones Work
&lt;/h2&gt;

&lt;p&gt;A Local Zone is an extension of an AWS Region that is located close to your users geographically. Local Zones have their own internet connections and support AWS Direct Connect. This allows resources created in a Local Zone to serve applications that require low latency.&lt;/p&gt;

&lt;p&gt;To use a Local Zone, you must first enable it. Next, you create a subnet in the Local Zone. Finally, you launch resources in the Local Zone subnet. For more detailed instructions, see Getting Started with AWS Local Zones.&lt;/p&gt;

&lt;p&gt;The following diagram illustrates an account with a VPC in the AWS Region &lt;code&gt;us-west-2&lt;/code&gt; that is extended to the Local Zone &lt;code&gt;us-west-2-lax-1&lt;/code&gt;. Each zone in the VPC has one subnet, and each subnet has one EC2 instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4vqxtvr8yhwgrsjib6g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4vqxtvr8yhwgrsjib6g.png" alt="Local Zone VPC" width="551" height="321"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Benefits of AWS Local Zones
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Low Latency&lt;/strong&gt;: By bringing AWS services closer to the user, Local Zones reduce latency, providing faster response times and improved user experience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hybrid Cloud Flexibility&lt;/strong&gt;: Local Zones support hybrid cloud architectures, enabling seamless integration with on-premises infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data Residency Compliance&lt;/strong&gt;: Helps meet local data residency requirements by keeping data within specific geographic boundaries.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Disaster Recovery&lt;/strong&gt;: Provides additional options for disaster recovery and business continuity planning.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  AWS Local Zones Use-Cases
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Media Production and Live Broadcasting&lt;/strong&gt;&lt;br&gt;
A media company can use AWS Local Zones to host live video streaming applications. By processing live video streams in Local Zones, the company can achieve lower latency, ensuring high-quality, real-time broadcasting for viewers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-Time Multiplayer Gaming&lt;/strong&gt;&lt;br&gt;
A gaming company can deploy game servers in Local Zones to provide low-latency connections for players in specific geographic regions. This ensures a smooth and responsive gaming experience, crucial for maintaining player engagement and satisfaction.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Healthcare Imaging and Diagnostics&lt;/strong&gt;&lt;br&gt;
A healthcare provider can use Local Zones to process medical images (e.g., X-rays, MRIs) close to the point of generation. This reduces the time required for analysis and enables faster diagnosis, enhancing patient care.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Financial Trading Platforms&lt;/strong&gt;&lt;br&gt;
A financial services firm can deploy trading applications in Local Zones to minimize latency in transaction processing. This ensures faster trade execution and helps the firm meet regulatory requirements for data residency and security.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Augmented and Virtual Reality (AR/VR)&lt;/strong&gt;&lt;br&gt;
An AR/VR company can use Local Zones to host the backend infrastructure for immersive experiences. Low-latency processing is critical for AR/VR applications to ensure a seamless and responsive user experience.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Getting Started with Local Zone
&lt;/h2&gt;

&lt;p&gt;To get started with AWS Local Zones, you must first enable a Local Zone through the Amazon EC2 console or the AWS CLI. Next, create a subnet in a VPC in the parent Region, specifying the Local Zone when you create it. Finally, create AWS resources in the Local Zone subnet.&lt;/p&gt;

&lt;p&gt;Local Zone from EC2 in my AWS console&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh060crkv7szdqr1j3ndi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh060crkv7szdqr1j3ndi.png" alt="Local Zone in Console" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here I am enabling a Local Zone from my region &lt;code&gt;us-east-1&lt;/code&gt;; it will be in US East (Atlanta).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzowsruoz2e4uwwj08oh2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzowsruoz2e4uwwj08oh2.png" alt="Enable Local Zone" width="544" height="409"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltibtq3guminu4earueb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltibtq3guminu4earueb.png" alt="Local Zone in us-east-atlanta" width="800" height="93"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now I am going to set up a subnet in the Local Zone. When you add a subnet, you specify a range of IP addresses for it (and, optionally, a separate IPv6 range if your network uses one), and you choose which Local Zone the subnet belongs to. You can have several subnets in the same Local Zone if you need to.&lt;/p&gt;

&lt;p&gt;Below are the details for the subnet in our Local Zone. Note that I set the &lt;u&gt;Availability Zone&lt;/u&gt; to our newly enabled &lt;u&gt;Atlanta Local Zone&lt;/u&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3bovz3b4gpjo3fvdak9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3bovz3b4gpjo3fvdak9.png" alt="Subnet Creation in Local Zone" width="552" height="826"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After creating the subnet, create a resource in it. I'm going to deploy an EC2 instance into the Local Zone within this newly created subnet, named &lt;code&gt;lz-subnet&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;I am not going to show you how to create an EC2 instance, as that is outside the scope of this article, but make sure to edit the &lt;strong&gt;Network settings&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Select your VPC.&lt;/li&gt;
&lt;li&gt;Select your Local Zone subnet.&lt;/li&gt;
&lt;li&gt;Enable or disable Auto-assign public IP.&lt;/li&gt;
&lt;li&gt;Create a security group or select an existing one.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;After launching the instance, you should see the following configuration for your EC2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vq1uz5uz06xzjk2fl5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0vq1uz5uz06xzjk2fl5x.png" alt="EC2 Instance on Local Zone Subnet" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Notice that I chose the &lt;code&gt;c6i.large&lt;/code&gt; instance type because the Atlanta Local Zone doesn't support t2-series instances. Read &lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/02/aws-new-local-zone-atlanta/" rel="noopener noreferrer"&gt;here&lt;/a&gt; about the instance types the Atlanta Local Zone supports.&lt;/p&gt;




&lt;h2&gt;
  
  
  Clean up
&lt;/h2&gt;

&lt;p&gt;When you are finished experimenting with the Local Zone, delete the resources in it, then contact AWS Support to disable the zone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;AWS Local Zones provide specialized infrastructure that brings AWS services closer to end users, enabling low-latency and high-performance workloads. By leveraging Local Zones, businesses can enhance their applications' performance, comply with data residency requirements, and improve the overall user experience. Whether it's media production, gaming, healthcare, financial services, or AR/VR, AWS Local Zones offer versatile solutions for various industry needs.&lt;/p&gt;

&lt;p&gt;If you learned something new, share this blog with your AWS fellas and network, and follow me on my socials:&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/mquanit/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;, &lt;a href="https://github.com/Mohammad-Quanit" rel="noopener noreferrer"&gt;Github&lt;/a&gt;, &lt;a href="https://x.com/mquanit" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Peace ✌🏻&lt;/p&gt;

</description>
      <category>aws</category>
      <category>infrastructure</category>
      <category>localzone</category>
      <category>latency</category>
    </item>
    <item>
      <title>Trace &amp; Observe Modern Apps using AWS X-Ray</title>
      <dc:creator>Mohammad Quanit</dc:creator>
      <pubDate>Fri, 08 Mar 2024 10:44:12 +0000</pubDate>
      <link>https://dev.to/aws-builders/trace-observe-modern-apps-using-aws-x-ray-2bbl</link>
      <guid>https://dev.to/aws-builders/trace-observe-modern-apps-using-aws-x-ray-2bbl</guid>
      <description>&lt;p&gt;Hi fellas, In this blog I am going to share my experience using one of the coolest AWS services named &lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html" rel="noopener noreferrer"&gt;AWS X-Ray&lt;/a&gt;. &lt;em&gt;AWS X-Ray&lt;/em&gt; is a fully managed monitoring and observability service that helps you collect data about requests that the application serves. It provides tools that enable you to filter, view, and gain insights into the collected data, helping you identify optimization opportunities and issues. &lt;/p&gt;

&lt;p&gt;AWS X-Ray helps you figure out what's going on across all your systems. It lets you see how requests are routed through different service touchpoints and gives you a good idea of how your applications are performing. You can use it to monitor performance, identify bottlenecks, and troubleshoot errors, and it allows you to visualize complex, detailed service relationships within highly distributed applications, tracing message pathways and call stacks at any scale.&lt;/p&gt;

&lt;p&gt;Below is the AWS X-Ray workflow image from AWS's official blog.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55y9ufu0n4efvkgw4cr1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55y9ufu0n4efvkgw4cr1.png" alt="AWS-XRay flow" width="800" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS X-Ray has been designed to work seamlessly with distributed systems. Over the last decade or two, as complex distributed systems have emerged, debugging has changed and has taken on a new meaning. Engineers can now analyze and debug applications, audit their applications securely, and compile data from AWS resources to determine bottlenecks in cloud architecture and improve application performance.&lt;/p&gt;

&lt;p&gt;Now let's see the implementation of AWS X-Ray with a Node.js application. AWS X-Ray provides an SDK that can be imported and utilized within your application.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;express&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;express&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;express&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;serviceName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;HELLO-MICROSERVICE&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Require AWS X-Ray SDK&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;AWSXRay&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws-xray-sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="nx"&gt;AWSXRay&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;captureHTTPsGlobal&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;http&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="c1"&gt;// Use AWS X-Ray middleware&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;AWSXRay&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;openSegment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;MyApp&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/hello&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;seg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;xray&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getSegment&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="nx"&gt;seg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addAnnotation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hello-microservice&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;serviceName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;seg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addMetadata&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Request Meta&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;req&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nx"&gt;res&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello AWS X-Ray!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Close the X-Ray segment for the current request&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;AWSXRay&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;express&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;closeSegment&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`App listening at http://localhost:&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;port&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure to install &lt;code&gt;aws-xray-sdk&lt;/code&gt; first when setting up instrumentation in a Node.js app.&lt;/p&gt;

&lt;h2&gt;
  
  
  X-Ray Daemon
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/aws/aws-xray-daemon" rel="noopener noreferrer"&gt;AWS X-Ray daemon&lt;/a&gt; is a core software application that listens for traffic on UDP port 2000. It gathers raw segment data and relays it to the AWS X-Ray API. The daemon needs to work in conjunction with the AWS X-Ray SDKs and must be running so that the data sent by the SDKs can reach the X-Ray service. The X-Ray daemon is an open-source project that you can follow on GitHub. See more details of the AWS X-Ray daemon &lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
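&lt;p&gt;The wire format is simple: each UDP message sent to port 2000 is a one-line JSON header, a newline, then the segment document. A small Go sketch of that framing (the helper name is mine; the header literal follows the daemon documentation):&lt;/p&gt;

```go
package main

import "fmt"

// daemonMessage frames a raw segment JSON document the way the X-Ray
// SDKs do before sending it to the daemon over UDP port 2000: a small
// JSON header, a newline, then the segment itself.
func daemonMessage(segmentJSON string) []byte {
	header := `{"format": "json", "version": 1}`
	return []byte(header + "\n" + segmentJSON)
}

func main() {
	msg := daemonMessage(`{"name":"HELLO-MICROSERVICE"}`)
	fmt.Printf("%s\n", msg)
	// To actually deliver it you would net.Dial("udp", "127.0.0.1:2000")
	// and Write(msg); the daemon batches segments and relays them to
	// the X-Ray API.
}
```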

&lt;h2&gt;
  
  
  Segments
&lt;/h2&gt;

&lt;p&gt;AWS X-Ray receives service data in &lt;strong&gt;segments&lt;/strong&gt;. X-Ray then groups segments that share a common request into traces. The resources running your application logic send data about their work as segments. Technically, a segment is an object that contains metadata about the request, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The host - a hostname, alias, or IP address&lt;/li&gt;
&lt;li&gt;The request – method, client address, path, user agent&lt;/li&gt;
&lt;li&gt;The response – status, content&lt;/li&gt;
&lt;li&gt;The work done – start and end times, subsegments&lt;/li&gt;
&lt;li&gt;Issues that occur – errors, faults, and exceptions, including automatic capture of exception stacks.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhq381i9txjd3axe37k1t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhq381i9txjd3axe37k1t.png" alt="xray-segments" width="800" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Subsegments
&lt;/h2&gt;

&lt;p&gt;To better monitor the work done by your application, you can use subsegments to break down the data into smaller pieces. Subsegments provide detailed timing information about downstream calls made by your application to complete the original request. They also contain additional details about calls to external services, such as AWS, HTTP APIs, or SQL databases. Furthermore, you can define custom subsegments to instrument-specific functions or lines of code within your application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxdqoy7uqf4k692w30oa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxdqoy7uqf4k692w30oa.png" alt="xray-subsegments" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Traces
&lt;/h2&gt;

&lt;p&gt;AWS X-Ray can help you trace requests as they move across various services. It captures important information about the path, duration, and performance of each request. A unique ID, called a &lt;strong&gt;Trace ID&lt;/strong&gt;, is used to track the path of a request through your application. A trace is essentially a collection of all the segments generated by a single request. This request is usually an HTTP GET or POST request that passes through a load balancer, hits your application code, and generates downstream calls to other AWS services or external web APIs.&lt;/p&gt;

&lt;p&gt;A trace ID and a sampling decision are added to HTTP requests in a tracing header named &lt;code&gt;X-Amzn-Trace-Id&lt;/code&gt;. The first X-Ray-integrated service that receives the request adds the tracing header, which is then read by the X-Ray SDK and included in the response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;X-Amzn-Trace-Id: Root=1-5759e988-bd862e3fe1be46a994272793;Sampled=1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
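&lt;p&gt;Pulling the pieces out of that header value is a matter of splitting on &lt;code&gt;;&lt;/code&gt; and &lt;code&gt;=&lt;/code&gt;. A minimal Go sketch (the parser and its name are mine):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
)

// parseTraceHeader splits an X-Amzn-Trace-Id value such as
// "Root=1-5759e988-bd862e3fe1be46a994272793;Sampled=1"
// into its key/value fields.
func parseTraceHeader(value string) map[string]string {
	fields := make(map[string]string)
	for _, part := range strings.Split(value, ";") {
		k, v, ok := strings.Cut(strings.TrimSpace(part), "=")
		if ok {
			fields[k] = v
		}
	}
	return fields
}

func main() {
	h := parseTraceHeader("Root=1-5759e988-bd862e3fe1be46a994272793;Sampled=1")
	fmt.Println(h["Root"], h["Sampled"]) // prints: 1-5759e988-bd862e3fe1be46a994272793 1
}
```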



&lt;h2&gt;
  
  
  Service Graph
&lt;/h2&gt;

&lt;p&gt;AWS X-Ray uses the data your app's services send to build a graph of all the resources and services your app consists of. This graph is a JSON document that contains important information about your app's components. Using it, X-Ray renders a visual map of your app's pieces, which helps you see how everything works together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faijsec6htfaoff7v9lqj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faijsec6htfaoff7v9lqj.png" alt="service-graph" width="700" height="691"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  X-Ray Cost
&lt;/h2&gt;

&lt;p&gt;AWS X-Ray has no upfront fees or commitments; you only pay for what you use, based on the number of traces recorded, retrieved, and scanned. Under the free tier:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first 100,000 traces recorded each month are free.&lt;/li&gt;
&lt;li&gt;The first 1,000,000 traces retrieved or scanned each month are free.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Beyond the free tier, X-Ray charges as follows, which I think is quite affordable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Traces recorded cost $5.00 per 1 million traces recorded ($0.000005 per trace)&lt;/li&gt;
&lt;li&gt;Traces retrieved cost $0.50 per 1 million traces retrieved ($0.0000005 per trace).&lt;/li&gt;
&lt;li&gt;Traces scanned cost $0.50 per 1 million traces scanned ($0.0000005 per trace).&lt;/li&gt;
&lt;li&gt;X-Ray Insights traces stored cost $1.00 per 1 million traces recorded ($0.000001 per trace).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;AWS X-Ray enables customers to choose their sampling rate. Customers considering AWS X-Ray may want to estimate their costs for recorded traces by multiplying their request or API call rate by the chosen sampling rate. Read more &lt;a href="https://aws.amazon.com/xray/pricing/" rel="noopener noreferrer"&gt;here&lt;/a&gt; about the specific regions and their costs.&lt;/p&gt;
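&lt;p&gt;As a back-of-the-envelope helper, here is a small Go sketch (using the per-trace prices listed above) that estimates the monthly cost of recorded traces from a request volume and a sampling rate:&lt;/p&gt;

```go
package main

import "fmt"

// Pricing figures from the article, in USD per trace; the free tier covers
// the first 100,000 traces recorded per month.
const (
	recordedPrice = 0.000005 // $5.00 per 1 million traces recorded
	freeRecorded  = 100000
)

// estimateRecordedCost estimates the monthly cost of recorded traces for a
// given request volume and sampling rate, ignoring retrieval/scan charges.
func estimateRecordedCost(requestsPerMonth, samplingRate float64) float64 {
	recorded := requestsPerMonth * samplingRate
	billable := 0.0
	if recorded > freeRecorded {
		billable = recorded - freeRecorded
	}
	return billable * recordedPrice
}

func main() {
	// 10 million requests/month sampled at 5% -> 500k traces, 400k billable.
	fmt.Printf("$%.2f\n", estimateRecordedCost(10000000, 0.05))
}
```

&lt;p&gt;For example, 10 million requests per month sampled at 5% yields 500,000 recorded traces; after the 100,000-trace free tier, the remaining 400,000 traces cost $2.00.&lt;/p&gt;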

&lt;p&gt;Earlier I mentioned that this service works smoothly with distributed systems and microservices in general. There is a Node.js microservices-based project available on &lt;a href="https://github.com/cloudacademy/aws-xray-microservices-calc" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, provided by &lt;a href="https://cloudacademy.com/" rel="noopener noreferrer"&gt;CloudAcademy&lt;/a&gt;. You can clone it and get hands-on with X-Ray in a real microservices-based application. &lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Let's recap what we have learned in this article. We learned what AWS X-Ray is and how we can use it with our applications, irrespective of their architecture. AWS X-Ray is a service that helps you monitor and optimize your applications: it tracks requests, identifies issues, and provides insights into complex service relationships. You can trace message pathways and call stacks at scale, ensuring everything is running smoothly. &lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;p&gt;Here are the resources to get you started with AWS X-Ray.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/31676d37-bbe9-4992-9cd1-ceae13c5116c/en-US" rel="noopener noreferrer"&gt;AWS X-Ray hands-on workshop&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/xray-nodejs.html" rel="noopener noreferrer"&gt;AWS X-Ray with Nodejs&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/xray-go.html" rel="noopener noreferrer"&gt;AWS X-Ray with Go&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/xray-troubleshooting.html" rel="noopener noreferrer"&gt;AWS X-Ray troubleshooting&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/xray/pricing/" rel="noopener noreferrer"&gt;AWS X-Ray Costing&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you like this article, please like and share it with your cloud friends, and follow me on &lt;a href="https://www.linkedin.com/in/mquanit/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://github.com/Mohammad-Quanit" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, and &lt;a href="https://twitter.com/mquanit" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Peace ✌🏻&lt;/p&gt;

</description>
      <category>monitoring</category>
      <category>javascript</category>
      <category>node</category>
      <category>aws</category>
    </item>
    <item>
      <title>Navigate Your Containerized Apps to Success with AWS Copilot</title>
      <dc:creator>Mohammad Quanit</dc:creator>
      <pubDate>Mon, 05 Feb 2024 14:45:13 +0000</pubDate>
      <link>https://dev.to/aws-builders/navigate-your-containerized-apps-to-success-with-aws-copilot-3ml2</link>
      <guid>https://dev.to/aws-builders/navigate-your-containerized-apps-to-success-with-aws-copilot-3ml2</guid>
      <description>&lt;p&gt;Hello Engineers, In this article I am going to explore a CLI tool that will be a game changer for containerized applications and make it easy to deploy on Amazon Web Services. Recently I talked about AWS Copilot at the AWS Hungary Conference and shared my insights on this technology developed by the AWS Containers team.&lt;/p&gt;



&lt;p&gt;This article is hands-on, so you can practice in your own environment as you read along.&lt;/p&gt;



&lt;h2&gt;
  
  
  What is AWS Copilot
&lt;/h2&gt;

&lt;p&gt;AWS Copilot is a Command Line Interface (CLI) tool that simplifies the process of deploying and operating containerized applications on AWS. It automates the creation of necessary infrastructure for running applications, such as load balancers, Amazon Elastic Container Registry, Amazon Elastic Container Service, and IAM roles. Copilot also manages your applications' deployment and operational tasks, allowing you to focus on writing code. With AWS Copilot, you can easily create, deploy, and manage your containerized applications on AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ufny4506npbgjv3skup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ufny4506npbgjv3skup.png" alt="AWS Copilot Workflow" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Copilot handles infrastructure provisioning and management for you, so you don't manage EC2 instances yourself. You define your application using simple &lt;strong&gt;YAML manifest files&lt;/strong&gt;, and Copilot automatically creates services, load balancers, pipelines, and more.&lt;/p&gt;



&lt;h2&gt;
  
  
  Why AWS Copilot
&lt;/h2&gt;

&lt;p&gt;Before &lt;em&gt;AWS Copilot&lt;/em&gt;, engineers deployed containerized applications directly with AWS Elastic Container Service (ECS), a highly scalable and efficient container management service provided by Amazon Web Services (AWS). ECS allows you to run, manage, and scale containerized applications on a cluster of virtual machines within the AWS ecosystem. You have full control over the infrastructure, but that means managing EC2 instances and registering them with the ECS cluster, defining task definitions that describe your containerized applications, and often creating services, load balancers, and other resources manually. &lt;em&gt;Copilot&lt;/em&gt; makes it super easy to set up and deploy your containers on AWS - but getting started is only the first step of the journey. &lt;/p&gt;

&lt;p&gt;Now that you understand &lt;em&gt;AWS Copilot&lt;/em&gt; and its benefits, let's get our hands dirty with some hands-on practice.&lt;/p&gt;



&lt;h2&gt;
  
  
  Install AWS Copilot
&lt;/h2&gt;

&lt;p&gt;To start playing with AWS Copilot, you first need to install it on your local machine. It ships as a standalone binary that you can install with a single command.&lt;/p&gt;

&lt;p&gt;Below are commands to install it on different platforms&lt;/p&gt;

&lt;p&gt;&lt;u&gt;Linux:&lt;/u&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -Lo copilot https://github.com/aws/copilot-cli/releases/latest/download/copilot-linux

chmod +x copilot &amp;amp;&amp;amp; sudo mv copilot /usr/local/bin/copilot &amp;amp;&amp;amp; copilot --help
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;u&gt;MacOS:&lt;/u&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -Lo copilot https://github.com/aws/copilot-cli/releases/latest/download/copilot-darwin

chmod +x copilot &amp;amp;&amp;amp; sudo mv copilot /usr/local/bin/copilot &amp;amp;&amp;amp; copilot --help
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or you can use the &lt;u&gt;brew package manager&lt;/u&gt; to install:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;brew install aws/tap/copilot-cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Initialize AWS Copilot for the Project
&lt;/h2&gt;

&lt;p&gt;You can deploy any project on ECS with Copilot. If you don't have a project handy, here's a &lt;a href="https://github.com/aws-samples/aws-copilot-sample-service" rel="noopener noreferrer"&gt;repo&lt;/a&gt; you can use to practice everything we are about to learn. After installing AWS Copilot (and cloning the sample project if needed), go to the project you want to deploy and, from within your code directory, run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;copilot init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, Copilot will ask a few questions; it uses your answers to choose the best AWS infrastructure for your service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97za3vxjik7nvt06u9xl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97za3vxjik7nvt06u9xl.png" alt="Copilot Questions" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once Copilot finishes setting up the infrastructure to manage your app, you'll be asked if you want to deploy your service to a test environment; type &lt;strong&gt;yes&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43ohlzxpec0g3vjo6hcf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43ohlzxpec0g3vjo6hcf.png" alt="Copilot Deploy Process" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can wait a few minutes ⏳ while Copilot sets up all the resources needed to run your service. Once the infrastructure is in place, Copilot will build your image, push it to Amazon ECR, and deploy it to Amazon ECS on AWS Fargate.&lt;/p&gt;

&lt;p&gt;After the deployment completes, your service will be up and running, and Copilot will print its URL 🎉!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpg3oxh200zje4zbn7odz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpg3oxh200zje4zbn7odz.png" alt="Copilot Public Link" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copilot will handle the deployment process, creating AWS resources, and configuring the necessary infrastructure. Under the hood, Copilot bootstraps these AWS resources when setting up your application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ECS Cluster&lt;/li&gt;
&lt;li&gt;ECS Service&lt;/li&gt;
&lt;li&gt;Fargate Tasks&lt;/li&gt;
&lt;li&gt;ECR Repo&lt;/li&gt;
&lt;li&gt;VPC&lt;/li&gt;
&lt;li&gt;Subnets&lt;/li&gt;
&lt;li&gt;Security Groups&lt;/li&gt;
&lt;li&gt;Load Balancer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After we deploy our application with Copilot, it creates an ECS cluster that you can view and manage from the AWS Console. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyp6jqhpc7yi09pg5k3gu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyp6jqhpc7yi09pg5k3gu.png" alt="ECS Cluster in Console" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Copilot also creates a repository on ECR containing the image built from the project's Dockerfile. &lt;/p&gt;

&lt;p&gt;When working with AWS Copilot, there are some concepts that we should be aware of. Let's look into this one by one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manifest Files
&lt;/h2&gt;

&lt;p&gt;After setting up and deploying your Copilot application, you will see a new folder in your project named &lt;strong&gt;copilot&lt;/strong&gt;, containing manifest files for the application's services and environments. Each &lt;strong&gt;manifest.yml&lt;/strong&gt; file contains metadata that describes the architecture of a service, a job, or an environment as infrastructure-as-code. It is generated by &lt;code&gt;copilot init&lt;/code&gt;, &lt;code&gt;copilot svc init&lt;/code&gt;, &lt;code&gt;copilot job init&lt;/code&gt;, or &lt;code&gt;copilot env init&lt;/code&gt;, and gets converted to an AWS CloudFormation template. Manifest files are always stored under copilot/&amp;lt;name&amp;gt;/manifest.yml.&lt;/p&gt;

&lt;h2&gt;
  
  
  Application
&lt;/h2&gt;

&lt;p&gt;An &lt;em&gt;application&lt;/em&gt; refers to a top-level group composed of services, environments, and pipelines that are related to one another. It enables users to organize their services into an application, irrespective of whether it is composed of a single service that performs all functions or a constellation of microservices. The tool categorizes the services and environments into which they can be deployed, thereby creating an efficient and structured approach to service management.&lt;/p&gt;



&lt;p&gt;You can see your deployed app details using the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;copilot app show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will see all your app details on the terminal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58am37nf6e3ioqd41s9l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58am37nf6e3ioqd41s9l.png" alt="Copilot App Detials" width="697" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below are additional Copilot App Commands that are self-explanatory:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5vh8fcssbjkhnboppce.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5vh8fcssbjkhnboppce.png" alt="Copilot App commands" width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Environments
&lt;/h2&gt;

&lt;p&gt;When you work with applications, you might need to create different versions of a service for different environments. For example, you might want a development environment (dev) and a production environment (prod). To make this easier, the &lt;em&gt;copilot init&lt;/em&gt; command creates a test environment containing all the AWS resources needed to provision a secure network (VPC, subnets, security groups, etc.), as well as resources that can be shared between multiple services, such as an Application Load Balancer or an ECS Cluster. When you deploy your service into your test environment, it uses the test environment's network and resources. Your application can have multiple environments, and each one has its own networking and shared-resources infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhm56tfcv0xjbtbyfbu9j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhm56tfcv0xjbtbyfbu9j.png" alt="Copilot Env Command" width="727" height="547"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above image, you can see the environment for the application we deployed earlier named &lt;em&gt;&lt;u&gt;Stage&lt;/u&gt;&lt;/em&gt;. We can have multiple environments according to our scenarios. &lt;/p&gt;

&lt;p&gt;Let's create another environment for our app and name it prod.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;copilot env init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running this command, you'll see logs similar to the ones below, and you can verify your environment by checking the file &lt;code&gt;copilot/environments/prod/manifest.yml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fin5m3e3xzcd3bh5spr2b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fin5m3e3xzcd3bh5spr2b.png" alt="Copilot prod env" width="800" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Service
&lt;/h2&gt;

&lt;p&gt;A service is your actual application code plus the infrastructure needed to get it up and running on AWS. Remember when Copilot asked us, after running &lt;code&gt;copilot init&lt;/code&gt;, for our service name and type, and we selected &lt;code&gt;Load Balanced Web Service&lt;/code&gt;? AWS has pre-defined service types to cater to almost every use case. Since we wanted to deploy our application behind an Application Load Balancer, we selected that service type.&lt;/p&gt;
&lt;h3&gt;
  
  
  Public Facing Internet Service
&lt;/h3&gt;

&lt;p&gt;If you want your service to serve public internet traffic, you have three options:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Request-Driven Web Service - will provision an AWS App Runner Service to run your service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Static Site -  will provision a dedicated CloudFront distribution and S3 bucket for your static website.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Load Balanced Web Service - will provision an Application Load Balancer, a Network Load Balancer, or both, along with security groups, and an ECS service on Fargate to run your service.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Backend Service
&lt;/h3&gt;

&lt;p&gt;If you require a service that is only accessible from within your application and cannot be accessed externally, you can create a &lt;em&gt;Backend Service&lt;/em&gt;. Copilot will set up an ECS Service that runs on AWS Fargate, but it won't configure any endpoints that are accessible from the internet.&lt;/p&gt;
&lt;h3&gt;
  
  
  Worker Service
&lt;/h3&gt;

&lt;p&gt;Worker Service enables your application's microservices to communicate with each other asynchronously using a pub/sub architecture. By publishing events to Amazon SNS topics, your microservices can share information with a Worker Service that consumes these events. This allows for a more efficient and scalable application architecture, where each microservice can focus on its specific task without being burdened by the need to directly communicate with other services. With Worker Services, your application can achieve high performance and reliability while maintaining flexibility and modularity.&lt;/p&gt;

&lt;p&gt;Run this command to see which services our app is using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;copilot svc show
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F486rlp6itzmj9s4mjcnn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F486rlp6itzmj9s4mjcnn.png" alt="copilot service" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Job
&lt;/h2&gt;

&lt;p&gt;In Copilot, a &lt;em&gt;job&lt;/em&gt; is an Amazon ECS task that is triggered by an event. Copilot currently supports only Scheduled Jobs, which run at a fixed rate or on a schedule. &lt;/p&gt;

&lt;p&gt;Select the app and job type, then define the schedule or rate. Copilot automatically creates AWS resources and deploys the task definition to ECS.&lt;/p&gt;

&lt;p&gt;Command to create a job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;copilot job init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or to list existing jobs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;copilot job ls
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we haven't created a job for our application or environment yet, you will see an empty list.&lt;/p&gt;

&lt;p&gt;Copilot jobs offer logs that display your job's most recent activity. You can follow logs in real time using the &lt;code&gt;--follow&lt;/code&gt; flag, which shows logs from any new executions of your job after you run the command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;copilot job logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Sample Logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;copilot/myjob/37236ed Doing some work
copilot/myjob/37236ed Did some work
copilot/myjob/37236ed Exited...
copilot/myjob/123e300 Doing some work
copilot/myjob/123e300 Did some work
copilot/myjob/123e300 Did some additional work
copilot/myjob/123e300 Exited
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, after trying all of the above, I recommend deleting your AWS resources so that you don't get a surprise bill at the end of the month.&lt;/p&gt;

&lt;p&gt;For that just run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;copilot app delete
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It will remove all the resources that have been created by Copilot along with all the services, stacks, environments, applications, etc.&lt;/p&gt;

&lt;p&gt;If you like this article, do like, comment, and share it, and let me know if you have created something with AWS Copilot. And don't forget to follow me on &lt;a href="https://www.linkedin.com/in/mquanit/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://github.com/Mohammad-Quanit" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, and &lt;a href="https://twitter.com/mquanit" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Peace ✌🏻&lt;/p&gt;

</description>
      <category>containers</category>
      <category>aws</category>
      <category>tutorial</category>
      <category>awscopilot</category>
    </item>
    <item>
      <title>Testing Approaches in Microservices Using Go</title>
      <dc:creator>Mohammad Quanit</dc:creator>
      <pubDate>Sat, 20 Jan 2024 07:48:10 +0000</pubDate>
      <link>https://dev.to/mquanit/testing-approaches-in-microservices-using-go-48g</link>
      <guid>https://dev.to/mquanit/testing-approaches-in-microservices-using-go-48g</guid>
      <description>&lt;p&gt;Every application needs quality testing strategies to function properly and ensure resiliency and reliability for its infrastructure. Microservices are typically small independent services, as a result, they need to be tested independently and specifically for the feature they implement.&lt;/p&gt;

&lt;p&gt;To ensure that microservices written in Go are reliable and functional, a thorough testing strategy must be developed, including unit, integration, and end-to-end testing. It's also important to consider different aspects of microservices architecture.&lt;/p&gt;

&lt;p&gt;By implementing testing strategies and best practices, we can improve the testing process and ensure that the microservices are working as well as they can. Let's discuss some testing strategies and best practices for microservices written in Go.&lt;/p&gt;



&lt;h2&gt;
  
  
  Unit testing
&lt;/h2&gt;

&lt;p&gt;Testing a single unit or small piece of code is called a unit test. Since microservices are small, independent, isolated services that are supposed to perform a single operation, engineers must write test cases for each of them. Unit tests are typically numerous and internal to the microservice; they should run automatically and use the testing framework adopted within the service.&lt;/p&gt;

&lt;p&gt;Unlike a monolith, where the whole application is combined into a single unit, microservices treat each service as its own feature or application, which makes them easier to unit test. Since each microservice covers a single business function, developers and quality engineers can achieve high accuracy in the software, along with massive cost reductions compared to reworking buggy applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Unit testing in Go
&lt;/h3&gt;

&lt;p&gt;To ensure that your Go microservices are functioning correctly, start by creating unit tests for each function and method. The Go testing framework can be used to generate test cases and assertions. If a function interacts with external services or has dependencies, it's recommended to use mocking libraries or create mock objects to isolate the unit being tested. It's important to cover edge cases, error handling, and boundary conditions in the unit tests to ensure your microservices are robust.&lt;/p&gt;

&lt;p&gt;Below are some popular libraries for testing and mocking in Go:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;gomock&lt;/code&gt; - Mocking framework for the Go programming language.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;assert&lt;/code&gt; - Basic Assertion Library used alongside native go testing, with building blocks for custom assertions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;gomega&lt;/code&gt; - Rspec like matcher/assertion library.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Testify&lt;/code&gt; - Toolkit for mocks and assertions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ginkgo&lt;/code&gt; - BDD (Behavior Driven Development) Testing Framework for Go.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Below is a simple example of unit testing in Go via the native &lt;code&gt;testing&lt;/code&gt; package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"testing"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;example&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;flag&lt;/span&gt;    &lt;span class="kt"&gt;bool&lt;/span&gt;
    &lt;span class="n"&gt;counter&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;
    &lt;span class="n"&gt;pi&lt;/span&gt;      &lt;span class="kt"&gt;float64&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;TestExampleStructCreation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;s1&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;example&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;flag&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;    &lt;span class="no"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;counter&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;pi&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;      &lt;span class="m"&gt;3.141592&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;s1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;flag&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Expected flag to be true"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;s1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;counter&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Expected counter to be 10, but got %d"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;s1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;counter&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;s1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="m"&gt;3.141592&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Expected pi to be 3.141592, but got %f"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;s1&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pi&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;h2&gt;
  
  
  Integration testing
&lt;/h2&gt;

&lt;p&gt;As a software developer, it's important to test your microservices to ensure they function properly within your workflow. While unit testing is a helpful way to check the individual functionality of each microservice, it's not enough to guarantee that all services will work together seamlessly. Integration tests fill this gap: they validate that independently developed services communicate and work correctly when connected.&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration testing in Go
&lt;/h3&gt;

&lt;p&gt;Write integration tests to ensure proper communication and interaction between microservices through their APIs or service interfaces. Use test databases for microservices that interact with databases to avoid affecting production data during integration tests.&lt;/p&gt;

&lt;p&gt;Below are some popular libraries for integration testing in Go:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;gnomock&lt;/code&gt; - integration testing with real dependencies (database, cache, even Kubernetes or AWS) running in Docker, without mocks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;go-hit&lt;/code&gt; - Hit is an HTTP integration test framework written in Golang.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;go-mysql-test-container&lt;/code&gt; - Golang MySQL test-container to help with MySQL integration testing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;



&lt;h2&gt;
  
  
  End-to-end testing
&lt;/h2&gt;

&lt;p&gt;End-to-end (E2E) testing is one of the most important strategies for testing your microservices workflow from start to finish, verifying the user's journey. These tests can be automated but are typically written only for ultra-critical flows. Unlike unit or integration tests, E2E tests are hard to run frequently because they require all microservices to be spun up, which is difficult to maintain and automate. As a result, E2E testing is reserved for the most critical interactions between specific microservices.&lt;/p&gt;

&lt;p&gt;When it comes to microservices, problems can arise at various levels, making the system complex. Even if each service has been thoroughly unit tested, if the services cannot communicate with each other the system will not meet the user's expectations. To ensure realistic testing, create dedicated test environments that closely resemble production. Additionally, implementing end-to-end tests that are automated and added to your CI/CD pipeline allows for continuous validation. The more you automate these test flows, the easier it becomes to catch and fix errors before they ship to production.&lt;/p&gt;

&lt;h3&gt;
  
  
  End-to-end testing in Go
&lt;/h3&gt;

&lt;p&gt;For thorough testing of the Go microservices stack, it is crucial to carry out end-to-end tests that cover all the interactions between the microservices. It is highly recommended to establish dedicated test environments that closely resemble the production environment to ensure accurate testing. These end-to-end tests should be automated and integrated into the CI/CD pipeline to facilitate continuous validation. By following these best practices, you can ensure the reliability and efficiency of your microservices stack.&lt;/p&gt;

&lt;p&gt;Below are some popular libraries for end-to-end testing in Go:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;httpexpect&lt;/code&gt; - Concise, declarative, and easy-to-use end-to-end HTTP and REST API testing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;baloo&lt;/code&gt; - Expressive and versatile end-to-end HTTP API testing made easy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;endly&lt;/code&gt; - Declarative end-to-end functional testing.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;



&lt;h2&gt;
  
  
  Performance testing
&lt;/h2&gt;

&lt;p&gt;Performance testing is a non-functional testing strategy that measures the speed, responsiveness, and stability of a system under a particular workload. In general, it focuses on infrastructure resources such as CPU, GPU, memory, and network. Performance testing is crucial for finding bottlenecks and failure points when your services are placed under heavy load. Most performance testing is carried out through load/stress testing along with benchmarking.&lt;/p&gt;

&lt;p&gt;Load or stress testing is a type of performance testing used to determine the system's behavior under normal and peak conditions. The goal is to make sure the application works smoothly under heavy load when many users access it at the same time. Because microservices call each other, load-testing a single service can exercise much of the stack. While this can be challenging to automate carefully, it is ultimately worth the effort, particularly for resource-intensive or problematic services.&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance testing in Go
&lt;/h3&gt;

&lt;p&gt;In Go microservices, it is crucial to perform load testing to evaluate how well your services handle concurrent requests and high-traffic loads. In many programming languages, a benchmark is the standard tool for measuring the execution time of code. Use Go's built-in benchmarking support from the &lt;code&gt;testing&lt;/code&gt; package to measure the performance of critical parts of your code and identify bottlenecks. In an &lt;code&gt;xx_test.go&lt;/code&gt; file, you just need to add a function whose name starts with Benchmark, like &lt;code&gt;func BenchmarkXX(b *testing.B)&lt;/code&gt;. Running &lt;code&gt;go test -bench=.&lt;/code&gt; will execute the benchmark functions.&lt;/p&gt;
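&lt;p&gt;For illustration, here is a tiny benchmark around a made-up &lt;code&gt;joinIDs&lt;/code&gt; function. Normally the loop would live in an &lt;code&gt;xx_test.go&lt;/code&gt; file as &lt;code&gt;func BenchmarkJoinIDs(b *testing.B)&lt;/code&gt;; &lt;code&gt;testing.Benchmark&lt;/code&gt; is used here only so the sketch can run standalone:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"strings"
	"testing"
)

// joinIDs is a deliberately simple, made-up function to measure.
func joinIDs(ids []string) string {
	return strings.Join(ids, ",")
}

func main() {
	// In a real project this would be written as
	//   func BenchmarkJoinIDs(b *testing.B) { ... }
	// in an xx_test.go file and run with `go test -bench=.`;
	// testing.Benchmark runs the same loop outside the test runner.
	res := testing.Benchmark(func(b *testing.B) {
		ids := []string{"cart", "catalog", "search", "payment"}
		for i := 0; i < b.N; i++ {
			joinIDs(ids)
		}
	})
	fmt.Println("iterations run:", res.N)
}
```

&lt;p&gt;The benchmark runner keeps increasing &lt;code&gt;b.N&lt;/code&gt; until the timing is statistically stable, so the body must be cheap and side-effect free.&lt;/p&gt;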

&lt;p&gt;Below are some popular tools for performance testing in Go:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;mockingjay&lt;/code&gt; - Fake server and consumer-driven contracts that also help with performance testing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Apache JMeter&lt;/code&gt; - Tests performance of both static and dynamic resources and dynamic web applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;K6.io&lt;/code&gt; - A modern load-testing tool built with Go and scriptable in JavaScript.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;



&lt;h2&gt;
  
  
  Security testing
&lt;/h2&gt;

&lt;p&gt;Security is a crucial aspect of your infrastructure, yet engineers often do not give it the attention it deserves. The first step is to establish the scope and boundaries when considering security testing within microservices. Microservice architecture has many benefits, such as scalability, resiliency, and flexibility, but it also poses security challenges because each microservice may have its own vulnerabilities, dependencies, and communication protocols. Developers and security engineers need to assess the risks and threats each microservice might contain. This requires security to be considered at every layer of your microservice application, starting with the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Threat Modeling&lt;/strong&gt; is a process that defines the flow of the system or microservices and helps to identify all the points of attack that hackers could exploit. Engineers need to use tools like OWASP Threat Dragon, Microsoft Threat Modeling Tool, or NIST Cybersecurity Framework to conduct a systematic and structured risk assessment. &lt;strong&gt;OWASP's Top 10 security&lt;/strong&gt; principles are a great way to assess vulnerabilities and risks within your microservices.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security testing in Go
&lt;/h3&gt;

&lt;p&gt;Microservices developed in Golang are no exception: they need security tooling too. Use static analysis tools like gosec to scan your codebase for security vulnerabilities; it inspects source code for security problems by scanning the Go AST. Security engineers also need to conduct penetration testing to identify vulnerabilities in your microservices, APIs, and endpoints. Tools such as Burp Suite, OWASP ZAP, and Nmap can perform penetration testing, fuzzing, and injection attacks against a web application and its microservices.&lt;/p&gt;

&lt;p&gt;Below are some popular tools for security and testing in Go:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;gosec&lt;/code&gt; - Security Checker that inspects source code for security problems by scanning the Go AST.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;secret&lt;/code&gt; - Prevent your secrets from leaking into logs, std*, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;secureio&lt;/code&gt; - A key-exchanging, authenticating, and encrypting wrapper and multiplexer for io.ReadWriteCloser based on XChaCha20-Poly1305, ECDH, and Ed25519.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Coraza&lt;/code&gt; - Enterprise-ready, ModSecurity- and OWASP CRS-compatible WAF library.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you liked this article, please share your feedback in the comments. And don't forget to follow me on &lt;a href="https://www.linkedin.com/in/mquanit/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, &lt;a href="https://github.com/Mohammad-Quanit" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, and &lt;a href="https://twitter.com/mquanit" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Peace ✌🏻&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>go</category>
      <category>testing</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Deployment approaches in Microservices.</title>
      <dc:creator>Mohammad Quanit</dc:creator>
      <pubDate>Tue, 16 Jan 2024 08:55:23 +0000</pubDate>
      <link>https://dev.to/mquanit/deployment-approaches-in-microservices-37pb</link>
      <guid>https://dev.to/mquanit/deployment-approaches-in-microservices-37pb</guid>
<description>&lt;p&gt;Deploying a monolith usually means running one or more servers of a single, typically large application. Deploying a monolith might not always be a straightforward process, but it is much simpler than deploying microservices.&lt;/p&gt;

&lt;p&gt;Microservice applications can consist of tens or hundreds of interconnected services written in a variety of different programming languages and frameworks. Each microservice is a mini-application with its own resources, scaling, and deployment and you need to run several instances of a single microservice to scale.&lt;/p&gt;

&lt;p&gt;For example, let's say you have an e-commerce application consisting of microservices such as Catalog, Cart, Search, and Payment. You need to deploy each of these services separately, and each service may need to run on more than one instance to achieve scalability for that specific service.&lt;/p&gt;

&lt;p&gt;Deployment of microservices written in Golang requires careful planning and consideration of various deployment strategies. These strategies help ensure that your microservices are reliable, scalable, and can be efficiently managed in a production environment. Here are some deployment strategies and practices for microservices with Golang:&lt;/p&gt;

&lt;h2&gt;
  
  
  Containerization
&lt;/h2&gt;

&lt;p&gt;Containerization is a technique where you build, test, and deploy your application in an isolated manner without interfering with other services. Tools like Docker, LXD, and Podman are used for containerizing &amp;amp; deploying microservices. Each microservice is packaged as a lightweight container along with its dependencies, making it consistent and portable across different environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker&lt;/strong&gt; is one of the most popular and widely used container engines, preferred by almost all organizations. It is highly configurable and developer-friendly, which makes it an automatic choice for building containers. We will cover Docker, and containerization in general, in detail in the next module.&lt;br&gt;
Below is an example Dockerfile for Golang:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="c"&gt;# syntax=docker/dockerfile:1&lt;/span&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; golang:1.21&lt;/span&gt;

&lt;span class="c"&gt;# Set destination for COPY&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="c"&gt;# Download Go modules&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; go.mod go.sum ./&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;go mod download

&lt;span class="c"&gt;# Copy the source code. Note the slash at the end, as explained in&lt;/span&gt;
&lt;span class="c"&gt;# https://docs.docker.com/engine/reference/builder/#copy&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; *.go ./&lt;/span&gt;

&lt;span class="c"&gt;# Build&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="nv"&gt;CGO_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0 &lt;span class="nv"&gt;GOOS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;linux go build &lt;span class="nt"&gt;-o&lt;/span&gt; /web-server

&lt;span class="c"&gt;# Optional:&lt;/span&gt;
&lt;span class="c"&gt;# To bind to a TCP port, runtime parameters must be supplied to the docker command.&lt;/span&gt;
&lt;span class="c"&gt;# But we can document in the Dockerfile what ports&lt;/span&gt;
&lt;span class="c"&gt;# the application is going to listen on by default.&lt;/span&gt;
&lt;span class="c"&gt;# https://docs.docker.com/engine/reference/builder/#expose&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8080&lt;/span&gt;

&lt;span class="c"&gt;# Run&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["/web-server"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Orchestration
&lt;/h2&gt;

&lt;p&gt;When containerizing with Docker or any other tool, there are many microservices to containerize and deploy. The question remains where to deploy them and how to manage tens or hundreds of containers as you scale up. Orchestration is the technique that manages however many containers you have in an automated way. Platforms like Kubernetes &amp;amp; Docker Swarm are widely used for managing containerized microservices.&lt;/p&gt;

&lt;p&gt;Kubernetes (K8s for short) is the most popular container orchestration tool; it was created by Google and is preferred by most organizations. Docker Swarm, introduced by Docker Inc. itself, is also used for managing containers. It lacks some of the features Kubernetes has, but both tools provide capabilities such as scaling, load balancing, service discovery, and rolling updates.&lt;/p&gt;

&lt;h2&gt;
  
  
  Blue-Green Deployment
&lt;/h2&gt;

&lt;p&gt;The Blue-Green deployment technique is a pattern that applies to both monolith and microservices architectures and involves running two identical environments for your infrastructure. The primary goal is to minimize downtime when switching from the "blue" (current) environment to the "green" (new) one. When a new version of a microservice is ready, traffic is switched from blue to green, allowing for easy rollbacks if issues arise. The two environments are kept separate but as similar as possible; they can be made up of different hardware or virtual machines, on the same or different physical machines.&lt;/p&gt;

&lt;p&gt;Blue-Green deployment improves availability by keeping microservices reachable during development and deployment. There is no downtime because an identical version of your microservices runs alongside the stable one that serves incoming traffic; if the stable version crashes or becomes unstable, the other environment handles the traffic. Another benefit is that if the new version isn't working correctly, you can quickly roll back to the previous one (the blue environment). Microservices are continuously monitored so that if any issue arises, traffic can be reverted to the blue state. A related technique is Red-Black deployment, a newer term used by Netflix, Istio, and other frameworks/platforms that support container orchestration. It is subtly but powerfully different from Blue-Green deployment.&lt;/p&gt;

&lt;p&gt;The only difference between them is that in Blue-Green deployments both versions can receive incoming requests at the same time, while in Red-Black deployments only one version receives traffic at any point in time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Canary Deployment
&lt;/h2&gt;

&lt;p&gt;Canary deployment is one of the most popular strategies for rolling out application infrastructure. Like Blue-Green deployment, this technique can be used with both monolith and microservice architectures. It is better to transition slowly from blue to green than to do it all at once.&lt;br&gt;
In a canary deployment, engineers release new features or changes gradually, in stages, showing the change to a specific subset of users first. The new version of a service is released to a small percentage of the traffic to see whether it works as expected. A canary deployment releases only a single microservice at a time, and microservices with higher criticality and risk can be made available before others.&lt;br&gt;
Canary deployment lets engineers test a microservice thoroughly with real users before full launch. It compares different service versions, reduces downtime, and improves availability. Detecting issues early prevents critical microservices from being compromised and keeps the entire system safe.&lt;/p&gt;
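&lt;p&gt;The traffic-splitting idea behind a canary can be sketched in a few lines of Go. The version labels and percentage are illustrative; real systems do this at the load balancer or service mesh layer:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"math/rand"
)

// chooseBackend routes roughly canaryPercent of requests to the
// canary version; the version labels are made up for illustration.
func chooseBackend(canaryPercent int) string {
	if rand.Intn(100) < canaryPercent {
		return "v2-canary"
	}
	return "v1-stable"
}

func main() {
	// Simulate 1000 requests with ~10% going to the canary.
	counts := map[string]int{}
	for i := 0; i < 1000; i++ {
		counts[chooseBackend(10)]++
	}
	fmt.Println(counts)
}
```

&lt;p&gt;If the canary's error rate stays healthy, the percentage is raised step by step until it receives all traffic; otherwise it is dropped back to zero.&lt;/p&gt;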

&lt;h2&gt;
  
  
  Rolling Deployment
&lt;/h2&gt;

&lt;p&gt;Rolling deployment updates microservices one at a time while keeping the others running. In this strategy, the new version of an application incrementally replaces the old version, with the deployment happening over a period of time. Implementing rolling deployments can significantly increase availability and reduce the risk of service disruptions. With this approach, multiple environments are up almost all the time, ensuring users can access the application without interruption. As the new version takes over completely, the old version is released, allowing for a seamless transition with minimal downtime.&lt;/p&gt;

&lt;p&gt;Rolling deployments are incremental, which reduces downtime and improves reliability by limiting the risk of widespread failures. SREs and DevOps engineers gradually update the servers and continuously monitor them, so if any issue arises it can be detected early and resolved before the whole system is affected.&lt;br&gt;
Rolling deployments also simplify fixing issues that occur during deployment. Because the system is updated incrementally, only the updated servers need to be rolled back if there are problems, instead of the entire system. This gives developers and administrators more control and flexibility to preserve the system's integrity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless Deployment
&lt;/h2&gt;

&lt;p&gt;Serverless, despite its name, means application resources are deployed and hosted on servers that engineers never have to manage. Cloud platforms like AWS Lambda, Google Cloud Functions, and Azure Functions provide all the resources you need to run a microservice on a pay-as-you-go model. Serverless microservices are built from cloud functions that perform highly specific roles within an application. These functions automatically scale based on demand, and you pay only for what you use.&lt;/p&gt;

&lt;p&gt;Serverless microservices contain serverless functions: small blocks of code that run in response to incoming requests to that microservice. We discussed that microservices are small independent services that can be scaled and managed independently of one another, so how does serverless fit in?&lt;/p&gt;

&lt;p&gt;Just as we can run microservices separately from each other on a container platform like Docker, we can write a function for each microservice that runs on a cloud vendor without any operational overhead. A single microservice can have multiple functions deployed at the same time. Cloud providers handle the infrastructure, allowing developers to focus on coding, which enables a more efficient workflow and a streamlined development process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automation &amp;amp; Security Considerations
&lt;/h2&gt;

&lt;p&gt;Automation processes have become a crucial aspect of software delivery especially when building cloud-native applications. Automation in deployment is the process of automating the workflow from developing to testing &amp;amp; the deployment of every single microservice. This process is reliable and effective across SDLC (Software Development Lifecycle). The goal of making the deployment process automated is to eliminate the challenges of manual deployments and enhance the quality and pace of releasing microservices.&lt;/p&gt;

&lt;p&gt;DevOps engineers usually manage all sorts of deployments in the infrastructure and are responsible for fixing any issues that come up. Many tools help them set up automated CI/CD (Continuous Integration/Continuous Deployment) pipelines; tools like Jenkins, Travis CI, or GitLab CI/CD can automate testing, building, and deploying microservices. Deployment automation also gives engineers quick feedback: it is less error-prone and enables releasing at a higher frequency.&lt;/p&gt;

&lt;p&gt;When designing and working with microservices, security must be considered at every level of the infrastructure. As discussed in the security testing module, it is important to implement security best practices at every stage of deployment, including secure communication, access control, and vulnerability scanning. For secure communication, always use HTTPS between services.&lt;/p&gt;

&lt;p&gt;As the engineer responsible for deployment, you should know about phishing and credential stuffing, but it's also important to watch for attacks that come from within your network. Using HTTPS throughout your microservices architecture helps keep the network safe. When using third-party dependencies, create automated workflows that scan your codebase and detect issues in those dependencies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://snyk.io/" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt; is one of the most popular tools to work with security stuff and helps you to find vulnerabilities in your not just codebase but infrastructure.&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>deploy</category>
      <category>go</category>
      <category>devops</category>
    </item>
    <item>
      <title>Career Switch from Software Engineering: (discussion)</title>
      <dc:creator>Mohammad Quanit</dc:creator>
      <pubDate>Fri, 24 Nov 2023 11:14:34 +0000</pubDate>
      <link>https://dev.to/mquanit/career-switch-from-software-engineering-discussion-31oi</link>
      <guid>https://dev.to/mquanit/career-switch-from-software-engineering-discussion-31oi</guid>
      <description>&lt;p&gt;Hello everyone, suppose someone has given many interviews for a software engineering job but has not been able to secure one. In such a situation, should they consider switching to other career options? If you have faced this situation, I would appreciate hearing what you did or what you think someone should do in such a scenario.&lt;/p&gt;

</description>
      <category>career</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Crafting Microservices in Go: A Theoretical Guide</title>
      <dc:creator>Mohammad Quanit</dc:creator>
      <pubDate>Thu, 09 Nov 2023 10:42:36 +0000</pubDate>
      <link>https://dev.to/mquanit/crafting-microservices-in-go-a-theoretical-guide-1161</link>
      <guid>https://dev.to/mquanit/crafting-microservices-in-go-a-theoretical-guide-1161</guid>
      <description>&lt;p&gt;In this blog, we are covering the following topics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Establish a system-design team&lt;/li&gt;
&lt;li&gt;Define your architecture&lt;/li&gt;
&lt;li&gt;Using an API gateway&lt;/li&gt;
&lt;li&gt;Service discovery&lt;/li&gt;
&lt;li&gt;Inter-Service communication&lt;/li&gt;
&lt;li&gt;Implement data storage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;One of the most important aspects of building microservices is designing the architecture. This involves deciding on the granularity of the services, defining their APIs, and implementing the services themselves. We will explore each of these steps in detail and provide examples of how they can be accomplished using Go.&lt;/p&gt;

&lt;h2&gt;
  
  
  Establish a system-design team
&lt;/h2&gt;

&lt;p&gt;Creating a microservices system can involve many people and components, which can make it complicated. The final software is the result of everyone's input and decisions. However, making everything work together smoothly can be difficult. That's why it's crucial to have a team responsible for guiding the system's direction and behavior. This team is referred to as the system design team in this model. In this model, the system design team has three core responsibilities:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Design Team Structure&lt;/li&gt;
&lt;li&gt;Establish Standards &amp;amp; Incentives&lt;/li&gt;
&lt;li&gt;Continually Improve the System&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Define your architecture
&lt;/h2&gt;

&lt;p&gt;To design and build microservices, engineers or architects need to determine the boundaries and responsibilities of each microservice. Identify what communication protocols &amp;amp; APIs are going to be used within microservices or what data storage either open-source or commercial database solutions for each service. In the previous section, we learned that the Golang community has developed numerous frameworks and tools for microservices.&lt;/p&gt;

&lt;p&gt;However, it is essential to ensure that the chosen library or framework addresses the current problem and does not create additional challenges in the future. Decide whether you want to use a web framework like Gin, Echo, or Buffalo, or whether you prefer to build your microservices from scratch. Keep in mind that frameworks provide useful tools for handling HTTP requests, routing, and middleware, but they may add some overhead.&lt;/p&gt;




&lt;h2&gt;
  
  
  Using an API gateway
&lt;/h2&gt;

&lt;p&gt;Most of the time when working with microservices, you need to decide how the application client will interact with microservices. With the monolithic application, there is just one set of endpoints that can be replicated with load balancing that distributes traffic among them. In a microservices architecture, however, each microservice exposes a set of what are typically fine-grained endpoints.&lt;/p&gt;

&lt;p&gt;An API Gateway is an implementation that acts as a single entry point for your microservices. The API Gateway handles requests from clients and passes them to your backend services while sending the responses back to the client. It's a middle-level server that sits between the client/internet and your backend microservices. The API Gateway encapsulates the internal system architecture and provides an API that is tailored to each client. API Gateway can have many responsibilities such as authentication, authorization, security, monitoring, load-balancing, rate limiting, caching, request management, and static response handling. Below is the visualization of the API gateway in microservices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmye4cjqami166b6c1eg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftmye4cjqami166b6c1eg.png" alt="API Gateway" width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Golang community has provided many libraries and packages that help engineers build API Gateways without writing them from scratch. Some are part of complete microservices frameworks, while others are standalone API Gateway services. Below are some of the API Gateway tools provided by the Go community.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Lura&lt;/strong&gt; - Ultra-performant API Gateway framework with middlewares.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;resgate&lt;/strong&gt; - Realtime API Gateway for building REST, real-time, and RPC APIs, where all clients are synchronized seamlessly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Janus&lt;/strong&gt; - A lightweight API Gateway and Management Platform that enables you to control who accesses your API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Echo middleware&lt;/strong&gt; - Echo doesn't provide an API gateway specifically but its middleware does some of the work that an API Gateway intends to do.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The API Gateway is responsible for request routing and protocol translation. It provides each of the application’s clients with a custom API. The API Gateway can also mask failures in the backend services by returning cached or default data.&lt;/p&gt;




&lt;h2&gt;
  
  
  Service discovery
&lt;/h2&gt;

&lt;p&gt;In a traditional monolith application you could probably hardwire service locations, but in a modern microservices architecture, discovering them is a non-trivial problem. The API Gateway needs to know the location (IP and port) of each microservice it communicates with, and application services are assigned locations dynamically because of autoscaling and upgrades.&lt;/p&gt;

&lt;p&gt;Service Discovery in microservices is a way of locating other services on a network. Service discovery implementations include a central server and clients that connect to it. Microservices typically run in virtualized or containerized environments, and instances can change their locations (IP and port) dynamically, so there must be a mechanism that determines which instance to invoke when a request arrives; that's where service discovery comes into play. It acts as a registry that tracks the addresses of all microservice instances and their dynamically assigned network locations. Consequently, the API Gateway, like any other service client in the system, needs to use the system's service discovery mechanism: either server-side discovery or client-side discovery.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Client-Side discovery&lt;/strong&gt; - In this discovery pattern, the client is responsible for figuring out the network locations of available instances and load-balancing requests across them. It queries the service registry (a database of available instances) and then uses a load-balancing algorithm to select one of the available service instances to call. Common client-side discovery tools include Netflix Eureka, ZooKeeper, and Consul.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Server-Side discovery&lt;/strong&gt; - In this discovery pattern, a dedicated load balancer does all the work of load balancing. Clients make requests via a router, which queries the service registry and forwards the request to an available instance. There is no need to write discovery logic separately for each language and framework that the service consumer uses. Examples of server-side discovery are NGINX and AWS ELB.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Go community provides tools for implementing service discovery when writing microservices. Here's how you can implement service discovery in Go:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Service-discovery tools&lt;/strong&gt; - To simplify finding and managing services with Go, try using a service discovery platform like Consul, etcd, or Go-Kit. These tools ensure efficient and seamless operation of your Go apps in a distributed architecture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Register services&lt;/strong&gt; - When a Go service starts, it should register itself with the service discovery tool by providing its name, IP address, port, and any other relevant metadata. This registration typically happens during the service's initialization phase.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Discover services&lt;/strong&gt; - When a Go service needs to communicate with another service, it queries the service discovery tool to obtain the location (IP address and port) of the target service and the service discovery tool returns this information to the requesting service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Handle failures and retries&lt;/strong&gt; - In your Go service, write logic to handle failures and retry mechanisms to handle cases where service discovery fails or returns errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitor &amp;amp; maintain&lt;/strong&gt; - Implement health checks to ensure the reliability of services. Continuously monitor the performance of your service discovery solution.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
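&lt;p&gt;The register/discover steps above can be sketched with a toy in-memory registry. A real deployment would use Consul or etcd as the registry; the type and method names here are purely illustrative:&lt;/p&gt;

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"sync"
)

// Registry is a toy service registry; production systems would use
// Consul, etcd, or similar instead of an in-process map.
type Registry struct {
	mu        sync.RWMutex
	instances map[string][]string // service name -> "ip:port" addresses
}

func NewRegistry() *Registry {
	return &Registry{instances: make(map[string][]string)}
}

// Register is called by a service at startup with its dynamically
// assigned network location.
func (r *Registry) Register(name, addr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.instances[name] = append(r.instances[name], addr)
}

// Discover returns one registered instance, choosing randomly as a
// simple form of client-side load balancing.
func (r *Registry) Discover(name string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	addrs := r.instances[name]
	if len(addrs) == 0 {
		return "", errors.New("no instances registered for " + name)
	}
	return addrs[rand.Intn(len(addrs))], nil
}

func main() {
	reg := NewRegistry()
	// Two instances of the same service, at hypothetical addresses.
	reg.Register("orders", "10.0.0.5:8081")
	reg.Register("orders", "10.0.0.6:8081")

	addr, err := reg.Discover("orders")
	if err != nil {
		panic(err)
	}
	fmt.Println("calling orders service at", addr)
}
```

&lt;p&gt;Picking a random instance in &lt;code&gt;Discover&lt;/code&gt; corresponds to the client-side discovery pattern described earlier; a server-side setup would put the same lookup behind a router or load balancer.&lt;/p&gt;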




&lt;h2&gt;
  
  
  Inter-Service communication
&lt;/h2&gt;

&lt;p&gt;One of the biggest challenges when working with microservices is the communication mechanism. Microservices are distributed by nature and require service-to-service communication over the network. Every microservice runs as its own instance and process, so services interact using inter-service protocols such as HTTP, gRPC, and AMQP (message brokers).&lt;/p&gt;

&lt;p&gt;Microservices have a couple of communication styles that determine the direction of interaction.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Synchronous&lt;/strong&gt; - In synchronous communication, the client sends a request and expects a response from the server, and it may even block while it waits. HTTP and gRPC are communication protocols that follow the synchronous pattern.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asynchronous&lt;/strong&gt; - In asynchronous communication, the client sends a request but doesn't block while waiting for a response, and the response itself may not be sent immediately. AMQP (Advanced Message Queuing Protocol) is a popular protocol that follows the asynchronous pattern using a publisher/subscriber model.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ox3v4jyyvosihipx8f1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ox3v4jyyvosihipx8f1.png" alt="sync-async communication" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Implement data storage
&lt;/h2&gt;

&lt;p&gt;Data storage is how your application persists data permanently. There are several implementation strategies for data storage when designing microservices, and the best one depends on a number of factors, such as the size and complexity of your application, the type of data you need to store, and your budget. Designing the database layer is one of the most challenging concerns for your microservices.&lt;/p&gt;

&lt;p&gt;Within microservices architectures, there are several options for implementing data storage patterns. To provide a better understanding, here are some examples of such patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data partitioning&lt;/strong&gt;: This pattern divides the data into multiple partitions, each of which is stored in a separate database. This can be used to improve scalability and performance, as each microservice can access only the data that it needs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Data replication&lt;/strong&gt;: This pattern replicates the data across multiple databases. This can be used to improve availability and fault tolerance, as the data will still be available even if one of the databases fails.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Event sourcing&lt;/strong&gt;: This pattern stores the history of all changes to the data. This can be used to reconstruct the data state at any point in time, and it can also be used to implement complex business logic.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
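&lt;p&gt;As a small illustration of the data-partitioning pattern, a stable hash can map each record key to one of several database shards. The shard DSNs below are hypothetical placeholders:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardFor maps a record key to one of n partitions using a stable
// FNV-1a hash, so the same key always lands on the same database shard.
func shardFor(key string, n uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return h.Sum32() % n
}

func main() {
	// Hypothetical shard connection strings, one database per partition.
	shards := []string{"postgres://db-0", "postgres://db-1", "postgres://db-2"}
	for _, user := range []string{"alice", "bob", "carol"} {
		idx := shardFor(user, uint32(len(shards)))
		fmt.Printf("user %q -> %s\n", user, shards[idx])
	}
}
```

&lt;p&gt;Because the hash is deterministic, every microservice instance computes the same shard for a given key without any coordination; rebalancing on shard count changes would need consistent hashing, which is beyond this sketch.&lt;/p&gt;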

&lt;p&gt;The Go community provides many relational &amp;amp; non-relational database drivers, cache clients, query builders, and schema migration tools. Go also integrates with cloud-native data stores such as Amazon RDS, AWS DynamoDB, and Azure Cosmos DB. Below are some community-driven data storage libraries that can be used in Go microservices.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;mongo-go-driver&lt;/strong&gt; - Official MongoDB driver for the Go language.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;redigo&lt;/strong&gt; - Redigo is a Go client for the Redis database.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;go-elasticsearch&lt;/strong&gt; - Official Elasticsearch client for Go.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pq&lt;/strong&gt; - Pure Go Postgres driver for the database/sql package.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;dynago&lt;/strong&gt; - Simplifies working with AWS DynamoDB.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you like my article, please like and share feedback and let me know in the comments. And don't forget to follow me on &lt;a href="https://www.linkedin.com/in/mquanit/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;, &lt;a href="https://github.com/Mohammad-Quanit" rel="noopener noreferrer"&gt;Github&lt;/a&gt;, &lt;a href="https://twitter.com/mquanit" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Peace ✌🏻&lt;/p&gt;

</description>
      <category>microservices</category>
      <category>go</category>
      <category>design</category>
      <category>systemdesign</category>
    </item>
    <item>
      <title>Understanding Cloud Native &amp; its Architecture</title>
      <dc:creator>Mohammad Quanit</dc:creator>
      <pubDate>Fri, 03 Nov 2023 13:33:33 +0000</pubDate>
      <link>https://dev.to/mquanit/understanding-cloud-native-its-architecture-3dd6</link>
      <guid>https://dev.to/mquanit/understanding-cloud-native-its-architecture-3dd6</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;The term Cloud-Native first appeared 10 years ago when Netflix discussed their web application architecture in an AWS re:Invent talk in 2013. At that time, the meaning of Cloud-Native was likely different than it is today. There was no clear definition of the term, and it means different things to different people and organizations.&lt;/p&gt;

&lt;p&gt;In this blog, we are going to understand, what Cloud-Native is &amp;amp; what are the promises of Cloud-Native.&lt;/p&gt;

&lt;p&gt;Cloud-Native is a software architecture pattern in which any kind of application, whether web, mobile, or desktop, is built natively on a cloud platform to be more scalable, available &amp;amp; loosely coupled. Before the cloud computing era, organizations ran their infrastructure on-premises. IT or system admins had to set up all the hardware and configure every service required to run the application. The more hardware to manage, the greater the cost.&lt;/p&gt;



&lt;blockquote&gt;
&lt;p&gt;Note: Cloud-Native is a pattern or approach to building applications using Cloud platforms and services for high availability, scalability, and modern dynamic environments, making loosely coupled systems.&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;When the cloud computing era boomed in the tech industry, some notable organizations adopted the cloud quickly. They designed their architecture entirely on the cloud, which promised increased agility to ship new features without compromising availability, making it quicker to respond to changing customer demands. Big companies like Google, Amazon, and Microsoft have their own cloud platforms that empower organizations to build applications on top of them. So, investing time in Cloud-Native is a good idea.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;When should we adopt the Cloud-Native approach?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The answer is: it depends. If the application is relatively small with a monolith architecture, a Cloud-Native approach may not be necessary; a simple deployment model may be sufficient. However, for larger and more complex applications, Cloud-Native can offer a wide range of benefits such as increased scalability, faster deployment cycles, and high availability. Ultimately, the decision to adopt a Cloud-Native strategy should be based on a careful evaluation of the application's requirements and the organization's resources.&lt;/p&gt;

&lt;p&gt;Now that you have an understanding of Cloud-Native, we are going to cover Cloud-Native architecture along with its benefits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud-Native architecture
&lt;/h2&gt;

&lt;p&gt;Cloud-native technologies help organizations build and run scalable applications in dynamic environments such as public, private, and hybrid cloud. The cloud-native approach heavily relies on utilizing Cloud platforms, Containers, Microservices, DevOps, Immutable Infrastructure, and Service mesh.&lt;/p&gt;

&lt;p&gt;In the past, it was very common to see applications poorly designed &amp;amp; utilized. Cloud-native architecture introduced much-needed wisdom on how to effectively design applications and infrastructure. The design patterns and practices refined through years of mistakes gave us best practices that focus on application availability, cost management, efficiency, and reliability. The Cloud-Native approach must enable loosely coupled systems that are reliable, resilient, and highly available, along with easier management and observability.&lt;/p&gt;

&lt;p&gt;Cloud-native technology is built on 4 pillars that provide a strong foundation for its architecture.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Microservices&lt;/li&gt;
&lt;li&gt;Container Orchestration&lt;/li&gt;
&lt;li&gt;DevOps &amp;amp; Automation&lt;/li&gt;
&lt;li&gt;CI/CD Pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Microservices
&lt;/h2&gt;

&lt;p&gt;Before Cloud-Native architecture design, traditional systems were developed as single centralized applications with tightly coupled services and components, making them difficult to deploy. Because of this tight coupling, even a minor change can potentially affect the entire application.&lt;/p&gt;

&lt;p&gt;Microservice architecture is the fundamental pillar of Cloud-Native. Microservices are small, focused &amp;amp; independent services, which makes them easier to develop and deploy. With a monolith, the application needs to be rebuilt and redeployed as a whole every time we make a single change, which makes it less flexible. Its services can't be developed with different technologies, and because it is a single deployable unit, high availability &amp;amp; scalability are hard to achieve. However, tracking bugs and issues in a monolith is quite easy and communication is fairly simple. For large enterprises and complex systems, a monolith is rarely a good option due to its limited flexibility.&lt;/p&gt;

&lt;p&gt;That's where Microservices Architecture comes into play. It is a fine-grained architecture for your system where services are broken down into smaller, task-level services. The philosophy that microservice architecture follows is to keep services small and decoupled, doing only one thing (the Unix philosophy). That's why they are called microservices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb260flkht5y204shzepf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb260flkht5y204shzepf.png" alt="Microservices Architecture" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Container orchestration
&lt;/h2&gt;

&lt;p&gt;Before containerization, apps were deployed as single instances on a single machine and could consume all of that hardware's resources. Then virtual machines were introduced to distribute the workload across instances: multiple instances of an application can run on a single computer by running multiple VMs. But VMs consume substantial resources too, as they are full-fledged operating systems running on top of an existing operating system. To solve those issues, containers were introduced. They are lightweight processes that run applications in an isolated environment, irrespective of which runtime version the host has installed.&lt;/p&gt;

&lt;p&gt;Containers enable developers to package their applications to ship, deploy, and run on any platform, and they are at the core of Cloud-Native. Many container technologies and tools exist, but the most popular right now is Docker, the most developer-friendly tool for working with containerized workloads. According to Stack Overflow's 2023 Developer Survey, Docker was the most-used tool in the developer community.&lt;/p&gt;

&lt;p&gt;Orchestration, on the other hand, is a technique used to manage multiple containers at once. Cloud-Native follows the principles of microservice architecture, and when you run a container per service, it becomes really hard &amp;amp; inefficient to manage those containers manually. So to orchestrate the containers we use tools like Kubernetes, Docker Swarm, and Apache Mesos. These tools help developers by automating the scaling, load balancing, availability, scheduling, deployment, and networking of containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Devops &amp;amp; automation
&lt;/h2&gt;

&lt;p&gt;DevOps, or Development Operations, is one of the go-to strategies to include in your cloud-native architecture. The term combines development and operations practices that allow us to adopt a Cloud-Native environment and ensure that a company constantly delivers quality software. DevOps is a paradigm-shifting approach to creating software, and it has in recent times become the de facto method for building, testing, delivering, and managing software applications.&lt;/p&gt;

&lt;p&gt;The core idea that DevOps embodies is that development and operations teams should work closely together throughout the entire software life-cycle, from initial development right through to the installation, running, and maintenance of the application. This is an important cultural shift that needs to be embraced for successful DevOps adoption.&lt;/p&gt;

&lt;p&gt;Cloud-Native emphasizes automating the workflows and by applying the principles of DevOps, organizations can achieve automation and ensure the quick delivery of the systems. DevOps practices promote collaboration between development and operations teams. Automation of tasks such as provisioning, configuration management, and monitoring accelerates the deployment process and enhances operational efficiency.&lt;/p&gt;

&lt;h2&gt;
  
  
  CI/CD pipelines
&lt;/h2&gt;

&lt;p&gt;When implementing DevOps practices in your cloud-native applications, you create automated workflows to build, test, deliver, and deploy the system for your environment. To achieve those steps, DevOps engineers create CI/CD pipelines. CI stands for Continuous Integration and CD stands for Continuous Deployment. A pipeline connects development and operations by automating building, testing, and deploying the application. This practice ensures rapid and reliable software delivery, fostering agility and innovation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Cloud-native architecture is an approach or pattern to design, develop, and deploy your systems using the above pillars we discussed. These pillars collectively enable cloud-native applications to achieve qualities like scalability, resilience, flexibility, and speed. By adhering to these principles, organizations can take full advantage of cloud infrastructure, allowing them to innovate rapidly and deliver value to customers more efficiently.&lt;/p&gt;

&lt;p&gt;If you like my article, please like and share feedback and let me know in the comments. And don't forget to follow me on &lt;a href="https://www.linkedin.com/in/mquanit/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;, &lt;a href="https://github.com/Mohammad-Quanit" rel="noopener noreferrer"&gt;Github&lt;/a&gt;, &lt;a href="https://twitter.com/mquanit" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Peace ✌🏻&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>cloudnative</category>
      <category>cloudcomputing</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Docker Container Security - Practices to Consider</title>
      <dc:creator>Mohammad Quanit</dc:creator>
      <pubDate>Thu, 13 Jul 2023 12:18:21 +0000</pubDate>
      <link>https://dev.to/docker/docker-container-security-practices-to-consider-1b6j</link>
      <guid>https://dev.to/docker/docker-container-security-practices-to-consider-1b6j</guid>
      <description>&lt;p&gt;Hello Engineers, In this article, I am going to share what container security is, some practices and standards for container security, why you should care about container security, some tools that can help us to make our containers less vulnerable.&lt;/p&gt;

&lt;p&gt;Although nothing in the IT industry is ever fully secure, engineers and security engineers are still required to make things less vulnerable, so that your app won't get hacked and bad actors can't access important information (which could be leaked from containers). And also because that's what they are paid for, right?&lt;/p&gt;

&lt;p&gt;So before moving forward, we will be looking at a bit of docker architecture as we are learning security in the docker context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt;&lt;/strong&gt;, the most popular open-source containerization tool also standard for most of the platforms of the container launched in 2013 by Docker Inc.  Engineers can easily create, deploy, and run applications in a self-contained environment called a container.  It quickly gained popularity among developers and system administrators because it simplified the process of deploying applications across different environments, such as development, testing, and production&lt;/p&gt;

&lt;p&gt;Docker has a full-fledged architecture that comprises some specific components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Docker Client: This is a command-line tool that allows users to interact with the Docker platform.&lt;/li&gt;
&lt;li&gt;Docker Daemon: This is the background process that runs on the host machine and manages the containers.&lt;/li&gt;
&lt;li&gt;Docker Images: An image is a lightweight, standalone, executable package that includes everything needed to run the software, including the code, libraries, and dependencies.&lt;/li&gt;
&lt;li&gt;Docker Registry: This is a repository that stores Docker images.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The above details are essential to understand how Docker works internally, but this article focuses on security, so I won't go into further detail about Docker's inner workings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Container Security
&lt;/h2&gt;

&lt;p&gt;If we talk about container security, whether we work on Docker, LXD, RKT, Apache Mesos or any other tools your organization uses, the principles will be the same for overall container security. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Container security&lt;/em&gt;&lt;/strong&gt; refers to the practices and technologies used to protect containerized applications. It can be any kind of application either micro-services, SPAs (Single page applications), Utilities or API etc. Those practices are used to protect containers from unauthorized data access, malicious attacks, and other security threats.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.docker.com/resources/what-container/" rel="noopener noreferrer"&gt;Containers&lt;/a&gt; are a lightweight and portable way to package and deploy applications, but they also introduce new security risks that need to be addressed as containers share the host kernel and can be vulnerable to attacks if not properly secured.&lt;/p&gt;








&lt;h2&gt;
  
  
  Container Security Considerations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;&lt;strong&gt;Host/Kernel Security&lt;/strong&gt;&lt;/u&gt;:
Containers share the host kernel, which means any vulnerability in the kernel or host can affect all the containers running on that host. It is important to keep the host system secure by regularly applying security patches, using anti-malware software, implementing other host security measures, and running container security audit tools like &lt;a href="https://github.com/docker/docker-bench-security" rel="noopener noreferrer"&gt;docker-bench-security&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;&lt;strong&gt;Access Control&lt;/strong&gt;&lt;/u&gt;:
Containers should run with the least privileges necessary to perform their tasks, and access to containers and their associated resources should be restricted. Always run containers as a &lt;strong&gt;non-root user&lt;/strong&gt;; it is best to create a new user to run and access Docker resources. Running your containers in rootless mode helps ensure that your application environment is safe.&lt;/li&gt;
&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;&lt;strong&gt;Container Monitoring&lt;/strong&gt;&lt;/u&gt;:
Monitoring container activity and log events is important for detecting potential vulnerabilities and incidents, and you should have processes in place for responding to security incidents and applying remediation measures. Use tools like Docker logging, and use Docker's health check feature to periodically check the status of containerized applications.&lt;/li&gt;
&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;&lt;strong&gt;Image Vulnerability Scanning&lt;/strong&gt;&lt;/u&gt;:
Containers are created from images, which can contain software vulnerabilities, malware, or other security risks. It is important to use trusted and verified images, scan images for vulnerabilities, and follow best practices for image security.
&lt;u&gt;Periodic scanning&lt;/u&gt; allows you to keep your images updated and audit critical directories and files. Tools like &lt;a href="https://anchore.com/blog/docker-image-security-in-5-minutes-or-less/" rel="noopener noreferrer"&gt;Anchore&lt;/a&gt;, &lt;a href="https://github.com/quay/clair" rel="noopener noreferrer"&gt;Clair&lt;/a&gt;, or &lt;a href="https://aquasecurity.github.io/trivy/v0.43/" rel="noopener noreferrer"&gt;Trivy&lt;/a&gt; can scan container images and provide vulnerability reports.&lt;/li&gt;
&lt;/ul&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;u&gt;&lt;strong&gt;Docker Security Policies&lt;/strong&gt;&lt;/u&gt;:
The Docker environment allows us to set up security policies to make our containers secure. Only use Docker images from trusted and verified sources, and create policies that restrict image pulls to approved Docker registries and repositories. Enforce strict container isolation to prevent malicious attacks; container runtime options like &lt;code&gt;--privileged=false&lt;/code&gt; and &lt;code&gt;--cap-drop&lt;/code&gt; limit container capabilities. Create policies to implement network segmentation and firewall rules to control container communication. You can use Docker's built-in network features, or tools like &lt;strong&gt;&lt;u&gt;&lt;a href="https://cilium.io/" rel="noopener noreferrer"&gt;Cilium&lt;/a&gt;&lt;/u&gt;&lt;/strong&gt; or &lt;strong&gt;&lt;u&gt;&lt;a href="https://github.com/docker/docker-bench-security" rel="noopener noreferrer"&gt;docker-bench&lt;/a&gt;&lt;/u&gt;&lt;/strong&gt;, to isolate containers and define ingress/egress rules.
 &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Container security is a critical consideration when using containerization technologies like Docker and Kubernetes. By implementing best practices for container security, organizations can reduce the risk of security incidents and protect their applications and data from unauthorized access and other security threats.&lt;/p&gt;



&lt;p&gt;If you like my article, please like and share feedback and let me know in comments. And don't forget to follow me on &lt;a href="https://www.linkedin.com/in/mquanit/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;, &lt;a href="https://github.com/Mohammad-Quanit" rel="noopener noreferrer"&gt;Github&lt;/a&gt;, &lt;a href="https://twitter.com/mquanit" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Peace ✌🏻&lt;/p&gt;

</description>
      <category>docker</category>
      <category>container</category>
      <category>security</category>
      <category>practices</category>
    </item>
    <item>
      <title>Test Driven Development &amp; Go</title>
      <dc:creator>Mohammad Quanit</dc:creator>
      <pubDate>Sun, 16 Apr 2023 11:33:11 +0000</pubDate>
      <link>https://dev.to/mquanit/test-driven-development-go-1okh</link>
      <guid>https://dev.to/mquanit/test-driven-development-go-1okh</guid>
      <description>&lt;p&gt;In this article I'll be discussing about Test Driven Development, its approaches, best practices in the context of Golang with some code examples. Here's the &lt;a href="https://github.com/Mohammad-Quanit/Go-Tdd" rel="noopener noreferrer"&gt;github repo link&lt;/a&gt; from where I am going to show you some code examples &lt;/p&gt;

&lt;p&gt;Test Driven Development (TDD) is a software development approach where you write test cases for a small piece of code before writing the actual code. The goal is to ensure that the code meets specific requirements and behaves correctly in various scenarios. By writing tests first, you can catch errors early in the development process, and ensure that your code is easy to test, maintain, and refactor. &lt;/p&gt;

&lt;p&gt;TDD is a cycle of writing a test, seeing it fail, writing code to pass the test, and then refactoring the code to improve its quality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57tlq4z4ggwwxrwgh8mm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F57tlq4z4ggwwxrwgh8mm.png" alt="test driven development tdd - lifecycle" width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Motivations for TDD
&lt;/h2&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Helps catch errors early: By writing tests before writing the actual code, TDD can help catch errors early in the development process. This reduces the time spent on debugging and ensures that your code is more reliable. &lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensures code meets requirements: TDD ensures that your code meets specific requirements and behaves correctly in various scenarios. This makes it easier to maintain and refactor your code as your project evolves. &lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improves code quality: TDD encourages developers to write modular, testable, and maintainable code. This improves the overall quality of the codebase and reduces technical debt. &lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reduces the cost of change: Since TDD ensures that your code is well-tested and modular, it reduces the cost of making changes to your codebase. This is particularly useful in large projects where changes can have significant impacts on the system. &lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enables faster development: TDD can actually help speed up development by reducing the time spent on debugging and ensuring that new code doesn't break existing functionality. &lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Facilitates collaboration: TDD provides a common language and framework for collaboration between developers and other stakeholders. This helps ensure that everyone is on the same page and that the project moves forward smoothly. &lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Boosts confidence: TDD gives developers confidence in their code by ensuring that it behaves correctly and meets specific requirements. This confidence can lead to better decisions, more creativity, and more efficient development. &lt;br&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;



&lt;h2&gt;
  
  
  Testing in Golang
&lt;/h2&gt;

&lt;p&gt;Golang provides a built-in &lt;strong&gt;testing tool&lt;/strong&gt; that automates the process of running tests. The tool, called "&lt;strong&gt;go test&lt;/strong&gt;", is easy to use and can be run from the command line. The Go testing package provides a range of functions and methods for creating and running tests. It includes functions for comparing values, reporting errors, and running tests in parallel.&lt;/p&gt;

&lt;p&gt;The testing package also provides helper functions for reporting errors, such as "Error," "Errorf," and "Fail." These functions can be used to report errors that occur during the test.&lt;/p&gt;

&lt;p&gt;Here's an example of a Test Case written in Go:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;TestSum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Sum of numbers in array"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;numbers&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="n"&gt;got&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;Sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;numbers&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;want&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="m"&gt;16&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;got&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="n"&gt;want&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"got %d, want %d, given %d"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;got&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;want&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;numbers&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
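The test above calls a `Sum` function that the post doesn't show. A minimal implementation that would satisfy it might look like this (a sketch; the original article never defines `Sum`, so the body here is an assumption):

```go
package main

import "fmt"

// Sum returns the total of all integers in the slice.
func Sum(numbers []int) int {
	total := 0
	for _, n := range numbers {
		total += n
	}
	return total
}

func main() {
	fmt.Println(Sum([]int{5, 4, 3, 2, 1})) // prints 15
}
```

In a real project `Sum` would live in the package under test and the test would sit next to it in a `_test.go` file.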



&lt;p&gt;Points to remember when writing test cases in Go:&lt;/p&gt;

&lt;p&gt;The test file name must end with &lt;code&gt;_test.go&lt;/code&gt;, e.g. &lt;code&gt;main_test.go&lt;/code&gt;, so that the Go tool can detect which files to execute when you run the &lt;code&gt;go test&lt;/code&gt; command.&lt;/p&gt;

&lt;p&gt;Each test function must start with the &lt;code&gt;Test&lt;/code&gt; prefix, e.g. &lt;code&gt;TestSum&lt;/code&gt;. That's how the go tool knows which functions in the test files to run.&lt;/p&gt;

&lt;p&gt;Go's testing tool can also generate a &lt;strong&gt;coverage report,&lt;/strong&gt; which shows how much of your code is exercised by tests. The command for getting test coverage is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;-cover&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
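Beyond the summary percentage, the coverage data can be written to a profile and rendered as an annotated HTML view, which highlights exactly which lines are untested:

```shell
# Write a coverage profile, then open an HTML report of covered lines
go test -coverprofile=coverage.out
go tool cover -html=coverage.out
```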



&lt;p&gt;Go's testing package also includes support for benchmarks, which can be used to measure the performance of your code. Benchmark functions have a specific signature and are executed multiple times to provide an accurate measurement. The command to run all benchmarks in the current package is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;go &lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;-bench&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
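A benchmark follows the fixed signature `func BenchmarkXxx(b *testing.B)` and runs the code under test `b.N` times. The sketch below uses a hypothetical slice-summing `Sum` function (not shown in the original post) and drives the benchmark programmatically via `testing.Benchmark` so it can run outside `go test`:

```go
package main

import (
	"fmt"
	"testing"
)

// Sum is a hypothetical function under benchmark.
func Sum(numbers []int) int {
	total := 0
	for _, n := range numbers {
		total += n
	}
	return total
}

func main() {
	// In a _test.go file this body would be: func BenchmarkSum(b *testing.B) { ... }
	result := testing.Benchmark(func(b *testing.B) {
		numbers := []int{5, 4, 3, 2, 1}
		for i := 0; i < b.N; i++ {
			Sum(numbers)
		}
	})
	fmt.Println(result) // prints iteration count and ns/op
}
```

`go test -bench=.` discovers and runs such functions automatically when they live in `_test.go` files.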



&lt;h2&gt;
  
  
  Table Driven Testing
&lt;/h2&gt;

&lt;p&gt;Go's testing package also supports table-driven tests, which allow you to test a function with multiple inputs and expected outputs. This can be useful for testing edge cases and ensuring that your code is robust. &lt;/p&gt;

&lt;p&gt;Here's an example of Table Driven Test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;TestSum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;cases&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
        &lt;span class="n"&gt;num1&lt;/span&gt;        &lt;span class="kt"&gt;int&lt;/span&gt;
        &lt;span class="n"&gt;num2&lt;/span&gt;        &lt;span class="kt"&gt;int&lt;/span&gt;
        &lt;span class="n"&gt;expected&lt;/span&gt;    &lt;span class="kt"&gt;int&lt;/span&gt;
    &lt;span class="p"&gt;}{&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"1 + 2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;num1&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;        &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;num2&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;        &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;expected&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;    &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"3 + 4"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;num1&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;        &lt;span class="m"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;num2&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;        &lt;span class="m"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;expected&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;    &lt;span class="m"&gt;7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"10 + 45"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;num1&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;        &lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;num2&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;        &lt;span class="m"&gt;45&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;expected&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;    &lt;span class="m"&gt;70&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tt&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="n"&gt;cases&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;testing&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;T&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;Sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;tt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;num1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;num2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="n"&gt;tt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;expected&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Errorf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"expected %d, but got %d"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tt&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;expected&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
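Note that this table-driven test calls `Sum` with two int arguments, unlike the slice-based version earlier. A minimal implementation matching this signature could be (a hypothetical sketch, since the post doesn't define it):

```go
package main

import "fmt"

// Sum adds two integers; this matches the two-argument
// signature used by the table-driven test.
func Sum(num1, num2 int) int {
	return num1 + num2
}

func main() {
	fmt.Println(Sum(10, 45)) // prints 55
}
```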



&lt;h2&gt;
  
  
  TDD Best Practices to follow
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The fundamental principle of TDD is to write tests before writing the actual code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tests should be small and focused on a single piece of functionality. This allows you to easily pinpoint errors when they occur.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use the built-in Go testing tool to simplify and speed up your testing workflow.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Table-driven tests allow you to test a function with multiple inputs and expected outputs. This can help you catch edge cases and ensure your code is robust.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When writing tests that rely on external dependencies, use mocks to simulate the behaviour of those dependencies. This allows you to test your code in isolation and avoid flaky tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use this feedback to refactor your code and make it more modular, maintainable, and testable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure your tests are updated to reflect any changes you make to your code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Code coverage tools like &lt;code&gt;go test -cover&lt;/code&gt; can help you identify areas of your code that are not adequately covered by your tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use a continuous integration (CI) tool to automate your testing workflow. This ensures that your tests are run regularly and that any errors are caught early in the development process.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
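Point 5, mocking external dependencies, falls out naturally from Go's interfaces: any type that implements the interface can stand in for the real dependency. A minimal hand-written mock might look like this (the `Notifier`/`Greeter` names are illustrative, not from the original post):

```go
package main

import "fmt"

// Notifier is a hypothetical external dependency (e.g. an email service).
type Notifier interface {
	Notify(msg string) error
}

// Greeter depends only on the interface, so tests can swap in a mock.
type Greeter struct {
	notifier Notifier
}

func (g *Greeter) Greet(name string) error {
	return g.notifier.Notify("Hello, " + name)
}

// mockNotifier records messages instead of sending anything.
type mockNotifier struct {
	sent []string
}

func (m *mockNotifier) Notify(msg string) error {
	m.sent = append(m.sent, msg)
	return nil
}

func main() {
	mock := &mockNotifier{}
	g := &Greeter{notifier: mock}
	g.Greet("Gopher")
	fmt.Println(mock.sent[0]) // prints "Hello, Gopher"
}
```

Because the mock lives entirely in memory, the test stays fast and deterministic; libraries like Testify or GoMock (listed below in the original post) generate this boilerplate for larger interfaces.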

&lt;h2&gt;
  
  
  Tools by Go Community
&lt;/h2&gt;

&lt;p&gt;Go comes with its built-in testing tool, but the Go community around the world has also created some open-source tools. Here are some of the most commonly used in Go projects:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Gomega - Matcher/Assertion lib &lt;a href="https://github.com/onsi/gomega" rel="noopener noreferrer"&gt;https://github.com/onsi/gomega&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GoCheck - Featured rich testing lib &lt;a href="https://github.com/go-check/check" rel="noopener noreferrer"&gt;https://github.com/go-check/check&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Testify - Toolkit for mocks, assertions &lt;a href="https://github.com/stretchr/testify" rel="noopener noreferrer"&gt;https://github.com/stretchr/testify&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;GoMock - A dedicated mocking framework &lt;a href="https://github.com/golang/mock" rel="noopener noreferrer"&gt;https://github.com/golang/mock&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ginkgo - A BDD testing framework for expressive specs &lt;a href="https://github.com/onsi/ginkgo" rel="noopener noreferrer"&gt;https://github.com/onsi/ginkgo&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Test Driven Development is a mental model where you write test cases before the actual production code. It requires some practice if you haven't written test cases before. Whether you're working on real-world projects or just starting out, you can begin exploring it and getting hands-on experience.&lt;/p&gt;

&lt;p&gt;If you liked my article, please share your feedback and let me know in the comments. And don't forget to follow me on &lt;a href="https://www.linkedin.com/in/mquanit/" rel="noopener noreferrer"&gt;Linkedin&lt;/a&gt;, &lt;a href="https://github.com/Mohammad-Quanit" rel="noopener noreferrer"&gt;Github&lt;/a&gt;, &lt;a href="https://twitter.com/mquanit" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Peace ✌🏻&lt;/p&gt;

</description>
      <category>tdd</category>
      <category>go</category>
      <category>testing</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
