<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yusuf Adeyemo</title>
    <description>The latest articles on DEV Community by Yusuf Adeyemo (@yusadolat).</description>
    <link>https://dev.to/yusadolat</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F57134%2F10cb706c-1be2-46e3-a14d-a59ba3af8982.png</url>
      <title>DEV Community: Yusuf Adeyemo</title>
      <link>https://dev.to/yusadolat</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yusadolat"/>
    <language>en</language>
    <item>
      <title>Understanding TOTP: What Really Happens When You Generate That 6-Digit Code</title>
      <dc:creator>Yusuf Adeyemo</dc:creator>
      <pubDate>Mon, 08 Dec 2025 16:26:40 +0000</pubDate>
      <link>https://dev.to/yusadolat/understanding-totp-what-really-happens-when-you-generate-that-6-digit-code-1ael</link>
      <guid>https://dev.to/yusadolat/understanding-totp-what-really-happens-when-you-generate-that-6-digit-code-1ael</guid>
      <description>&lt;p&gt;This article started from a tweet.&lt;/p&gt;

&lt;p&gt;Someone on Twitter said they "lowkey want to understand the technology behind Google Authenticator" and I dropped a quick reply - explaining that it's basically TOTP: your device and the server share a secret key, both compute a code using HMAC-SHA1 and the current 30-second time window. No network calls. No "previous code." Same secret + same time slice = same 6-digit code.&lt;/p&gt;

&lt;p&gt;That reply got some traction, and a few people DMed me for a deeper breakdown. So here we are.&lt;/p&gt;

&lt;p&gt;If you've ever wondered how your phone generates the exact same 6-digit code the server expects - with no internet request, no sync, nothing - this one's for you.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6c1cx4yz4rr0ozst0dv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6c1cx4yz4rr0ozst0dv.png" width="800" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem With Passwords
&lt;/h2&gt;

&lt;p&gt;Passwords are static. Once someone has yours, they have it forever - or until you change it. Even with a strong password, you're one phishing attack or database breach away from compromise.&lt;/p&gt;

&lt;p&gt;Two-factor authentication fixes this by adding something that changes. But here's the catch - if your phone needs to call a server every time to get a new code, that's a point of failure. What happens when you're offline? On a plane? In a basement with no signal?&lt;/p&gt;

&lt;p&gt;This is where TOTP comes in.&lt;/p&gt;

&lt;h2&gt;
  
  
  TOTP - Time-based One-Time Password
&lt;/h2&gt;

&lt;p&gt;TOTP is defined in RFC 6238, but don't let the RFC scare you. The core idea is dead simple:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Both your phone and the server share a secret. They both know the current time. They both do the same math. They both get the same answer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That's it. No network calls. No synchronization requests. Just two parties doing identical calculations independently.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Setup - That QR Code You Scanned
&lt;/h2&gt;

&lt;p&gt;When you enable 2FA on any service, they show you a QR code. That QR code contains a URL that looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;otpauth://totp/MyService:yusuf@yusadolat.me?secret=JBSWY3DPEHPK3PXP&amp;amp;issuer=MyService
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important part is the &lt;code&gt;secret&lt;/code&gt;. This is a base32-encoded string that both your authenticator app and the server will store. This shared secret is the foundation of everything.&lt;/p&gt;

&lt;p&gt;You scan it once. Your app saves it. The server saves it. They never exchange it again.&lt;/p&gt;
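If you're curious what's inside that URL, Python's standard library can pull it apart. This is just the example URL from above - the secret is the same demo value:

```python
from urllib.parse import parse_qs, urlparse

uri = 'otpauth://totp/MyService:yusuf@yusadolat.me?secret=JBSWY3DPEHPK3PXP&issuer=MyService'
parsed = urlparse(uri)
params = parse_qs(parsed.query)

print(parsed.path)          # /MyService:yusuf@yusadolat.me  (the account label)
print(params['secret'][0])  # JBSWY3DPEHPK3PXP
print(params['issuer'][0])  # MyService
```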

&lt;h2&gt;
  
  
  The Math - How Codes Get Generated
&lt;/h2&gt;

&lt;p&gt;Every 30 seconds, both sides perform this calculation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Get the current time window&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Take the current Unix timestamp and divide by 30. Floor it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;time_step = floor(current_unix_time / 30)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Right now, as I write this, the Unix timestamp is around 1733644800. Divided by 30, floored, gives us 57788160. This number changes every 30 seconds.&lt;/p&gt;
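In Python, that's a one-liner - integer division floors for us:

```python
import time

# Current 30-second time window
time_step = int(time.time()) // 30
print(time_step)

# Sanity check against the example timestamp above:
print(1733644800 // 30)  # 57788160
```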

&lt;p&gt;&lt;strong&gt;Step 2: Run HMAC-SHA1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Feed the time step and the shared secret into HMAC-SHA1:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;hmac_result = HMAC-SHA1(secret, time_step)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This produces a 20-byte hash. It looks like random garbage, but it's deterministic - the same inputs always give the same output.&lt;/p&gt;
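You can see both properties - 20 bytes, fully deterministic - in a few lines, using the example secret and time step from above:

```python
import base64
import hashlib
import hmac
import struct

key = base64.b32decode('JBSWY3DPEHPK3PXP')  # the example secret
msg = struct.pack('>Q', 57788160)           # the time step, as 8 big-endian bytes

digest = hmac.new(key, msg, hashlib.sha1).digest()
print(len(digest))                                          # 20
print(digest == hmac.new(key, msg, hashlib.sha1).digest())  # True - deterministic
```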

&lt;p&gt;&lt;strong&gt;Step 3: Dynamic Truncation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;20 bytes is too long for humans to type. So we extract 4 bytes from a specific position (determined by the last nibble of the hash), convert to an integer, and take modulo 1,000,000.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;offset = hmac_result[19] &amp;amp; 0x0fcode = (hmac_result[offset:offset+4] &amp;amp; 0x7fffffff) % 1000000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Boom. You have your 6-digit code.&lt;/p&gt;
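To make the truncation concrete, here's the worked example from RFC 4226 (the companion HOTP spec), which publishes a sample hash and its expected 6-digit code:

```python
# Dynamic truncation on the sample hash from RFC 4226, section 5.4
hmac_result = bytes.fromhex('1f8698690e02ca16618550ef7f19da8e945b555a')

offset = hmac_result[19] & 0x0f                     # last nibble of the hash -> 10
chunk = hmac_result[offset:offset + 4]              # 4 bytes starting at that offset
number = int.from_bytes(chunk, 'big') & 0x7fffffff  # mask the sign bit
code = number % 1_000_000                           # keep 6 digits

print(f'{code:06d}')  # 872921 - matches the RFC's expected value
```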

&lt;h2&gt;
  
  
  Why This Is Actually Clever
&lt;/h2&gt;

&lt;p&gt;Think about what just happened:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;No network needed&lt;/strong&gt; - Your phone doesn't call anyone. The server doesn't push anything. Both just compute.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Codes expire automatically&lt;/strong&gt; - Because time moves forward, old codes become useless. Even if someone shoulder-surfs your code, they have maybe 30 seconds to use it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Can't predict future codes&lt;/strong&gt; - Without the secret, you can't compute tomorrow's codes. The HMAC function is one-way.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Replay attacks fail&lt;/strong&gt; - Use a code once, the server marks that time window as used. Try it again, rejected.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  When Things Go Wrong
&lt;/h2&gt;

&lt;p&gt;The system assumes both parties agree on what time it is. This is usually fine - your phone syncs with NTP servers, and servers have accurate clocks.&lt;/p&gt;

&lt;p&gt;But I've seen people with phones set to "manual time", drifting by minutes. Their codes stop working and they have no idea why: the server is computing codes for 10:45:00 while their phone is computing them for 10:43:00. Different time windows, different codes.&lt;/p&gt;

&lt;p&gt;Most implementations allow a small tolerance - they'll accept codes from one time window before or after. But drift too far and you're locked out.&lt;/p&gt;
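Here's a rough sketch of how a server might apply that tolerance. The names `totp_at` and `verify` are hypothetical helpers, but the generation logic is the same math from earlier:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp_at(secret: str, time_step: int) -> str:
    """Compute the 6-digit code for a given time step."""
    key = base64.b32decode(secret.upper())
    digest = hmac.new(key, struct.pack('>Q', time_step), hashlib.sha1).digest()
    offset = digest[-1] & 0x0f
    number = int.from_bytes(digest[offset:offset + 4], 'big') & 0x7fffffff
    return f'{number % 1_000_000:06d}'

def verify(secret: str, submitted: str, skew: int = 1) -> bool:
    """Accept the current window, plus `skew` windows on either side."""
    now = int(time.time()) // 30
    return any(hmac.compare_digest(totp_at(secret, now + d), submitted)
               for d in range(-skew, skew + 1))
```

A real server would also remember which window each accepted code came from, so the same code can't be replayed within its lifetime.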

&lt;h2&gt;
  
  
  The Recovery Code Situation
&lt;/h2&gt;

&lt;p&gt;Those backup codes you're told to save somewhere? They're not TOTP. They're just long random strings stored in a database. Use one, it gets deleted. No time component, no algorithm - just a simple lookup.&lt;/p&gt;

&lt;p&gt;Save them. Seriously. Losing access to your authenticator without backup codes is a special kind of pain.&lt;/p&gt;
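A minimal sketch of how backup codes work server-side - note that a real implementation would store salted hashes of the codes rather than the codes themselves:

```python
import secrets

def generate_backup_codes(n: int = 10) -> set[str]:
    """Random, unguessable strings. No time component, no algorithm."""
    return {secrets.token_hex(5) for _ in range(n)}  # e.g. '9f2c41a0d7'

def redeem(codes: set[str], submitted: str) -> bool:
    """Simple lookup: use a code once and it gets deleted."""
    if submitted in codes:
        codes.discard(submitted)
        return True
    return False
```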

&lt;h2&gt;
  
  
  Show Me The Code
&lt;/h2&gt;

&lt;p&gt;Here's a minimal Python implementation to make this concrete:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import hmacimport hashlibimport structimport timeimport base64def generate_totp(secret: str) -&amp;gt; str: # Decode the base32 secret key = base64.b32decode(secret.upper()) # Get current time step (30-second window) time_step = int(time.time()) // 30 # Pack as big-endian 8-byte integer time_bytes = struct.pack('&amp;gt;Q', time_step) # Compute HMAC-SHA1 hmac_hash = hmac.new(key, time_bytes, hashlib.sha1).digest() # Dynamic truncation offset = hmac_hash[-1] &amp;amp; 0x0f code_int = struct.unpack('&amp;gt;I', hmac_hash[offset:offset+4])[0] code_int &amp;amp;= 0x7fffffff code = code_int % 1000000 return f'{code:06d}'# Test itsecret = 'JBSWY3DPEHPK3PXP' # Example secret, This is what you add setup key on Google Authprint(generate_totp(secret))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Want to see it work in real-time? Here's how to test:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open Google Authenticator (or any TOTP app)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Tap the &lt;strong&gt;+&lt;/strong&gt; button to add a new account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;"Enter a setup key"&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enter any name (e.g., "TOTP Test")&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the key, enter: &lt;code&gt;JBSWY3DPEHPK3PXP&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure it's set to &lt;strong&gt;Time-based&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save it&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now run the Python script. The 6-digit code it prints should match what's showing in your authenticator app. If you're a few seconds off, wait for the next 30-second window and try again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrapping Up
&lt;/h2&gt;

&lt;p&gt;There's no cloud magic happening when your authenticator generates codes. It's just math - the same math running independently on your device and the server, anchored to the same clock.&lt;/p&gt;

&lt;p&gt;Understanding this changes how you think about 2FA. It's not some opaque security feature. It's a clever application of cryptographic primitives that's been working reliably for over a decade.&lt;/p&gt;

&lt;p&gt;Next time you punch in those 6 digits, you'll know exactly what's happening behind the scenes.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you found this useful, I write about DevOps, security, and cloud infrastructure. Connect with me on Twitter&lt;/em&gt; &lt;a href="https://twitter.com/Yusadolat" rel="noopener noreferrer"&gt;&lt;em&gt;@Yusadolat&lt;/em&gt;&lt;/a&gt;&lt;em&gt;.&lt;/em&gt;&lt;/p&gt;


</description>
      <category>totp</category>
      <category>authentication</category>
      <category>google</category>
      <category>howitworks</category>
    </item>
    <item>
      <title>How to Speed Up AWS CodeBuild Docker Builds by 25% or more Using ECR as a Remote Cache</title>
      <dc:creator>Yusuf Adeyemo</dc:creator>
      <pubDate>Mon, 20 Oct 2025 14:25:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-speed-up-aws-codebuild-docker-builds-by-25-or-more-using-ecr-as-a-remote-cache-1a8m</link>
      <guid>https://dev.to/aws-builders/how-to-speed-up-aws-codebuild-docker-builds-by-25-or-more-using-ecr-as-a-remote-cache-1a8m</guid>
      <description>&lt;p&gt;Have you ever sat there waiting for your CodeBuild project to rebuild your entire Docker image... again? Even though you only changed a single line of code?&lt;/p&gt;

&lt;p&gt;Yeah, me too. And it's frustrating.&lt;/p&gt;

&lt;p&gt;Today, I'm going to show you how I reduced our Docker build times from &lt;strong&gt;~6 minutes down to ~2 minutes&lt;/strong&gt; by implementing Amazon ECR as a persistent cache backend. This is based on an &lt;a href="https://aws.amazon.com/blogs/devops/reduce-docker-image-build-time-on-aws-codebuild-using-amazon-ecr-as-a-remote-cache/" rel="noopener noreferrer"&gt;official AWS blog post&lt;/a&gt;, but I'll walk you through the practical implementation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrho1uc86zoq5gcmz6io.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flrho1uc86zoq5gcmz6io.png" alt="time comparer befpre build snd after" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Why Your Builds Are Slow
&lt;/h2&gt;

&lt;p&gt;Here's the thing about AWS CodeBuild: every build runs in a &lt;strong&gt;completely fresh, isolated environment&lt;/strong&gt;. That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No build artifacts carry over between builds&lt;/li&gt;
&lt;li&gt;Every build starts from scratch&lt;/li&gt;
&lt;li&gt;CodeBuild's "local cache" is temporary and unreliable (works on a "best-effort" basis)&lt;/li&gt;
&lt;li&gt;If your builds happen at different times throughout the day, the local cache probably isn't helping you&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So even if you only changed one line in your code, CodeBuild rebuilds every single Docker layer. Every. Single. Time.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: ECR Registry Cache Backend
&lt;/h2&gt;

&lt;p&gt;The solution is surprisingly elegant: store your Docker layer cache &lt;strong&gt;persistently&lt;/strong&gt; in Amazon ECR (Elastic Container Registry). Think of it as a separate "cache image" that lives alongside your actual application image.&lt;/p&gt;

&lt;p&gt;Here's how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;First Build&lt;/strong&gt;: Build from scratch, then export the cache to ECR as a separate image&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subsequent Builds&lt;/strong&gt;: Import the cache from ECR, rebuild only what changed, export the updated cache back&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The beauty? Your cache is always available, no matter when you trigger a build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft950stit2omipb5jh73r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft950stit2omipb5jh73r.png" alt="build cache" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What You'll Need
&lt;/h2&gt;

&lt;p&gt;Before we start, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An existing AWS CodeBuild project that builds Docker images&lt;/li&gt;
&lt;li&gt;An ECR repository where your images are stored&lt;/li&gt;
&lt;li&gt;IAM permissions for your CodeBuild role to push/pull from ECR (if you can already push images, you're good!)&lt;/li&gt;
&lt;li&gt;About 10 minutes to implement this&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step-by-Step Implementation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Understanding Your Current Buildspec
&lt;/h3&gt;

&lt;p&gt;Your current buildspec probably looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.2&lt;/span&gt;
&lt;span class="na"&gt;phases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;install&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;aws ecr get-login-password | docker login ...&lt;/span&gt;

  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker build -t myapp:latest .&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker tag myapp:latest $ECR_REPO:latest&lt;/span&gt;

  &lt;span class="na"&gt;post_build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker push $ECR_REPO:latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the "basic" approach. Every build starts from zero.&lt;/p&gt;


&lt;h3&gt;
  
  
  Step 2: Add Cache Tag Variable
&lt;/h3&gt;

&lt;p&gt;First, let's define a separate tag for our cache image. In your &lt;code&gt;install&lt;/code&gt; phase, add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;install&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CACHE_TAG=dev-cache&lt;/span&gt;  &lt;span class="c1"&gt;# or prod-cache, staging-cache, etc.&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;IMAGE_TAG=latest&lt;/span&gt;     &lt;span class="c1"&gt;# your actual app image tag&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a separate cache image (e.g., &lt;code&gt;myapp:dev-cache&lt;/code&gt;) that's distinct from your application image (&lt;code&gt;myapp:latest&lt;/code&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Create the Buildx Builder
&lt;/h3&gt;

&lt;p&gt;Here's the key part: Docker's default builder doesn't support registry cache backends. We need to create a new builder using &lt;strong&gt;buildx&lt;/strong&gt; with the &lt;strong&gt;containerd&lt;/strong&gt; driver.&lt;/p&gt;

&lt;p&gt;Add this to your &lt;code&gt;install&lt;/code&gt; phase:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;install&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# ... your existing commands ...&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker buildx create --name containerd --driver=docker-container --driver-opt default-load=true --use || docker buildx use containerd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What's happening here?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;docker buildx create&lt;/code&gt;: Creates a new builder instance&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--driver=docker-container&lt;/code&gt;: Runs BuildKit in a dedicated container (required for the registry cache backend)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--driver-opt default-load=true&lt;/code&gt;: Loads built images into local Docker (important!)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;|| docker buildx use containerd&lt;/code&gt;: If the builder already exists, just switch to it&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 4: Replace Your Docker Build Command
&lt;/h3&gt;

&lt;p&gt;Now replace your regular &lt;code&gt;docker build&lt;/code&gt; command with the new buildx version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;docker buildx build \&lt;/span&gt;
        &lt;span class="s"&gt;--builder=containerd \&lt;/span&gt;
        &lt;span class="s"&gt;--cache-from type=registry,ref=$ECR_REPO:$CACHE_TAG \&lt;/span&gt;
        &lt;span class="s"&gt;--cache-to type=registry,ref=$ECR_REPO:$CACHE_TAG,mode=max,image-manifest=true \&lt;/span&gt;
        &lt;span class="s"&gt;-t $ECR_REPO:$IMAGE_TAG \&lt;/span&gt;
        &lt;span class="s"&gt;--load \&lt;/span&gt;
        &lt;span class="s"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me break down what each flag does:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfuhbptz3p933ypyfsnx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqfuhbptz3p933ypyfsnx.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;--builder=containerd&lt;/code&gt;: Use the builder we just created&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--cache-from type=registry,ref=$ECR_REPO:$CACHE_TAG&lt;/code&gt;: &lt;strong&gt;Import cache&lt;/strong&gt; from ECR&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;--cache-to type=registry,ref=$ECR_REPO:$CACHE_TAG,mode=max,image-manifest=true&lt;/code&gt;: &lt;strong&gt;Export cache&lt;/strong&gt; back to ECR

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;mode=max&lt;/code&gt;: Export all layers (recommended for best caching)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;image-manifest=true&lt;/code&gt;: Required for ECR storage&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;-t $ECR_REPO:$IMAGE_TAG&lt;/code&gt;: Tag your final image as usual&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;--load&lt;/code&gt;: Load the built image into local Docker (so you can run it in post_build)&lt;/li&gt;

&lt;li&gt;
&lt;code&gt;.&lt;/code&gt;: Your Dockerfile location&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 5: Complete Example Buildspec
&lt;/h3&gt;

&lt;p&gt;Here's what a complete, production-ready buildspec looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.2&lt;/span&gt;
&lt;span class="na"&gt;phases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;install&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo Logging in to Amazon ECR&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CACHE_TAG=dev-cache&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;IMAGE_TAG=latest&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;ECR_REPO=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/myapp&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker buildx create --name containerd --driver=docker-container --driver-opt default-load=true --use || docker buildx use containerd&lt;/span&gt;

  &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo Build started on `date`&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;docker buildx build \&lt;/span&gt;
          &lt;span class="s"&gt;--builder=containerd \&lt;/span&gt;
          &lt;span class="s"&gt;--cache-from type=registry,ref=$ECR_REPO:$CACHE_TAG \&lt;/span&gt;
          &lt;span class="s"&gt;--cache-to type=registry,ref=$ECR_REPO:$CACHE_TAG,mode=max,image-manifest=true \&lt;/span&gt;
          &lt;span class="s"&gt;-t $ECR_REPO:$IMAGE_TAG \&lt;/span&gt;
          &lt;span class="s"&gt;--load \&lt;/span&gt;
          &lt;span class="s"&gt;.&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker tag $ECR_REPO:$IMAGE_TAG $ECR_REPO:latest&lt;/span&gt;

  &lt;span class="na"&gt;post_build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo Build completed on `date`&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker push $ECR_REPO:$IMAGE_TAG&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker push $ECR_REPO:latest&lt;/span&gt;

&lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;imageDefinitions.json&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 6: Update Your CodeBuild Project
&lt;/h3&gt;

&lt;p&gt;You can update your buildspec in two ways:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 1: If your buildspec is in your repo&lt;/strong&gt;&lt;br&gt;
Just commit the changes and push. CodeBuild will pick up the new buildspec automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 2: If your buildspec is defined in CodeBuild&lt;/strong&gt;&lt;br&gt;
Use the AWS CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws codebuild update-project &lt;span class="nt"&gt;--name&lt;/span&gt; your-project-name &lt;span class="nt"&gt;--cli-input-json&lt;/span&gt; file://buildspec.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or update it through the AWS Console: CodeBuild → Your Project → Edit → Buildspec&lt;/p&gt;

&lt;h2&gt;
  
  
  What to Expect: First Build vs Subsequent Builds
&lt;/h2&gt;

&lt;h3&gt;
  
  
  First Build (The Investment)
&lt;/h3&gt;

&lt;p&gt;Your first build after implementing this will actually take &lt;strong&gt;slightly longer&lt;/strong&gt; (maybe 30-60 seconds more). Don't panic! This is normal.&lt;/p&gt;

&lt;p&gt;Here's what's happening:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating the buildx builder (~5-10 seconds)&lt;/li&gt;
&lt;li&gt;Attempting to import cache (fails - no cache exists yet)&lt;/li&gt;
&lt;li&gt;Building all layers from scratch&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exporting the cache to ECR&lt;/strong&gt; (new step, adds ~20-40 seconds)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You'll see messages like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=&amp;gt; importing cache manifest from $ECR_REPO:dev-cache
=&amp;gt; error: not found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is expected! The cache doesn't exist yet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Subsequent Builds (The Payoff)
&lt;/h3&gt;

&lt;p&gt;This is where the magic happens. Your next builds will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Successfully import the cache from ECR&lt;/li&gt;
&lt;li&gt;Identify which layers haven't changed&lt;/li&gt;
&lt;li&gt;Reuse cached layers (fast!)&lt;/li&gt;
&lt;li&gt;Rebuild only the changed layers&lt;/li&gt;
&lt;li&gt;Export the updated cache&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Expected time savings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Before&lt;/strong&gt;: 6-7 minutes (full rebuild every time)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;After&lt;/strong&gt;: 5-5.5 minutes (25-30% faster!)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Savings&lt;/strong&gt;: 1-2 minutes per build&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're doing 10 builds a day, that's &lt;strong&gt;10-20 minutes saved daily&lt;/strong&gt;. Over a month? That's &lt;strong&gt;5-10 hours&lt;/strong&gt; of compute time and costs saved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verifying It's Working
&lt;/h2&gt;

&lt;p&gt;After your first build completes, check your ECR repository. You should now see &lt;strong&gt;two image tags&lt;/strong&gt;:&lt;/p&gt;


&lt;ol&gt;
&lt;li&gt;Your application image (e.g., &lt;code&gt;latest&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Your cache image (e.g., &lt;code&gt;dev-cache&lt;/code&gt;)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The cache image will be roughly the same size as your application image - this is normal! It's storing all the layer information.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting Common Issues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Issue 1: "buildx: command not found"
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Update your CodeBuild image to a newer version. Use &lt;code&gt;aws/codebuild/standard:7.0&lt;/code&gt; or later (or the ARM equivalent).&lt;/p&gt;

&lt;h3&gt;
  
  
  Issue 2: Cache Import Keeps Failing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Check your IAM permissions. Your CodeBuild role needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;ecr:BatchGetImage&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ecr:GetDownloadUrlForLayer&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ecr:BatchCheckLayerAvailability&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ecr:PutImage&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ecr:InitiateLayerUpload&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ecr:UploadLayerPart&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;ecr:CompleteLayerUpload&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Issue 3: Build Hangs at "exporting cache"
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Make sure &lt;code&gt;privilegedMode: true&lt;/code&gt; is enabled in your CodeBuild environment settings. This is required for Docker-in-Docker operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advanced: Multi-Environment Setup
&lt;/h2&gt;

&lt;p&gt;If you have multiple environments (dev, staging, prod), use different cache tags for each:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;CACHE_TAG=${ENVIRONMENT}-cache&lt;/span&gt;  &lt;span class="c1"&gt;# Results in: dev-cache, staging-cache, prod-cache&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dev builds don't invalidate staging cache&lt;/li&gt;
&lt;li&gt;Each environment maintains its own optimized cache&lt;/li&gt;
&lt;li&gt;You can still share a base cache if needed&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cost Considerations
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Storage Cost&lt;/strong&gt;: You're now storing an additional cache image in ECR. At roughly the same size as your app image, this might add $0.10-0.50/month per repository depending on image size.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compute Savings&lt;/strong&gt;: Faster builds = less compute time. If you're saving 1-2 minutes per build and doing 10 builds/day, that's roughly 300-600 fewer compute minutes (5-10 hours) per month. At ~$0.005/minute for &lt;code&gt;BUILD_GENERAL1_SMALL&lt;/code&gt;, you could save around $1.50-3/month.&lt;/p&gt;
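A quick back-of-envelope check of those figures, using the 1-2 minutes saved per build and 10 builds/day from earlier (the per-minute rate is an assumption based on BUILD_GENERAL1_SMALL pricing - check the current AWS price list for your region):

```python
# Monthly savings from 1-2 minutes saved per build at 10 builds/day
RATE_PER_MIN = 0.005  # USD/min - assumed BUILD_GENERAL1_SMALL rate
BUILDS_PER_DAY, DAYS = 10, 30

for saved_min in (1, 2):
    monthly_min = saved_min * BUILDS_PER_DAY * DAYS
    print(f'{saved_min} min/build -> {monthly_min / 60:g} hours, '
          f'${monthly_min * RATE_PER_MIN:.2f}/month')
# 1 min/build -> 5 hours, $1.50/month
# 2 min/build -> 10 hours, $3.00/month
```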

&lt;p&gt;&lt;strong&gt;Net Result&lt;/strong&gt;: Typically a small net savings, plus the huge developer experience win of faster feedback loops.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By implementing ECR as a remote cache backend for your CodeBuild Docker builds, you get:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;25-30% faster build times&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Persistent, reliable caching&lt;/strong&gt; across all builds&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Better layer reuse&lt;/strong&gt; with intelligent cache management&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Minimal code changes&lt;/strong&gt; (just updating your buildspec)&lt;br&gt;&lt;br&gt;
✅ &lt;strong&gt;Cost savings&lt;/strong&gt; from reduced compute time  &lt;/p&gt;

&lt;p&gt;The implementation is straightforward, and the benefits are immediate (after the first build). Give it a try on your next project!&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/devops/reduce-docker-image-build-time-on-aws-codebuild-using-amazon-ecr-as-a-remote-cache/" rel="noopener noreferrer"&gt;AWS Blog: Reduce Docker image build time using ECR as a remote cache&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/build/buildx/" rel="noopener noreferrer"&gt;Docker Buildx Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.docker.com/build/cache/backends/" rel="noopener noreferrer"&gt;Docker Cache Backends Documentation&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Got questions or run into issues?&lt;/strong&gt; Drop a comment below - I'd love to hear about your experience implementing this!&lt;/p&gt;




</description>
      <category>aws</category>
      <category>docker</category>
      <category>cache</category>
    </item>
    <item>
      <title>How to Speed Up AWS CodeBuild Docker Builds by 25% or more Using ECR as a Remote Cache</title>
      <dc:creator>Yusuf Adeyemo</dc:creator>
      <pubDate>Mon, 20 Oct 2025 08:50:14 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-speed-up-aws-codebuild-docker-builds-by-25-or-more-using-ecr-as-a-remote-cache-3e90</link>
      <guid>https://dev.to/aws-builders/how-to-speed-up-aws-codebuild-docker-builds-by-25-or-more-using-ecr-as-a-remote-cache-3e90</guid>
      <description>&lt;p&gt;Have you ever sat there waiting for your CodeBuild project to rebuild your entire Docker image... again? Even though you only changed a single line of code?&lt;/p&gt;

&lt;p&gt;Yeah, me too. And it's frustrating.&lt;/p&gt;

&lt;p&gt;Today, I'm going to show you how I reduced our Docker build times from &lt;strong&gt;~7 minutes down to ~2 minutes&lt;/strong&gt; by implementing Amazon ECR as a persistent cache backend. This is based on an &lt;a href="https://aws.amazon.com/blogs/devops/reduce-docker-image-build-time-on-aws-codebuild-using-amazon-ecr-as-a-remote-cache/" rel="noopener noreferrer"&gt;official AWS blog post&lt;/a&gt;, but I'll walk you through the practical implementation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnzsc8xqehah6i4rw7cd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnzsc8xqehah6i4rw7cd.png" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Why Your Builds Are Slow
&lt;/h2&gt;

&lt;p&gt;Here's the thing about AWS CodeBuild: every build runs in a &lt;strong&gt;completely fresh, isolated environment&lt;/strong&gt;. That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No build artifacts carry over between builds&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Every build starts from scratch&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CodeBuild's "local cache" is temporary and unreliable (works on a "best-effort" basis)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If your builds happen at different times throughout the day, the local cache probably isn't helping you&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So even if you only changed one line in your code, CodeBuild rebuilds every single Docker layer. Every. Single. Time.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: ECR Registry Cache Backend
&lt;/h2&gt;

&lt;p&gt;The solution is surprisingly elegant: store your Docker layer cache &lt;strong&gt;persistently&lt;/strong&gt; in Amazon ECR (Elastic Container Registry). Think of it as a separate "cache image" that lives alongside your actual application image.&lt;/p&gt;

&lt;p&gt;Here's how it works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;First Build&lt;/strong&gt;: Build from scratch, then export the cache to ECR as a separate image&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Subsequent Builds&lt;/strong&gt;: Import the cache from ECR, rebuild only what changed, export the updated cache back&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The beauty? Your cache is always available, no matter when you trigger a build.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak891kmradmovzyvwqqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak891kmradmovzyvwqqr.png" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What You'll Need
&lt;/h2&gt;

&lt;p&gt;Before we start, make sure you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An existing AWS CodeBuild project that builds Docker images&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An ECR repository where your images are stored&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IAM permissions for your CodeBuild role to push/pull from ECR (if you can already push images, you're good!)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;About 10 minutes to implement this&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step-by-Step Implementation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Understanding Your Current Buildspec
&lt;/h3&gt;

&lt;p&gt;Your current buildspec probably looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2phases: install: commands: - aws ecr get-login-password | docker login ... build: commands: - docker build -t myapp:latest . - docker tag myapp:latest $ECR_REPO:latest post_build: commands: - docker push $ECR_REPO:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the "basic" approach. Every build starts from zero.&lt;/p&gt;


&lt;h3&gt;
  
  
  Step 2: Add Cache Tag Variable
&lt;/h3&gt;

&lt;p&gt;First, let's define a separate tag for our cache image. In your &lt;code&gt;install&lt;/code&gt; phase, add:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;install: commands: - CACHE_TAG=dev-cache # or prod-cache, staging-cache, etc. - IMAGE_TAG=latest # your actual app image tag
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates a separate cache image (e.g., &lt;code&gt;myapp:dev-cache&lt;/code&gt;) that's distinct from your application image (&lt;code&gt;myapp:latest&lt;/code&gt;).&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Create the Buildx Builder
&lt;/h3&gt;

&lt;p&gt;Here's the key part: Docker's default builder doesn't support registry cache backends. We need to create a new builder using &lt;strong&gt;buildx&lt;/strong&gt; with the &lt;strong&gt;containerd&lt;/strong&gt; driver.&lt;/p&gt;

&lt;p&gt;Add this to your &lt;code&gt;install&lt;/code&gt; phase:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;install: commands: # ... your existing commands ... - docker buildx create --name containerd --driver=docker-container --driver-opt default-load=true --use || docker buildx use containerd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;What's happening here?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker buildx create&lt;/code&gt;: Creates a new builder instance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--driver=docker-container&lt;/code&gt;: Runs BuildKit in a dedicated container (the default &lt;code&gt;docker&lt;/code&gt; driver doesn't support registry cache export)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--driver-opt default-load=true&lt;/code&gt;: Loads built images into local Docker (important!)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;|| docker buildx use containerd&lt;/code&gt;: If the builder already exists, just switch to it&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 4: Replace Your Docker Build Command
&lt;/h3&gt;

&lt;p&gt;Now replace your regular &lt;code&gt;docker build&lt;/code&gt; command with the new buildx version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build: commands: - | docker buildx build \ --builder=containerd \ --cache-from type=registry,ref=$ECR_REPO:$CACHE_TAG \ --cache-to type=registry,ref=$ECR_REPO:$CACHE_TAG,mode=max,image-manifest=true \ -t $ECR_REPO:$IMAGE_TAG \ --load \ .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let me break down what each flag does:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxvqecj12ld067zac143.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhxvqecj12ld067zac143.png" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--builder=containerd&lt;/code&gt;: Use the builder we just created&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--cache-from type=registry,ref=$ECR_REPO:$CACHE_TAG&lt;/code&gt;: &lt;strong&gt;Import cache&lt;/strong&gt; from ECR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--cache-to type=registry,ref=$ECR_REPO:$CACHE_TAG,mode=max,image-manifest=true&lt;/code&gt;: &lt;strong&gt;Export cache&lt;/strong&gt; back to ECR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;-t $ECR_REPO:$IMAGE_TAG&lt;/code&gt;: Tag your final image as usual&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--load&lt;/code&gt;: Load the built image into local Docker (so you can run it in post_build)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;.&lt;/code&gt;: Your Dockerfile location&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 5: Complete Example Buildspec
&lt;/h3&gt;

&lt;p&gt;Here's what a complete, production-ready buildspec looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2phases: install: commands: - echo Logging in to Amazon ECR - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com - CACHE_TAG=dev-cache - IMAGE_TAG=latest - ECR_REPO=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/myapp - docker buildx create --name containerd --driver=docker-container --driver-opt default-load=true --use || docker buildx use containerd build: commands: - echo Build started on `date` - | docker buildx build \ --builder=containerd \ --cache-from type=registry,ref=$ECR_REPO:$CACHE_TAG \ --cache-to type=registry,ref=$ECR_REPO:$CACHE_TAG,mode=max,image-manifest=true \ -t $ECR_REPO:$IMAGE_TAG \ --load \ . - docker tag $ECR_REPO:$IMAGE_TAG $ECR_REPO:latest post_build: commands: - echo Build completed on `date` - docker push $ECR_REPO:$IMAGE_TAG - docker push $ECR_REPO:latestartifacts: files: - imageDefinitions.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 6: Update Your CodeBuild Project
&lt;/h3&gt;

&lt;p&gt;You can update your buildspec in two ways:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 1: If your buildspec is in your repo.&lt;/strong&gt; Just commit the changes and push. CodeBuild will pick up the new buildspec automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 2: If your buildspec is defined in CodeBuild.&lt;/strong&gt; Use the AWS CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws codebuild update-project --name your-project-name --cli-input-json file://buildspec.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or update it through the AWS Console: CodeBuild → Your Project → Edit → Buildspec&lt;/p&gt;

&lt;h2&gt;
  
  
  What to Expect: First Build vs Subsequent Builds
&lt;/h2&gt;

&lt;h3&gt;
  
  
  First Build (The Investment)
&lt;/h3&gt;

&lt;p&gt;Your first build after implementing this will actually take &lt;strong&gt;slightly longer&lt;/strong&gt; (maybe 30-60 seconds more). Don't panic! This is normal.&lt;/p&gt;

&lt;p&gt;Here's what's happening:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Creating the buildx builder (~5-10 seconds)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attempting to import cache (fails - no cache exists yet)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Building all layers from scratch&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Exporting the cache to ECR&lt;/strong&gt; (new step, adds ~20-40 seconds)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You'll see messages like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;=&amp;gt; importing cache manifest from $ECR_REPO:dev-cache=&amp;gt; error: not found
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is expected! The cache doesn't exist yet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Subsequent Builds (The Payoff)
&lt;/h3&gt;

&lt;p&gt;This is where the magic happens. Your next builds will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Successfully import the cache from ECR&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identify which layers haven't changed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reuse cached layers (fast!)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rebuild only the changed layers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Export the updated cache&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Expected time savings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Before&lt;/strong&gt;: 6-7 minutes (full rebuild every time)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;After&lt;/strong&gt;: 5-5.5 minutes (25-30% faster!)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Savings&lt;/strong&gt;: 1-2 minutes per build&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're doing 10 builds a day, that's &lt;strong&gt;10-20 minutes saved daily&lt;/strong&gt;. Over a month? That's &lt;strong&gt;5-10 hours&lt;/strong&gt; of compute time and costs saved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Verifying It's Working
&lt;/h2&gt;

&lt;p&gt;After your first build completes, check your ECR repository. You should now see &lt;strong&gt;two image tags&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Your application image (e.g., &lt;code&gt;latest&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Your cache image (e.g., &lt;code&gt;dev-cache&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The cache image will be roughly the same size as your application image - this is normal! It's storing all the layer information.&lt;/p&gt;
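&lt;p&gt;You can also confirm this from the CLI. A quick check (the repository name is an example - use your own):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr describe-images --repository-name myapp --query 'imageDetails[].imageTags' --output json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both your app tag (e.g., &lt;code&gt;latest&lt;/code&gt;) and the cache tag (e.g., &lt;code&gt;dev-cache&lt;/code&gt;) should appear in the output.&lt;/p&gt;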

&lt;h2&gt;
  
  
  Troubleshooting Common Issues
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Issue 1: "buildx: command not found"
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Update your CodeBuild image to a newer version. Use &lt;code&gt;aws/codebuild/standard:7.0&lt;/code&gt; or later (or the ARM equivalent).&lt;/p&gt;

&lt;h3&gt;
  
  
  Issue 2: Cache Import Keeps Failing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Check your IAM permissions. Your CodeBuild role needs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ecr:BatchGetImage&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ecr:GetDownloadUrlForLayer&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ecr:BatchCheckLayerAvailability&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ecr:PutImage&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ecr:InitiateLayerUpload&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ecr:UploadLayerPart&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;ecr:CompleteLayerUpload&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
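
&lt;p&gt;As a sketch, a policy covering these actions might look like the following (the account, region, and repository name are placeholders - scope the resource to your own repo; &lt;code&gt;ecr:GetAuthorizationToken&lt;/code&gt; is also needed for the login step and must be allowed on &lt;code&gt;*&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "ecr:GetAuthorizationToken", "Resource": "*" },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchCheckLayerAvailability",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ],
      "Resource": "arn:aws:ecr:us-east-1:123456789012:repository/myapp"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;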

&lt;h3&gt;
  
  
  Issue 3: Build Hangs at "exporting cache"
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;: Make sure &lt;code&gt;privilegedMode: true&lt;/code&gt; is enabled in your CodeBuild environment settings. This is required for Docker-in-Docker operations.&lt;/p&gt;
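
&lt;p&gt;If you manage the project from the CLI, you can flip this flag with &lt;code&gt;update-project&lt;/code&gt;. Note that &lt;code&gt;--environment&lt;/code&gt; replaces the whole environment block, so the image and compute type below are examples - match them to your project's current settings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws codebuild update-project \
  --name your-project-name \
  --environment type=LINUX_CONTAINER,image=aws/codebuild/standard:7.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;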

&lt;h2&gt;
  
  
  Advanced: Multi-Environment Setup
&lt;/h2&gt;

&lt;p&gt;If you have multiple environments (dev, staging, prod), use different cache tags for each:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- CACHE_TAG=${ENVIRONMENT}-cache # Results in: dev-cache, staging-cache, prod-cache
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Dev builds don't invalidate staging cache&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each environment maintains its own optimized cache&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can still share a base cache if needed&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
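
&lt;p&gt;Sharing a base cache works because buildx accepts multiple &lt;code&gt;--cache-from&lt;/code&gt; sources - it falls back to the next one when a layer isn't found. A sketch (the &lt;code&gt;base-cache&lt;/code&gt; tag is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker buildx build \
  --builder=containerd \
  --cache-from type=registry,ref=$ECR_REPO:${ENVIRONMENT}-cache \
  --cache-from type=registry,ref=$ECR_REPO:base-cache \
  --cache-to type=registry,ref=$ECR_REPO:${ENVIRONMENT}-cache,mode=max,image-manifest=true \
  -t $ECR_REPO:$IMAGE_TAG \
  --load \
  .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;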

&lt;h2&gt;
  
  
  Cost Considerations
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Storage Cost&lt;/strong&gt;: You're now storing an additional cache image in ECR. At roughly the same size as your app image, this might add $0.10-0.50/month per repository depending on image size.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compute Savings&lt;/strong&gt;: Faster builds = less compute time. If you're saving 1-2 minutes per build and doing 10 builds/day, that's 10-20 minutes saved daily - roughly 5-10 fewer compute hours per month. At ~$0.005/minute for &lt;code&gt;BUILD_GENERAL1_SMALL&lt;/code&gt;, you could save $1.50-$3/month.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Net Result&lt;/strong&gt;: Typically a small net savings, plus the huge developer experience win of faster feedback loops.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By implementing ECR as a remote cache backend for your CodeBuild Docker builds, you get:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Faster build times&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
 &lt;strong&gt;Persistent, reliable caching&lt;/strong&gt; across all builds&lt;br&gt;&lt;br&gt;
 &lt;strong&gt;Better layer reuse&lt;/strong&gt; with intelligent cache management&lt;br&gt;&lt;br&gt;
 &lt;strong&gt;Minimal code changes&lt;/strong&gt; (just updating your buildspec)&lt;br&gt;&lt;br&gt;
 &lt;strong&gt;Cost savings&lt;/strong&gt; from reduced compute time&lt;/p&gt;

&lt;p&gt;The implementation is straightforward, and the benefits are immediate (after the first build). Give it a try on your next project!&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/devops/reduce-docker-image-build-time-on-aws-codebuild-using-amazon-ecr-as-a-remote-cache/" rel="noopener noreferrer"&gt;AWS Blog: Reduce Docker image build time using ECR as a remote cache&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.docker.com/build/buildx/" rel="noopener noreferrer"&gt;Docker Buildx Documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.docker.com/build/cache/backends/" rel="noopener noreferrer"&gt;Docker Cache Backends Documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Got questions or run into issues?&lt;/strong&gt; Drop a comment below - I'd love to hear about your experience implementing this!&lt;/p&gt;





</description>
      <category>aws</category>
      <category>docker</category>
      <category>caching</category>
      <category>ecr</category>
    </item>
    <item>
      <title>Do You Really Know the Difference Between L1, L2, and L3 CDK Constructs?</title>
      <dc:creator>Yusuf Adeyemo</dc:creator>
      <pubDate>Sat, 26 Jul 2025 08:23:36 +0000</pubDate>
      <link>https://dev.to/aws-builders/do-you-really-know-the-difference-between-l1-l2-and-l3-cdk-constructs-i43</link>
      <guid>https://dev.to/aws-builders/do-you-really-know-the-difference-between-l1-l2-and-l3-cdk-constructs-i43</guid>
      <description>&lt;p&gt;After you complete this article, you will have a solid understanding of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;What L1, L2, and L3 constructs actually are and when to use each&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Why AWS created three different abstraction levels (and the hidden benefits)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How to avoid the most common CDK construct mistakes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When to break the rules and mix construct levels&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Have You Ever Been Confused by CDK Construct Levels?
&lt;/h2&gt;

&lt;p&gt;If you've ever started learning AWS CDK, you've probably encountered code like this and wondered why there are so many ways to create the same S3 bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Wait, what? Three different ways to create a bucket? 
import { CfnBucket } from 'aws-cdk-lib/aws-s3'; 
import { Bucket } from 'aws-cdk-lib/aws-s3'; 
import { StaticWebsite } from '@aws-solutions-constructs/aws-s3-cloudfront';
// Which one should I use? 🤔
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And then you see this error that makes you question everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Cannot use property type 'BucketProps' with L1 construct 'CfnBucket'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;"But they're both S3 buckets! Why can't I use the same properties?"&lt;/p&gt;

&lt;p&gt;Let me help you understand these construct levels once and for all.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are CDK Constructs Anyway?
&lt;/h2&gt;

&lt;p&gt;Think of CDK constructs as LEGO blocks for your cloud infrastructure. Just like LEGO has basic bricks, specialized pieces, and complete sets, CDK has three levels of constructs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdewp7ef9niv1p62g99c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmdewp7ef9niv1p62g99c.png" alt="A pyramid diagram showing three levels - L1 at the bottom (Basic LEGO bricks), L2 in the middle (Specialized LEGO pieces like wheels, windows), and L3 at the top (Complete LEGO sets like a castle or spaceship" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Level 1 (L1) Constructs: The Raw CloudFormation Experience
&lt;/h2&gt;

&lt;p&gt;L1 constructs are the most basic building blocks. They start with &lt;code&gt;Cfn&lt;/code&gt; (short for CloudFormation) and map directly to CloudFormation resources. No magic, no shortcuts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { CfnBucket } from 'aws-cdk-lib/aws-s3';

const bucket = new CfnBucket(this, 'MyL1Bucket', {
  bucketName: 'my-raw-bucket-2025',
  versioningConfiguration: {
    status: 'Enabled'
  },
  publicAccessBlockConfiguration: {
    blockPublicAcls: true,
    blockPublicPolicy: true,
    ignorePublicAcls: true,
    restrictPublicBuckets: true
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice how verbose this is? You have to configure EVERYTHING manually. It's like writing CloudFormation in TypeScript.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Would You Ever Use L1 Constructs?
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Brand New AWS Services&lt;/strong&gt; - When AWS releases a new service, L1 support comes first&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Debugging L2/L3 Issues&lt;/strong&gt; - Sometimes you need to see what's really happening&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Migrating from CloudFormation&lt;/strong&gt; - Direct 1:1 mapping makes migration easier&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Edge Cases&lt;/strong&gt; - When you need a specific CloudFormation property not exposed in L2&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Level 2 (L2) Constructs: The Sweet Spot
&lt;/h2&gt;

&lt;p&gt;L2 constructs are what most developers use daily. They provide sensible defaults, helper methods, and hide complexity while still giving you control.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Bucket, BucketEncryption } from 'aws-cdk-lib/aws-s3';

const bucket = new Bucket(this, 'MyL2Bucket', {
  bucketName: 'my-friendly-bucket-2025',
  versioned: true,
  encryption: BucketEncryption.S3_MANAGED,
  removalPolicy: RemovalPolicy.DESTROY // Much cleaner!
});

// Look at these helper methods! 
bucket.grantRead(myLambdaFunction);
bucket.addLifecycleRule({
  expiration: Duration.days(90)
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See the difference? L2 constructs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Use friendly property names (&lt;code&gt;versioned&lt;/code&gt; vs &lt;code&gt;versioningConfiguration&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide helper methods (&lt;code&gt;grantRead()&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set security best practices by default&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handle resource dependencies automatically&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc51mo6gynpdks6bohrlk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc51mo6gynpdks6bohrlk.png" alt="Side-by-side comparison showing L1 code (20+ lines) vs L2 code (15 lines) creating the same bucket" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Level 3 (L3) Constructs: Complete Solutions
&lt;/h2&gt;

&lt;p&gt;L3 constructs (also called patterns) are pre-built architectures for common use cases. They combine multiple resources into a working solution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { StaticWebsite } from '@aws-solutions-constructs/aws-s3-cloudfront';

const website = new StaticWebsite(this, 'MyWebsite', {
  websiteIndexDocument: 'index.html',
  websiteErrorDocument: 'error.html'
});

// That's it! You just created:
// - S3 bucket with proper website configuration
// - CloudFront distribution
// - Origin Access Identity
// - Proper IAM policies
// - HTTPS redirect
// - Security headers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With just a few lines, you get a production-ready static website setup that would take hundreds of lines in L1.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Mistakes That Will Drive You Crazy
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Mistake #1: Mixing Property Types
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// 🚫 This won't work!
const bucket = new CfnBucket(this, 'MyBucket', {
  encryption: BucketEncryption.S3_MANAGED // L2 property type
});

// ✅ Use the correct L1 property type
const bucket = new CfnBucket(this, 'MyBucket', {
  bucketEncryption: {
    serverSideEncryptionConfiguration: [{
      serverSideEncryptionByDefault: {
        sseAlgorithm: 'AES256'
      }
    }]
  }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Mistake #2: Assuming L3 Constructs Are Always Better
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Using L3 when you need specific customization
const website = new StaticWebsite(this, 'MyWebsite', {
  // Oh no! I can't set specific CloudFront behaviors
  // or custom cache policies here! 😱
});

// Sometimes L2 gives you more control
const bucket = new Bucket(this, 'WebBucket');
const distribution = new CloudFrontWebDistribution(this, 'MyDist', {
  // Full control over every setting
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Mistake #3: Not Using Escape Hatches
&lt;/h3&gt;

&lt;p&gt;What if you need to modify an L2 construct's underlying L1 resource?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const bucket = new Bucket(this, 'MyBucket');

// Access the L1 construct (escape hatch)
const cfnBucket = bucket.node.defaultChild as CfnBucket;

// Now you can set ANY CloudFormation property
cfnBucket.analyticsConfigurations = [{
  id: 'my-analytics',
  storageClassAnalysis: {
    dataExport: {
      destination: {
        bucketArn: 'arn:aws:s3:::my-analytics-bucket'
      }
    }
  }
}];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Hidden Benefits of Each Level
&lt;/h2&gt;

&lt;h3&gt;
  
  
  L1 Benefits You Didn't Know About
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Immediate AWS Feature Support&lt;/strong&gt; - No waiting for CDK updates&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;CloudFormation Parity&lt;/strong&gt; - Easy to convert existing templates&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Learning Tool&lt;/strong&gt; - Understand what L2 constructs do under the hood&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  L2 Benefits That Save Time
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic Security Defaults&lt;/strong&gt; - Encryption enabled by default&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-Service Integration&lt;/strong&gt; - &lt;code&gt;grant*&lt;/code&gt; methods handle IAM for you&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Type Safety&lt;/strong&gt; - Catch errors at compile time, not deployment&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  L3 Benefits for Real Projects
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Proven Architectures&lt;/strong&gt; - AWS Solutions Constructs follow best practices&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compliance Ready&lt;/strong&gt; - Many patterns are pre-validated for security&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rapid Prototyping&lt;/strong&gt; - Get a working system in minutes&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Creating Your Own L3 Construct
&lt;/h2&gt;

&lt;p&gt;Here's a practical example of creating your own pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { Construct } from 'constructs';
import { Bucket, BucketEncryption, EventType } from 'aws-cdk-lib/aws-s3';
import { LambdaDestination } from 'aws-cdk-lib/aws-s3-notifications';
import { Function, Runtime, Code } from 'aws-cdk-lib/aws-lambda';
import * as path from 'path';

export class SecureDataProcessor extends Construct {
  public readonly bucket: Bucket;
  public readonly processor: Function;

  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Create encrypted bucket
    this.bucket = new Bucket(this, 'DataBucket', {
      encryption: BucketEncryption.KMS_MANAGED,
      versioned: true,
      enforceSSL: true
    });

    // Create processing Lambda
    this.processor = new Function(this, 'Processor', {
      runtime: Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: Code.fromAsset(path.join(__dirname, 'lambda'))
    });

    // Wire them together
    this.bucket.grantRead(this.processor);
    this.bucket.addEventNotification(
      EventType.OBJECT_CREATED,
      new LambdaDestination(this.processor)
    );
  }
}

// Now anyone can use your pattern (from inside a Stack or another construct)!
const dataProcessor = new SecureDataProcessor(this, 'MyProcessor');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  When to Use Each Construct Level
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use L1 when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You need bleeding-edge AWS features&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Migrating from CloudFormation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Debugging CDK issues&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You need a specific CloudFormation property&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use L2 when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Building most production applications&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You want security best practices by default&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You need to integrate multiple services&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You value developer productivity&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Use L3 when:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Implementing common patterns&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Rapid prototyping&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enforcing organizational standards&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You don't need heavy customization&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff786t4xwzoshf1m0k4qd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff786t4xwzoshf1m0k4qd.png" alt="A comparison table with checkmarks showing when to use each construct level" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future of CDK Constructs
&lt;/h2&gt;

&lt;p&gt;AWS is continuously improving CDK constructs. New services get L1 support immediately through CloudFormation, L2 constructs follow within weeks or months, and the community creates L3 patterns for common use cases.&lt;/p&gt;

&lt;p&gt;Remember: There's no "wrong" construct level. Each serves a purpose, and experienced CDK developers often mix levels within the same application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Was this article helpful? If so, kindly follow me on Twitter &lt;a class="mentioned-user" href="https://dev.to/yusadolat"&gt;@yusadolat&lt;/a&gt;
&lt;/h2&gt;


</description>
      <category>awscdk</category>
      <category>iac</category>
      <category>typescript</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>How to Pay AWS Bills in Naira: A Quick Guide</title>
      <dc:creator>Yusuf Adeyemo</dc:creator>
      <pubDate>Tue, 14 Jan 2025 20:03:44 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-now-supports-naira-payments-making-cloud-services-more-accessible-to-nigerians-3o4h</link>
      <guid>https://dev.to/aws-builders/aws-now-supports-naira-payments-making-cloud-services-more-accessible-to-nigerians-3o4h</guid>
      <description>&lt;p&gt;With AWS now supporting Naira, you can skip juggling foreign exchange and just focus on building. Heres how to set it up:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Log In to Your AWS Account&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Head to the AWS Console and sign in as usual.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Go to Billing and Cost Management&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You'll find this under the main menu.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Preferences and Settings&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once there, look for an option labeled &lt;em&gt;Payment Preferences&lt;/em&gt;.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;Edit Payment Currency&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Click &lt;strong&gt;Edit&lt;/strong&gt;, then pick &lt;strong&gt;Nigerian Naira (NGN)&lt;/strong&gt; from the dropdown.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fghtdxk17pe2zpzzgv3gn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fghtdxk17pe2zpzzgv3gn.png" alt="default currency" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;&lt;strong&gt;Save Changes&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Congrats, future invoices will be issued in Naira!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why This Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No More FX Fees&lt;/strong&gt;: You're not burning extra cash on currency conversions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Local Payment Ease&lt;/strong&gt;: It's simpler to manage local transactions and budgets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Aligns with the Lagos Local Zone&lt;/strong&gt;: Perfect if you're leveraging AWS's local zone in Nigeria.&lt;/p&gt;

&lt;p&gt;That's it! It's quick, it's easy, and it saves you from fiddling with exchange rates. Now you can invest the difference in testing, automation, or that next big idea.&lt;/p&gt;

&lt;p&gt;Peace to all, and happy building!&lt;/p&gt;


</description>
      <category>aws</category>
      <category>billing</category>
      <category>nigeria</category>
    </item>
    <item>
      <title>Nomad 101: The Simpler, Smarter Way to Orchestrate Applications</title>
      <dc:creator>Yusuf Adeyemo</dc:creator>
      <pubDate>Tue, 31 Dec 2024 17:34:58 +0000</pubDate>
      <link>https://dev.to/yusadolat/nomad-101-the-simpler-smarter-way-to-orchestrate-applications-dbe</link>
      <guid>https://dev.to/yusadolat/nomad-101-the-simpler-smarter-way-to-orchestrate-applications-dbe</guid>
      <description>&lt;p&gt;Nomad is a personal favorite when I need a straightforward, single-binary orchestrator that just works. Its built by HashiCorp, the folks behind Terraform and Vault, and it takes a minimalistic approach to scheduling and managing containerized (and even non-containerized) workloads. Nomad might be the perfect fit if youve ever felt that Kubernetes is overkill for a simpler workload.&lt;/p&gt;

&lt;p&gt;In this post, I'll walk you through installing Nomad, spinning it up in a small environment, and running a workload to see it in action. By the end, you'll have a solid hands-on feel for how to use Nomad. Let's dive right in.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Nomad?
&lt;/h2&gt;

&lt;p&gt;For me, Nomad offers a couple of killer advantages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplicity&lt;/strong&gt; : Nomad is a single, self-contained binary that can manage containers, VMs, and standalone applications. Configuration is straightforward and uses HCL (HashiCorp Configuration Language), which you might already know from Terraform.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Low Overhead&lt;/strong&gt; : In contrast to something like Kubernetes, which demands multiple components (etcd, kube-scheduler, kube-apiserver, etc.), Nomad keeps the architecture lean, meaning fewer moving parts and less operational complexity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scale Without the Bloat&lt;/strong&gt; : Just because it's simple doesn't mean it's small-time. Nomad can run at massive scale. Start small on a single node, then grow into a cluster as your needs evolve.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Broad Workload Support&lt;/strong&gt; : Containerized apps are the norm these days, but if you have legacy apps or specialized workloads, Nomad accommodates them too. This flexibility makes it easier to transition older systems into orchestrated environments without rewriting everything.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Setting Up Nomad Locally
&lt;/h2&gt;

&lt;p&gt;Let's talk about setting up a Nomad environment on your local machine for a quick test. I'll assume you're running on some flavor of Linux or macOS. If you're on Windows, you can still follow along using WSL2 or a VM.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Download Nomad&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Head over to the official &lt;a href="https://www.nomadproject.io/downloads" rel="noopener noreferrer"&gt;Nomad Releases page&lt;/a&gt; and download the appropriate binary. Extract it, move it to a directory in your PATH (like &lt;code&gt;/usr/local/bin&lt;/code&gt;), and you're good to go. For instance:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Development Agent&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
Nomad has a dev mode, which is a single-process setup that runs the server and client in one go, perfect for local testing. Simply run:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
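&lt;p&gt;Putting both steps together as a sketch (the version number below is illustrative; check the releases page for the latest):&lt;/p&gt;

```shell
# 1. Download, extract, and put the binary on your PATH
NOMAD_VERSION="1.9.3"
curl -fsSL -o nomad.zip \
  "https://releases.hashicorp.com/nomad/${NOMAD_VERSION}/nomad_${NOMAD_VERSION}_linux_amd64.zip"
unzip nomad.zip
sudo mv nomad /usr/local/bin/
nomad version   # sanity check

# 2. Start the all-in-one dev agent (server + client in a single process)
nomad agent -dev
```

&lt;p&gt;The dev agent keeps all state in memory, so a Ctrl-C wipes the slate clean for your next experiment.&lt;/p&gt;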




&lt;h2&gt;
  
  
  Nomad Architecture at a Glance
&lt;/h2&gt;

&lt;p&gt;In a more robust setup, Nomad is typically deployed as a &lt;strong&gt;cluster&lt;/strong&gt; of server nodes and client nodes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Server nodes&lt;/strong&gt; handle scheduling decisions and maintain cluster state.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Client nodes&lt;/strong&gt; run workloads assigned to them by the servers.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But for now, dev mode is all we need. Later on, you could spin up a 3-node server cluster with as many clients as you want.&lt;/p&gt;




&lt;h2&gt;
  
  
  Running Your First Nomad Job
&lt;/h2&gt;

&lt;p&gt;A Nomad job describes what you want to run, how many instances, resource constraints, etc. Jobs are written in HCL, so it'll feel familiar if you've ever used Terraform. Let's do a quick example by running a Docker-based web server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Basic HCL Job File
&lt;/h3&gt;

&lt;p&gt;Create a file called &lt;code&gt;nginx.nomad&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;job "nginx-web" { datacenters = ["dc1"] type = "service" group "web-group" { count = 1 task "web" { driver = "docker" config { image = "nginx:latest" ports = ["http"] } resources { cpu = 100 memory = 128 } } network { port "http" { static = 8080 } } }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Lets break it down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;job "nginx-web"&lt;/strong&gt; : Defines our job name and type. Were calling it a service because its a long-running service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;group "web-group"&lt;/strong&gt; : A group can contain multiple tasks that share resources and networking. Here, we only have one task.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;task "web"&lt;/strong&gt; : Tells Nomad to run an Nginx container. We specify the &lt;strong&gt;docker&lt;/strong&gt; driver.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;network&lt;/strong&gt; : Maps the container's port to a static host port (8080 in this case) so you can access it on &lt;code&gt;localhost:8080&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Run the Job
&lt;/h3&gt;

&lt;p&gt;Launch it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nomad job run nginx.nomad
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nomad will parse the file, create the job, and schedule it on the local dev client. If all goes well, youll see output indicating the job has been placed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verify It's Running
&lt;/h3&gt;

&lt;p&gt;Head to your browser at &lt;a href="http://localhost:4646/" rel="noopener noreferrer"&gt;http://localhost:4646&lt;/a&gt; and click on Jobs. You should see &lt;code&gt;nginx-web&lt;/code&gt; running. Now try &lt;a href="http://localhost:8080/" rel="noopener noreferrer"&gt;http://localhost:8080&lt;/a&gt; in your browser. Nginx's default welcome page means it's working!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4v0cu0x79cf4ac03d6su.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4v0cu0x79cf4ac03d6su.png" width="800" height="577"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Scaling the Service
&lt;/h2&gt;

&lt;p&gt;Nomad makes scaling super easy. Just update the &lt;code&gt;count&lt;/code&gt; parameter in your job file. For instance, change it to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;count = 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nomad job run nginx.nomad
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Nomad will place an additional instance of the container, though in dev mode you're still on a single node, so you'll have multiple containers on the same host. In a multi-node cluster, Nomad automatically figures out which clients have room.&lt;/p&gt;




&lt;h2&gt;
  
  
  Stopping or Updating the Job
&lt;/h2&gt;

&lt;p&gt;If you want to stop the job, you can run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;nomad job stop nginx-web
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For updates, just modify the HCL file (like changing the Docker image to a different version), then re-run &lt;code&gt;nomad job run nginx.nomad&lt;/code&gt;. Nomad will handle rolling updates gracefully, spinning up new tasks before shutting down old ones (as long as you specify appropriate update stanzas).&lt;/p&gt;




&lt;h2&gt;
  
  
  Integrating with Other HashiCorp Tools
&lt;/h2&gt;

&lt;p&gt;Because Nomad shares the same style of configuration as Terraform and the same developer DNA as Vault and Consul, it's easy to create an entire stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Consul&lt;/strong&gt; for service discovery and dynamic DNS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Vault&lt;/strong&gt; for secrets management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt; for provisioning the underlying infrastructure.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Nomad can automatically register services with Consul, making them discoverable to other services in your environment. Storing secrets in Vault means you can dynamically inject credentials into your Nomad jobs. It all plays nicely together.&lt;/p&gt;
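&lt;p&gt;As a sketch of what that integration looks like in a job file (the service name, Vault policy, and secret path below are hypothetical), a task can register itself with Consul and render a Vault secret into its environment:&lt;/p&gt;

```hcl
task "web" {
  driver = "docker"

  config {
    image = "nginx:latest"
  }

  # Registers the task in Consul's service catalog, with a health check
  service {
    name = "web"
    port = "http"

    check {
      type     = "http"
      path     = "/"
      interval = "10s"
      timeout  = "2s"
    }
  }

  # Asks Nomad to fetch a Vault token with this policy for the task
  vault {
    policies = ["web-read"]
  }

  # Renders the secret into an env var via Nomad's template stanza
  template {
    data        = "DB_PASSWORD={{ with secret \"secret/data/web\" }}{{ .Data.data.password }}{{ end }}"
    destination = "secrets/db.env"
    env         = true
  }
}
```

&lt;p&gt;Other services can then find &lt;code&gt;web&lt;/code&gt; through Consul DNS, and the database password never has to live in the job file.&lt;/p&gt;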




&lt;h2&gt;
  
  
  Why I Use Nomad Over Alternatives
&lt;/h2&gt;

&lt;p&gt;I've used Kubernetes for years, but Nomad is my go-to for a few reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Speed of Setup&lt;/strong&gt; : Nomad dev mode is unbelievably quick. One binary, one command, done.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fewer Dependencies&lt;/strong&gt; : I don't need etcd or a separate container runtime beyond Docker. Less to break, less to learn.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexibility&lt;/strong&gt; : I can run Docker tasks, raw exec tasks, or even handle batch jobs and system workloads in a single cluster.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Don't get me wrong: Kubernetes excels in large, complex ecosystems. But if you want a more lightweight orchestrator or have a hybrid mix of containerized and legacy apps, Nomad's a breath of fresh air.&lt;/p&gt;




&lt;h2&gt;
  
  
  Pro Tips (Anticipating What You Might Need Next)
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;High Availability&lt;/strong&gt; : If you plan to run in production, spin up at least three Nomad server nodes. That ensures if one server goes down, the cluster can still schedule workloads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Autopilot&lt;/strong&gt; : Nomad's built-in autopilot features let you automatically manage upgrades, Raft snapshots, and more to keep the cluster healthy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Authentication and ACLs&lt;/strong&gt; : In a multi-user setup, you can integrate Nomad's ACL system to restrict who can submit jobs or read cluster data.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Plugins&lt;/strong&gt; : There are driver plugins for everything from Docker to QEMU to AWS ECS tasks. You can run basically anything that can be launched from a command line or third-party tool.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Monitoring&lt;/strong&gt; : Nomad exposes metrics that are easy to integrate with Prometheus, Grafana, or whatever your favorite monitoring stack is.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;Nomad may look unassuming as a single binary, but don't let that fool you. It's a robust orchestrator that simplifies complex workload management. Whether you're prototyping a new service, gradually migrating from manual server management, or just want to avoid the overhead of a full-fledged Kubernetes stack, Nomad can handle it.&lt;/p&gt;

&lt;p&gt;Why not give it a shot in your own environment? If you've got that messy monolith or a small container workload, Nomad might be exactly the tool you need to keep everything running smoothly without drowning in complexity.&lt;/p&gt;




&lt;h3&gt;
  
  
  Sources
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://developer.hashicorp.com/nomad/docs" rel="noopener noreferrer"&gt;Nomad Official Documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/hashicorp/nomad" rel="noopener noreferrer"&gt;Nomad GitHub Repository&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hope this helps you get started with Nomad. Let me know if you run into any snags or come up with a clever integration; I'm always interested in new ways to push this awesome orchestrator.&lt;/p&gt;


</description>
      <category>nomad</category>
      <category>hashicorp</category>
      <category>docker</category>
      <category>applications</category>
    </item>
    <item>
      <title>How I Leverage Raspberry Pi as a DevOps Engineer</title>
      <dc:creator>Yusuf Adeyemo</dc:creator>
      <pubDate>Mon, 16 Dec 2024 11:00:37 +0000</pubDate>
      <link>https://dev.to/yusadolat/how-i-leverage-raspberry-pi-as-a-devops-engineer-1llj</link>
      <guid>https://dev.to/yusadolat/how-i-leverage-raspberry-pi-as-a-devops-engineer-1llj</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8scwrkfuzrjabyjiot7x.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8scwrkfuzrjabyjiot7x.jpg" alt="Yusadolat Rasberry PI" width="800" height="457"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a DevOps engineer, I'm always looking for a cost-effective, reliable, and flexible way to prototype new ideas without overcommitting infrastructure resources. Sure, spinning up EC2 instances or provisioning dedicated hardware works, but when you want a low-power, low-cost sandbox, the &lt;strong&gt;Raspberry Pi&lt;/strong&gt; is hard to beat. It's an affordable, credit-card-sized computing powerhouse that helps me test concepts, automate environments, and even experiment with local AI without racking up unnecessary cloud fees or dealing with heavy metal servers.&lt;/p&gt;

&lt;p&gt;In this post, I'll share some real-world ways I integrate Raspberry Pi devices into my workflow. If you've never considered them as part of a professional DevOps toolkit, I hope this gives you a few reasons to start.&lt;/p&gt;




&lt;h3&gt;
  
  
  What is a Raspberry Pi?
&lt;/h3&gt;

&lt;p&gt;If you've never touched one, think of the Raspberry Pi as a tiny Linux-based computer board with just enough CPU, RAM, and storage to run a surprising range of workloads. It's been wildly popular among hobbyists, educators, and professionals alike. Thanks to its Linux roots, you can tap into a massive ecosystem of software, scripting, containers, and automation tools that feel instantly familiar to anyone from a DevOps background.&lt;/p&gt;




&lt;h3&gt;
  
  
  Why Raspberry Pi Fits My Needs
&lt;/h3&gt;

&lt;p&gt;As a DevOps engineer, I've got plenty of choices for running workloads. But the Pi hits a sweet spot when I need something quick, cheap, and on-prem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost-Effective&lt;/strong&gt; : For the price of a mid-tier cloud instance running a few weeks, I can own a Pi outright and reuse it a million times over.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Energy-Efficient&lt;/strong&gt; : A Pi draws minimal power, so I can keep it running 24/7 without worrying about my electric bill.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Exceptionally Versatile&lt;/strong&gt; : It's a lab in a box. CI/CD runners, IoT hubs, mini Kubernetes clusters, AI inference boxes, local proxies, you name it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  How I Use Raspberry Pi in My Workflow
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Prototyping and Experimental Builds
&lt;/h4&gt;

&lt;p&gt;When experimenting with a new microservice, pipeline, or integration, I often spin it up on a Pi first. This gives me a stable, always-on environment to validate code, run Docker containers, test APIs, and refine configurations. It's a great way to ensure my code and infrastructure definitions hold up before I commit cloud spend.&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Home Automation and IoT Management
&lt;/h4&gt;

&lt;p&gt;I like to say that my home is my first production environment. Using a Pi as a hub, paired with something like &lt;strong&gt;Home Assistant&lt;/strong&gt;, I manage a network of sensors, lights, and other IoT devices. Not only is it fun, but it also lets me practice edge automation. This experience often translates back into my professional work, where edge computing scenarios are becoming more common.&lt;/p&gt;

&lt;h4&gt;
  
  
  3. Self-Hosted GitHub Actions Runners
&lt;/h4&gt;

&lt;p&gt;If you've worked with GitHub Actions, you know that hosted runners can quickly rack up costs or queue times. By using a Pi as a self-hosted runner, I keep certain build and test pipelines local and cost-controlled. Best of all, I have full control over the environment and dependencies, making it easy to debug issues right in my home office.&lt;/p&gt;

&lt;h4&gt;
  
  
  4. Local AI Experiments
&lt;/h4&gt;

&lt;p&gt;While you won't train GPT-4 on a Raspberry Pi, it's still possible to run smaller models like Google's Gemma 2 (2 billion parameters) for inference tasks. This is a great way to experiment with local AI workloads or test model-serving pipelines without relying on GPU-backed cloud instances. It's not going to replace a beefy workstation, but it's enough to poke around with models and APIs before deciding to scale up.&lt;/p&gt;

&lt;h4&gt;
  
  
  5. Network-Attached Storage (NAS) and Local File Serving
&lt;/h4&gt;

&lt;p&gt;If I need a quick-and-dirty NAS solution, I can set up a Pi with Samba or OpenMediaVault, attach some external storage, and voilà: a lightweight NAS on my local network. It's not enterprise-level, but it's perfect for stashing logs, artifacts, or just sharing files among devices at home.&lt;/p&gt;
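&lt;p&gt;For reference, the Samba side of that setup is only a few lines in &lt;code&gt;/etc/samba/smb.conf&lt;/code&gt; (the share name, mount path, and user are illustrative):&lt;/p&gt;

```ini
[artifacts]
   path = /mnt/usb-drive
   browseable = yes
   read only = no
   guest ok = no
   valid users = pi
```

&lt;p&gt;Restart the &lt;code&gt;smbd&lt;/code&gt; service and the share shows up for any machine on the LAN.&lt;/p&gt;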




&lt;h3&gt;
  
  
  Why It Works for Me
&lt;/h3&gt;

&lt;p&gt;Raspberry Pi devices are more than just cheap boards; they represent a frictionless approach to experimentation. Instead of spending hours setting up cloud VMs or maintaining bulky servers, I have a small fleet of Pis that act as a test bed for ideas. They let me:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Quickly spin up and tear down environments on a budget.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Learn and iterate with minimal risk.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scale horizontally by adding more boards when I need them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Develop intuition for edge, IoT, and ARM-based workloads.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;The Raspberry Pi offers a unique blend of accessibility, affordability, and versatility. Whether I'm refining a new CI pipeline, tinkering with home automation, or trialing lightweight AI inference, the Pi is my go-to platform for hands-on exploration. It's a genuine force multiplier that's expanded the way I think about infrastructure and small-scale deployments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What about you? How have you put Raspberry Pi to work? If you've got a unique use case or a clever hack, let me know. I'm always looking for fresh ways to push these tiny boards to their limits.&lt;/strong&gt;&lt;/p&gt;




</description>
      <category>raspberrypi</category>
      <category>devops</category>
      <category>ai</category>
      <category>github</category>
    </item>
    <item>
      <title>TDD vs BDD: Navigating the Testing Landscape in Modern Software Development</title>
      <dc:creator>Yusuf Adeyemo</dc:creator>
      <pubDate>Tue, 27 Aug 2024 08:00:12 +0000</pubDate>
      <link>https://dev.to/yusadolat/tdd-vs-bdd-navigating-the-testing-landscape-in-modern-software-development-35fe</link>
      <guid>https://dev.to/yusadolat/tdd-vs-bdd-navigating-the-testing-landscape-in-modern-software-development-35fe</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8s9y7cu4ut4x5ghxsno.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz8s9y7cu4ut4x5ghxsno.png" alt="Image description" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the ever-evolving world of software development, testing methodologies play a crucial role in ensuring the quality and reliability of applications. Two prominent approaches that have gained significant traction in recent years are Test-Driven Development (TDD) and Behavior-Driven Development (BDD). While both methodologies share some common ground, they each bring unique perspectives to the testing process. This article delves into the intricacies of TDD and BDD, exploring their benefits, key differences, and how they can be effectively implemented in software projects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Understanding Test-Driven Development (TDD)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Exploring Behavior-Driven Development (BDD)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Comparing TDD and BDD&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implementing TDD and BDD in Your Projects&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Conclusion&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Understanding Test-Driven Development (TDD)
&lt;/h2&gt;

&lt;p&gt;Test-Driven Development is a software development process that relies on the repetition of short development cycles. This methodology encourages simple designs and instills confidence in the code by ensuring that every piece of functionality is thoroughly tested.&lt;/p&gt;

&lt;h3&gt;
  
  
  The TDD Process
&lt;/h3&gt;

&lt;p&gt;The TDD process follows a specific cycle:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Write a test for a new feature before implementing the code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run the new test to verify that it fails (as expected).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Write the minimum amount of code necessary to make the test pass.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run all tests to ensure the new code passes without breaking existing functionality.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Refactor the code to improve its structure and remove any duplication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Repeat the cycle for each new feature or functionality.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
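&lt;p&gt;The cycle reads more concretely in code. Here's a minimal sketch in TypeScript (the &lt;code&gt;slugify&lt;/code&gt; feature and its tests are invented for illustration):&lt;/p&gt;

```typescript
// Step 1 -- RED: write the test first. At this point slugify doesn't exist,
// so the suite fails, which is exactly what step 2 expects.
function assertEqual(actual: string, expected: string): void {
  if (actual !== expected) throw new Error(`expected "${expected}", got "${actual}"`);
}

function testSlugify(): void {
  assertEqual(slugify("Hello World"), "hello-world");
  assertEqual(slugify("  TDD rocks  "), "tdd-rocks");
}

// Step 3 -- GREEN: the minimum implementation that makes both assertions pass.
function slugify(text: string): string {
  return text.trim().toLowerCase().split(/\s+/).join("-");
}

// Steps 4-6: run everything; a clean run is the signal that refactoring is safe.
testSlugify();
console.log("all tests passed");
```

&lt;p&gt;From here the loop repeats: add a failing test for the next behavior (say, stripping punctuation), make it pass, then refactor.&lt;/p&gt;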

&lt;h3&gt;
  
  
  Benefits of TDD
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Encourages simple, modular designs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provides immediate feedback on code correctness&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Builds a comprehensive suite of unit tests&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improves code quality and reduces bugs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Facilitates easier refactoring and maintenance&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Exploring Behavior-Driven Development (BDD)
&lt;/h2&gt;

&lt;p&gt;Behavior-Driven Development is an agile software development process that extends the principles of TDD. BDD emphasizes collaboration among developers, quality assurance professionals, and business partners to create a shared understanding of how an application should behave.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features of BDD
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Utilizes domain-specific languages (DSLs)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Defines user behavior in simple English&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Converts English descriptions into automated test scripts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Focuses on the behavior of the application from an end-user perspective&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  BDD Scenario Examples
&lt;/h3&gt;

&lt;p&gt;BDD often uses scenario-based descriptions to define expected behavior. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Scenario: User adds an item to their shopping cart Given the user is on the product details page When the user selects a size "Medium" And the user clicks the "Add to Cart" button Then the item should be added to the user's shopping cart And the cart total should increase by 1 And the user should see a confirmation message "Item added to cart"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Comparing TDD and BDD
&lt;/h2&gt;

&lt;p&gt;While TDD and BDD share some common ground, they have distinct focuses and approaches:&lt;/p&gt;

&lt;h3&gt;
  
  
  Shared Benefits
&lt;/h3&gt;

&lt;p&gt;Both TDD and BDD offer several advantages to development teams:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Early detection of errors in requirements&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improved communication between team members&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reduced overall development costs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Higher code quality and fewer bugs&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Focus and Approach
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;TDD focuses on the functionality of individual components&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;BDD emphasizes the behavior of the application from a user's perspective&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;TDD tests are typically written in the same programming language as the application&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;BDD tests are often written in a more accessible, natural language format&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implementing TDD and BDD in Your Projects
&lt;/h2&gt;

&lt;p&gt;To successfully implement TDD or BDD in your software projects:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Choose the appropriate methodology based on your project's needs and team structure&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Invest in training and tools to support the chosen approach&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Start small and gradually expand the use of TDD or BDD across your projects&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Regularly review and refine your testing processes&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Foster a culture of collaboration and continuous improvement&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Both Test-Driven Development and Behavior-Driven Development offer valuable approaches to software testing and development. By understanding the strengths and differences of each methodology, development teams can make informed decisions about which approach best suits their projects. Whether you choose TDD, BDD, or a combination of both, implementing these methodologies can lead to higher quality software, improved team collaboration, and more satisfied end-users.&lt;/p&gt;





</description>
      <category>tutorial</category>
      <category>testing</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>Enhancing Microservice Communication in AWS ECS with Service Discovery Techniques</title>
      <dc:creator>Yusuf Adeyemo</dc:creator>
      <pubDate>Sun, 11 Feb 2024 08:48:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/enhancing-microservice-communication-in-aws-ecs-with-service-discovery-techniques-2763</link>
      <guid>https://dev.to/aws-builders/enhancing-microservice-communication-in-aws-ecs-with-service-discovery-techniques-2763</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls8gsrkph61ond6s3lq0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls8gsrkph61ond6s3lq0.png" alt="Service discovery with AWS ECS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Service discovery is a vital component of modern distributed systems, enabling seamless communication and dynamic scaling in environments where services frequently change IPs, ports, or even hosts. Amazon Elastic Container Service (ECS) integrates with service discovery mechanisms, simplifying the deployment and operation of microservices architectures. In this guide, we delve into the intricacies of service discovery within ECS, ensuring your applications remain resilient and scalable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding Service Discovery&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At its core, service discovery facilitates the dynamic detection and interaction among services in a distributed ecosystem. The challenge lies in the fluid nature of these services, which may traverse across different environments, necessitating a flexible approach to maintain connectivity. Service discovery transcends the limitations of static configurations, allowing services to communicate based on logical identifiers rather than hard-coded network addresses.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Role of Service Discovery in ECS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon ECS simplifies service discovery by leveraging AWS Cloud Map, a fully managed service registry that automates the discovery of ECS services. Cloud Map enables your applications to discover resources by name, eliminating the need for manual IP management or service configuration. This abstraction not only enhances flexibility but also significantly reduces the overhead associated with deploying and managing microservices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementing Service Discovery in ECS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To utilize service discovery in ECS, you begin by registering your services with AWS Cloud Map. This process involves creating a namespace, which serves as a container for all service instances. Within this namespace, you register service names that correspond to your ECS services. Each service can then be discovered through its logical name, streamlining the interaction between different components of your application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practical Example: Integrating Service Discovery with ECS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Consider a scenario where you have a microservices architecture with a front-end service needing to communicate with a back-end service. Instead of hardcoding the back-end service's IP address, you register both services with Cloud Map under a common namespace, say &lt;code&gt;myapp.local&lt;/code&gt;. The back-end service registers itself with the name &lt;code&gt;backend.myapp.local&lt;/code&gt;. The front-end service, needing to send a request to the back-end, queries Cloud Map for &lt;code&gt;backend.myapp.local&lt;/code&gt; and receives the current IP address and port of the back-end service. This mechanism ensures that even if the back-end service is redeployed or its IP changes, the front-end can always discover and communicate with it without any manual intervention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices for Service Discovery with ECS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automate Service Registration:&lt;/strong&gt; Ensure your ECS services are automatically registered with Cloud Map upon deployment. This can be achieved through ECS task definitions or service configurations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Health Checks:&lt;/strong&gt; Utilize Cloud Map's health checking capabilities to automatically remove unhealthy service instances from the registry. This ensures your applications always connect to operational instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Security:&lt;/strong&gt; Implement appropriate IAM policies to control access to the service discovery system, ensuring only authorized services can register or discover other services.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
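
&lt;p&gt;For reference, the registration flow can be driven from the AWS CLI roughly as follows. This is a sketch, not a complete recipe: the VPC ID, namespace ID, ARN, and resource names are all placeholders, and a real &lt;code&gt;create-service&lt;/code&gt; call for Fargate tasks would also need a network configuration:&lt;/p&gt;

```shell
# Placeholder IDs and names throughout; substitute your own values.
# 1. Create a private DNS namespace in Cloud Map.
aws servicediscovery create-private-dns-namespace \
    --name myapp.local \
    --vpc vpc-0123456789abcdef0

# 2. Create a Cloud Map service inside the namespace (returns a service ARN).
aws servicediscovery create-service \
    --name backend \
    --dns-config "NamespaceId=ns-example,DnsRecords=[{Type=A,TTL=60}]"

# 3. Point the ECS service at the Cloud Map service so tasks self-register.
aws ecs create-service \
    --cluster my-cluster \
    --service-name backend \
    --task-definition backend-task \
    --desired-count 2 \
    --service-registries "registryArn=arn:aws:servicediscovery:us-east-1:123456789012:service/srv-example"
```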

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Service discovery is a cornerstone of modern distributed systems, ensuring applications remain resilient and adaptable to changing environments. By integrating AWS ECS with Cloud Map, developers can significantly simplify the discovery process, allowing services to dynamically interact regardless of their underlying infrastructure. This guide provides a foundation for leveraging service discovery within your ECS deployments, paving the way for more efficient and scalable applications.&lt;/p&gt;

&lt;p&gt;Incorporating service discovery into your ECS strategy not only optimizes communication between services but also enhances overall application resilience. By following the practices outlined above, you can create a robust ecosystem where services seamlessly discover and interact with each other, driving efficiency and scalability across your deployments.&lt;/p&gt;

&lt;p&gt;Thank you for reading this article! If you enjoyed it and would like to stay up to date on the latest technical articles and insights, I invite you to subscribe to my newsletter. By subscribing, you'll be the first to know when we publish new articles and you'll have access to exclusive content and resources.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsecs</category>
    </item>
    <item>
      <title>NodeJS Graceful Shutdown: A Beginner's Guide</title>
      <dc:creator>Yusuf Adeyemo</dc:creator>
      <pubDate>Tue, 23 May 2023 13:52:58 +0000</pubDate>
      <link>https://dev.to/yusadolat/nodejs-graceful-shutdown-a-beginners-guide-40b6</link>
      <guid>https://dev.to/yusadolat/nodejs-graceful-shutdown-a-beginners-guide-40b6</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpit2ixowmo99g65qug9t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpit2ixowmo99g65qug9t.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Imagine this scenario: Your Node.js application is happily running, processing requests, interacting with databases, and then suddenly, it gets terminated. The system administrator decided it was time to scale down, or perhaps a critical error forced the application to exit. In any case, the application was in the middle of processing requests and writing data to a file, and now all of that is abruptly stopped. What happens to that data? What happens to your users' requests? The consequences of an abrupt shutdown can range from minor inconveniences to significant data loss and a degraded user experience. To avoid these situations, it is important to shut down your applications gracefully. In this article, we'll discuss why graceful shutdowns are important, how to handle them in Node.js applications (particularly in the context of Docker), and the potential issues that can arise if they are not handled correctly.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prerequisites&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before proceeding, you should have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Basic knowledge of JavaScript and Node.js&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Understanding of Express.js framework&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Familiarity with Docker and its basic commands&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;What is a Graceful Shutdown?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A graceful shutdown involves carefully handling the shutdown signal, completing the in-progress tasks, closing the active connections, and then finally allowing the application to terminate. This ensures that the system resources are properly freed and that the application does not exit while it's in the middle of important tasks.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why is a Graceful Shutdown Important?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Handling shutdown signals in your applications allows you to manage resources properly, provide a better user experience, and help your system degrade more gracefully. Not handling these signals can lead to issues like data loss or corruption, incomplete transactions, resource leaks, and unexpected behavior.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Implementing Graceful Shutdown in Node.js&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In this section, we'll walk through the code required to listen for shutdown signals in a Node.js application and how to perform cleanup tasks before allowing the application to exit.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Listening for Shutdown Signals&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In Node.js, we can listen to process-level signals, such as &lt;code&gt;SIGINT&lt;/code&gt; and &lt;code&gt;SIGTERM&lt;/code&gt;. These signals are emitted when the process is requested to shut down, whether by manual user interruption (&lt;code&gt;SIGINT&lt;/code&gt; from Ctrl+C) or system-level termination (&lt;code&gt;SIGTERM&lt;/code&gt; from Docker or another process manager). To listen for these signals, we can use the &lt;code&gt;process.on&lt;/code&gt; method and provide a callback function that will be executed when these signals are received.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

process.on('SIGTERM', () =&amp;gt; {
  console.log('SIGTERM signal received.');
});

process.on('SIGINT', () =&amp;gt; {
  console.log('SIGINT signal received.');
});



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Performing Cleanup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Once a shutdown signal is received, it's important to perform necessary cleanup tasks to close any open resources, finish transactions, and prepare the application for a graceful exit. This may involve closing database connections, completing any in-progress writes to file systems, or other application-specific cleanup. This cleanup code should be placed inside the callback function provided to &lt;code&gt;process.on&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

process.on('SIGTERM', () =&amp;gt; {
  console.log('SIGTERM signal received.');
  // Perform cleanup tasks here
});

process.on('SIGINT', () =&amp;gt; {
  console.log('SIGINT signal received.');
  // Perform cleanup tasks here
});



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Remember to handle asynchronous cleanup tasks correctly. If a cleanup task is asynchronous (like closing a database connection), you'll need to handle it with async/await or Promises to ensure it completes before the process exits.&lt;/p&gt;
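
&lt;p&gt;One way to structure this is sketched below, with an injectable exit function so the handler stays testable; &lt;code&gt;closeDatabase&lt;/code&gt; is a hypothetical stand-in for whatever asynchronous resources your application holds:&lt;/p&gt;

```javascript
// Graceful shutdown with asynchronous cleanup. The exit function is
// injectable so the handler can be exercised in tests; closeDatabase is a
// hypothetical stand-in for your real resources.
async function gracefulShutdown(cleanupTasks, exit = (code) => process.exit(code)) {
  try {
    // Wait for every async cleanup task to finish before exiting.
    await Promise.all(cleanupTasks.map((task) => task()));
    exit(0);
  } catch (err) {
    console.error('Cleanup failed:', err);
    exit(1);
  }
}

const closeDatabase = async () => console.log('database connection closed');

// The same handler serves both signals.
process.on('SIGTERM', () => gracefulShutdown([closeDatabase]));
process.on('SIGINT', () => gracefulShutdown([closeDatabase]));
```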

&lt;h2&gt;
  
  
  &lt;strong&gt;Exiting the Process&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After performing the necessary cleanup tasks, we must manually terminate the Node.js process by calling &lt;code&gt;process.exit()&lt;/code&gt;. This signals to the system (or Docker) that our application has finished shutting down. We can provide an exit code to this method; a code of 0 indicates a successful exit, while any other number indicates an error occurred. Typically, if we've handled everything correctly in our cleanup, we'll want to exit with a code of 0.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

process.on('SIGTERM', () =&amp;gt; {
  console.log('SIGTERM signal received.');
  // Perform cleanup tasks here

  process.exit(0);
});

process.on('SIGINT', () =&amp;gt; {
  console.log('SIGINT signal received.');
  // Perform cleanup tasks here

  process.exit(0);
});



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Remember, the goal of all this is to allow your application to exit gracefully when it receives a shutdown signal. This helps reduce the risk of data corruption, loss of data, and other issues associated with an abrupt termination.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Handling Shutdown in Express.js Application&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Express.js applications, in particular, have some unique considerations when it comes to graceful shutdowns. This section discusses how to handle shutdown signals in an Express.js application, including how to stop the server from accepting new connections and how to ensure all existing connections are closed before shutdown.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Creating and Starting the Server&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;First, we need to create an Express.js application and start a server. Once the server is created, we can use it to close existing connections when we're ready to shut down the application.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

import express from 'express';

const app = express();
const server = app.listen(3000, () =&amp;gt; {
  console.log('Server listening on port 3000');
});



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Listening for Shutdown Signals&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Just like in a basic Node.js application, we need to listen for &lt;code&gt;SIGINT&lt;/code&gt; and &lt;code&gt;SIGTERM&lt;/code&gt; signals. We can use the &lt;code&gt;process.on&lt;/code&gt; method to add listeners for these signals. Inside the callback function for each listener, we'll call &lt;code&gt;server.close()&lt;/code&gt; to stop the server from accepting new connections and to begin the process of shutting down.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

process.on('SIGTERM', () =&amp;gt; {
  console.log('SIGTERM signal received.');
  server.close(() =&amp;gt; {
    console.log('Closed out remaining connections');
    // Additional cleanup tasks go here
  });
});

process.on('SIGINT', () =&amp;gt; {
  console.log('SIGINT signal received.');
  server.close(() =&amp;gt; {
    console.log('Closed out remaining connections');
    // Additional cleanup tasks go here
  });
});



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Closing Existing Connections&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When &lt;code&gt;server.close()&lt;/code&gt; is called, the server stops accepting new connections and waits for all existing connections to close. The function that we pass to &lt;code&gt;server.close()&lt;/code&gt; will be called once all connections are closed. This is where we can perform any additional cleanup tasks that need to happen before the application shuts down.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Performing Additional Cleanup&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Depending on the needs of your application, you may have additional cleanup tasks that need to happen when your application shuts down. For example, if you have a database connection, you should close it before your application exits. This cleanup code should be placed inside the callback function that you pass to &lt;code&gt;server.close()&lt;/code&gt;.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

process.on('SIGTERM', () =&amp;gt; {
  console.log('SIGTERM signal received.');
  server.close(() =&amp;gt; {
    console.log('Closed out remaining connections');
    // Additional cleanup tasks go here, e.g., close database connection
    process.exit(0);
  });
});

process.on('SIGINT', () =&amp;gt; {
  console.log('SIGINT signal received.');
  server.close(() =&amp;gt; {
    console.log('Closed out remaining connections');
    // Additional cleanup tasks go here, e.g., close database connection
    process.exit(0);
  });
});



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Handling Shutdown in a Dockerized Node.js Application&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When running a Node.js application in a Docker container, there are additional considerations to take into account. This section discusses how Docker sends shutdown signals and how to ensure your Node.js application can handle them correctly.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Understanding Docker Shutdown Signals&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When Docker is asked to stop a running container, it sends a &lt;code&gt;SIGTERM&lt;/code&gt; signal to the main process running in the container. This is Docker's way of asking the process to shut down gracefully, by finishing what it's currently doing, cleaning up as needed, and then terminating.&lt;/p&gt;

&lt;p&gt;However, if the process does not terminate within a certain period (10 seconds by default), Docker will then send a &lt;code&gt;SIGKILL&lt;/code&gt; signal to forcibly terminate the process. This is akin to pulling the plug on the application - it won't have a chance to finish what it's doing or clean up.&lt;/p&gt;

&lt;p&gt;This is why our Node.js application needs to listen for and handle the &lt;code&gt;SIGTERM&lt;/code&gt; signal, as we discussed in previous sections. By handling &lt;code&gt;SIGTERM&lt;/code&gt;, our application can ensure it shuts down gracefully when Docker asks it to stop.&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Adjusting Docker's Grace Period&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Sometimes, our application may need more than 10 seconds to shut down gracefully. For example, it might need to finish processing a long-running request, or it might need to wait for a database transaction to commit.&lt;/p&gt;

&lt;p&gt;In such cases, we can tell Docker to wait longer before it sends the &lt;code&gt;SIGKILL&lt;/code&gt; signal by using the &lt;code&gt;--stop-timeout&lt;/code&gt; option when we run our Docker container. This option takes the number of seconds to wait as its argument.&lt;/p&gt;

&lt;p&gt;For example, to start a Docker container and give it 30 seconds to shut down gracefully before forcibly killing it, we would use a command like this:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

docker run --stop-timeout 30 my-nodejs-app



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Keep in mind that while extending the stop timeout can help in some situations, it's not a panacea. If your application consistently takes a long time to shut down, it may be a sign that it needs to be optimized or refactored.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Handling shutdown signals in your Node.js applications allows you to manage resources properly, reduce potential data loss or corruption, provide a better user experience, and more. By understanding how to handle these signals, you can make your applications more robust and reliable, both in development and in production.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Do Not Tolerate Flaky Tests. Fix Them (or Delete Them).</title>
      <dc:creator>Yusuf Adeyemo</dc:creator>
      <pubDate>Mon, 17 Apr 2023 09:04:54 +0000</pubDate>
      <link>https://dev.to/yusadolat/do-not-tolerate-flaky-tests-fix-them-or-delete-them-1d03</link>
      <guid>https://dev.to/yusadolat/do-not-tolerate-flaky-tests-fix-them-or-delete-them-1d03</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--leHy-t0g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/43ssqxc12wniu7dy0phw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--leHy-t0g--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/43ssqxc12wniu7dy0phw.png" alt="Image description" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a DevOps engineer, you know the importance of testing in ensuring the quality and reliability of your software. However, not all tests are created equal, and some tests are more reliable than others. Flaky tests are tests that fail intermittently, even though the code being tested is not broken. These tests can be frustrating to deal with and can waste valuable time and resources. In this article, we will discuss why you should not tolerate flaky tests and what you can do to fix or delete them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why You Should Not Tolerate Flaky Tests
&lt;/h2&gt;

&lt;p&gt;Flaky tests can cause several problems that affect the quality and reliability of your software. Here are some reasons why you should not tolerate flaky tests:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Flaky tests can make it difficult to identify real bugs: Flaky tests can mask real bugs in your code, making it difficult to identify and fix them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flaky tests can waste valuable time and resources: Flaky tests can consume valuable time and resources that could be better spent on other tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flaky tests can erode confidence in your tests: Flaky tests can erode confidence in your tests and make it difficult to trust the results.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flaky tests can lead to false positives: Flaky tests can lead to false positives, which can cause unnecessary rework and delays.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What You Can Do to Fix or Delete Flaky Tests
&lt;/h2&gt;

&lt;p&gt;Fixing or deleting flaky tests can help you avoid the problems caused by flaky tests. Here are some things you can do to fix or delete flaky tests:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Identify the root cause of the flakiness: To fix flaky tests, you need to identify the root cause of the flakiness. This can be done by analyzing the test results and identifying patterns.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Fix the root cause of the flakiness: Once you have identified the root cause of the flakiness, you can fix it. This may involve modifying the test code, the test environment, or the application code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Delete the flaky tests: If you are unable to fix the root cause of the flakiness, you may need to delete the flaky tests. This can help you avoid the problems caused by flaky tests.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prioritize fixing flaky tests: Fixing flaky tests should be a priority. You should allocate the necessary time and resources to fix or delete flaky tests.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
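
&lt;p&gt;When analyzing test results for patterns (step 1 above), it can help to rerun a suspect test many times in isolation. The small harness below is invented for illustration and is not part of any particular test framework: a test that fails on some runs but not all is flaky, while one that fails on every run is simply broken:&lt;/p&gt;

```javascript
// Rerun a test repeatedly to surface intermittent failures. This harness is
// invented for illustration, not part of any particular test framework.
async function detectFlakiness(testFn, runs = 20) {
  let failures = 0;
  for (let i = 0; i !== runs; i += 1) {
    try {
      await testFn();
    } catch (err) {
      failures += 1;
    }
  }
  // Some failures but not all: the test is flaky rather than simply broken.
  return { runs, failures, flaky: failures > 0 ? failures !== runs : false };
}

// A deliberately flaky test: fails on every third call.
let calls = 0;
const sometimesFails = async () => {
  calls += 1;
  if (calls % 3 === 0) throw new Error('intermittent failure');
};

detectFlakiness(sometimesFails, 9).then((report) => {
  console.log(report); // { runs: 9, failures: 3, flaky: true }
});
```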

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Flaky tests can cause several problems that affect the quality and reliability of your software. To avoid these problems, you should not tolerate flaky tests and should fix or delete them as soon as possible. By identifying the root cause of the flakiness and fixing it or deleting the flaky tests, you can improve the reliability and quality of your software. Remember, a reliable and trustworthy test suite is crucial for the success of your DevOps pipeline.&lt;/p&gt;

&lt;p&gt;I hope this article has been helpful in understanding why you should not tolerate flaky tests and what you can do to fix or delete them. If you have any questions or comments, feel free to leave them below.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to resolve AWS S3 CORS error</title>
      <dc:creator>Yusuf Adeyemo</dc:creator>
      <pubDate>Fri, 31 Mar 2023 07:40:33 +0000</pubDate>
      <link>https://dev.to/yusadolat/how-to-resolve-aws-s3-cors-error-1a5b</link>
      <guid>https://dev.to/yusadolat/how-to-resolve-aws-s3-cors-error-1a5b</guid>
      <description>&lt;p&gt;The error message you're seeing is due to the Cross-Origin Resource Sharing (CORS) policy on your AWS S3 bucket. This policy determines who can access your bucket's contents from a different domain. In my case, it seems the policy is not allowing the local server (&lt;a href="http://localhost:3001"&gt;http://localhost:3001&lt;/a&gt;) to access the resources.&lt;/p&gt;

&lt;p&gt;To resolve this issue, you need to update the CORS policy for your S3 bucket. Here's a step-by-step guide:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Sign in to the AWS Management Console and open the Amazon S3 console at &lt;a href="https://console.aws.amazon.com/s3/"&gt;https://console.aws.amazon.com/s3/&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the bucket list, choose the name of the bucket that you want to add a CORS policy to.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the 'Permissions' tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scroll down to the 'Cross-origin resource sharing (CORS)' section and choose 'Edit'.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the CORS configuration editor, add a new CORS rule. For example:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
        "AllowedOrigins": ["http://localhost:3001"],
        "ExposeHeaders": []
    }
]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="6"&gt;
&lt;li&gt;Choose 'Save'.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This policy allows your local server (&lt;a href="http://localhost:3001"&gt;http://localhost:3001&lt;/a&gt;) to perform GET, PUT, POST, and DELETE operations on your S3 bucket. Please adjust the policy according to your needs.&lt;/p&gt;
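
&lt;p&gt;If you prefer the command line, the same rule can be applied with the &lt;code&gt;s3api&lt;/code&gt; commands below; the bucket name and file path are placeholders for your own values:&lt;/p&gt;

```shell
# Save the JSON rule from step 5 as cors.json, then apply it.
# "my-bucket" is a placeholder bucket name.
aws s3api put-bucket-cors \
    --bucket my-bucket \
    --cors-configuration file://cors.json

# Verify the configuration now in effect.
aws s3api get-bucket-cors --bucket my-bucket
```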

&lt;p&gt;Remember, CORS policies can pose a security risk if not configured properly. Only allow access to trusted domains and use the strictest settings that your application allows.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
    </item>
  </channel>
</rss>
