<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Ogbeide Godstime Osemenkhian</title>
    <description>The latest articles on DEV Community by Ogbeide Godstime Osemenkhian (@gtogbes).</description>
    <link>https://dev.to/gtogbes</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F930770%2F01b9c938-ff8c-498d-827f-c434105eefd3.jpeg</url>
      <title>DEV Community: Ogbeide Godstime Osemenkhian</title>
      <link>https://dev.to/gtogbes</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gtogbes"/>
    <language>en</language>
    <item>
      <title>The OpenSearch Outage You Can't Fix: Why 2-Node Clusters Always Fail.</title>
      <dc:creator>Ogbeide Godstime Osemenkhian</dc:creator>
      <pubDate>Wed, 24 Dec 2025 12:37:07 +0000</pubDate>
      <link>https://dev.to/aws-builders/avoid-this-costly-aws-opensearch-mistake-the-complete-guide-to-quorum-loss-77j</link>
      <guid>https://dev.to/aws-builders/avoid-this-costly-aws-opensearch-mistake-the-complete-guide-to-quorum-loss-77j</guid>
      <description>&lt;h2&gt;
  
  
  The Silent Cluster Killer: What Happens When Your Search Engine Just Stops
&lt;/h2&gt;

&lt;p&gt;Imagine this: it's 3 AM, your alerts start firing, and your application's search functionality is completely down. Your Amazon OpenSearch dashboard shows a hauntingly empty metrics screen. You try to restart nodes, but nothing responds. Your cluster isn't just unhealthy, it's brain-dead. This is quorum loss, and it's every OpenSearch administrator's nightmare scenario.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Exactly Is Quorum Loss (And Why Should You Care)?
&lt;/h2&gt;

&lt;p&gt;Quorum loss occurs when your OpenSearch cluster can't maintain enough master-eligible nodes to make decisions. Think of it like a committee that needs a majority vote to function, but too many members have left the room. The cluster becomes completely paralyzed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Search and indexing operations halt immediately&lt;/li&gt;
&lt;li&gt;CloudWatch metrics disappear as if your cluster never existed&lt;/li&gt;
&lt;li&gt;All administrative API calls fail&lt;/li&gt;
&lt;li&gt;The console shows "Processing" indefinitely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But here's what makes this particularly dangerous: &lt;strong&gt;once quorum is lost, you might get lucky and push a cluster update through, but in most cases the cluster gets stuck, and you cannot fix it yourself.&lt;/strong&gt; Standard restarts won't work. Only AWS Support can perform the specialized backend intervention needed to revive your cluster, and this process typically takes &lt;strong&gt;24-72 hours of complete downtime&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Root Cause: Why Your Two-Node Cluster Is a Time Bomb
&lt;/h2&gt;

&lt;p&gt;The most common path to quorum loss begins with a seemingly reasonable decision: running a two-node cluster to save costs. Here's the fatal math:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Quorum requires a majority of master-eligible nodes. With 2 nodes, you need N/2 + 1 = 2 nodes present. If just one node fails, the remaining node cannot reach quorum (1 out of 2 isn't a majority). Your cluster is now in a deadlock, unable to elect a leader, unable to make decisions, and completely stuck.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This isn't just theoretical. AWS explicitly warns against this configuration because it violates a fundamental distributed systems principle: &lt;strong&gt;always use an odd number of master nodes&lt;/strong&gt;.&lt;/p&gt;
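
&lt;p&gt;The majority rule is easy to verify for yourself. Here is a minimal Python sketch (an illustration, not OpenSearch code) that computes the quorum size and the number of survivable master failures for a given node count:&lt;/p&gt;

```python
def quorum(n):
    # Majority of n master-eligible nodes: floor(n/2) + 1
    return n // 2 + 1

def tolerated_failures(n):
    # Masters that can fail while a majority still remains
    return n - quorum(n)

for n in (1, 2, 3, 4, 5):
    print(f"{n} masters: quorum={quorum(n)}, survivable failures={tolerated_failures(n)}")
```

&lt;p&gt;Two masters tolerate zero failures (quorum is 2 of 2), and four masters tolerate only one failure, the same as three. Even counts add cost without adding resilience.&lt;/p&gt;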

&lt;h2&gt;
  
  
  Your Recovery Playbook: What to Do When Disaster Strikes
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Step 1: Recognize the Symptoms Immediately
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CloudWatch metrics suddenly stop (no gradual decline, just complete silence)&lt;/li&gt;
&lt;li&gt;Cluster health API returns no response or times out&lt;/li&gt;
&lt;li&gt;Dashboard shows "Processing" with no change for hours&lt;/li&gt;
&lt;li&gt;Application search/logging features completely fail&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Contact AWS Support (Your Only Option)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Open a &lt;strong&gt;HIGH severity&lt;/strong&gt; support case immediately&lt;/li&gt;
&lt;li&gt;Clearly state: "OpenSearch cluster has lost quorum and requires backend node restart."&lt;/li&gt;
&lt;li&gt;Provide: Domain name, AWS region, and approximate failure time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Do not attempt console restarts&lt;/strong&gt;; they won't work and may complicate recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 3: Prepare for the Recovery Process
&lt;/h3&gt;

&lt;p&gt;AWS Support will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use internal tools to identify stuck nodes&lt;/li&gt;
&lt;li&gt;Safely terminate problematic nodes at the infrastructure level&lt;/li&gt;
&lt;li&gt;Restart the cluster with proper initialization&lt;/li&gt;
&lt;li&gt;Verify health restoration and data integrity&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Critical reality check:&lt;/strong&gt; During this entire process, your cluster will be completely unavailable. This is why prevention isn't just better, it's essential.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Prevention Blueprint: Architecting for Resilience
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Master Node Configuration: The Non-Negotiable Rule
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NEVER USE:      ALWAYS USE:
- 1 master      - 3 masters (minimum for production)
- 2 masters     - 5 masters (for larger clusters)
- 4 masters     - Any ODD number (3, 5, 7, etc.)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Why odd numbers matter:&lt;/strong&gt; With 3 master nodes, the cluster can lose 1 node and still maintain quorum (2 out of 3 is a majority). With 5 masters, it can withstand 2 failures. This is the foundation of high availability.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Dedicated Master Nodes: Separation of Concerns
&lt;/h3&gt;

&lt;p&gt;Dedicated masters handle only cluster management tasks, not your data or queries. This separation prevents resource contention during peak loads and ensures stable elections.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Production minimum:&lt;/strong&gt; 3 dedicated master nodes using instances like &lt;code&gt;m6g.medium.search&lt;/code&gt; or &lt;code&gt;c6g.medium.search&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Multi-AZ Deployment: Surviving Availability Zone Failures
&lt;/h3&gt;

&lt;p&gt;Deploy your master nodes across three different Availability Zones. This ensures that even if an entire AZ goes down, your cluster maintains quorum and continues operating.&lt;/p&gt;

&lt;h2&gt;
  
  
  Production-Grade Configuration Examples
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Option A: Cost-Optimized Production Setup (Recommended Baseline)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Terraform configuration for resilient OpenSearch&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_opensearch_domain"&lt;/span&gt; &lt;span class="s2"&gt;"production"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;cluster_config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;instance_type&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"m6g.medium.search"&lt;/span&gt;  &lt;span class="c1"&gt;# Graviton for price-performance&lt;/span&gt;
    &lt;span class="nx"&gt;instance_count&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;                    &lt;span class="c1"&gt;# 3 data nodes&lt;/span&gt;
    &lt;span class="nx"&gt;dedicated_master_enabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;master_instance_type&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"m6g.medium.search"&lt;/span&gt;  &lt;span class="c1"&gt;# Same as data nodes&lt;/span&gt;
    &lt;span class="nx"&gt;master_instance_count&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;                    &lt;span class="c1"&gt;# 3 dedicated masters&lt;/span&gt;
    &lt;span class="nx"&gt;zone_awareness_enabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;availability_zone_count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;                   &lt;span class="c1"&gt;# Spread across 3 AZs&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Option B: Development/Test Environment (Understanding the Trade-offs)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="c1"&gt;# For NON-PRODUCTION workloads only&lt;/span&gt;
&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_opensearch_domain"&lt;/span&gt; &lt;span class="s2"&gt;"development"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;cluster_config&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;instance_type&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"t3.small.search"&lt;/span&gt;     &lt;span class="c1"&gt;# Burstable instance&lt;/span&gt;
    &lt;span class="nx"&gt;instance_count&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;                    
    &lt;span class="nx"&gt;zone_awareness_enabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;               &lt;span class="c1"&gt;# Single AZ&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Critical clarification on T3 instances:
&lt;/h3&gt;

&lt;p&gt;While T3 instances (&lt;code&gt;t3.small.search&lt;/code&gt;, &lt;code&gt;t3.medium.search&lt;/code&gt;) offer lower costs, they come with significant limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cannot be used with Multi-AZ with Standby (the highest availability tier)&lt;/li&gt;
&lt;li&gt;Not recommended for production workloads by AWS&lt;/li&gt;
&lt;li&gt;Best suited for development, testing, or very low-traffic applications&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cost vs. Risk: The Business Reality
&lt;/h2&gt;

&lt;p&gt;Let's be brutally honest about the financial implications:&lt;/p&gt;

&lt;h3&gt;
  
  
  The "Savings" Trap:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2-node cluster: ~$100/month
Risk: Complete outage requiring AWS Support
Downtime: 24-72 hours
Business impact: Lost revenue, engineering panic, customer trust erosion
True cost: $100 + (72 hours of outage impact)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Resilient Investment:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;3 master + 3 data nodes: ~$300/month
Risk: Automatic failover, continuous availability
Downtime: Minutes during AZ failure (if properly configured)
Business impact: Minimal, transparent to users
True cost: $300 + (peace of mind)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The math becomes obvious when you consider that just one hour of complete search unavailability for a customer-facing application can cost thousands in lost revenue and damage to brand reputation.&lt;/p&gt;
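
&lt;p&gt;The break-even arithmetic fits in two lines. The $1,000/hour revenue-loss figure below is an assumed placeholder, not a measured number; plug in your own:&lt;/p&gt;

```python
def breakeven_outage_hours(extra_monthly_cost, revenue_loss_per_hour):
    # Outage hours per month at which the "cheap" cluster stops being cheaper
    return extra_monthly_cost / revenue_loss_per_hour

# Resilient setup costs roughly $200/month more ($300 vs $100)
hours = breakeven_outage_hours(200, 1000)
print(hours)  # 0.2 hours: just 12 minutes of outage erases the savings
```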

&lt;h2&gt;
  
  
  Your Actionable Checklist
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Immediate Actions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Audit your current OpenSearch clusters; identify any with 1 or 2 master nodes&lt;/li&gt;
&lt;li&gt;Review your CloudWatch alarms; ensure you're monitoring &lt;code&gt;ClusterStatus.red&lt;/code&gt; and &lt;code&gt;MasterReachableFromNode&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Document your recovery contacts and know exactly how to open a high-severity AWS Support case&lt;/li&gt;
&lt;/ul&gt;
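
&lt;p&gt;As one sketch of the alarm audit, the parameters for a &lt;code&gt;ClusterStatus.red&lt;/code&gt; alarm look roughly like this. The domain name and account ID are placeholders, and the &lt;code&gt;put_metric_alarm&lt;/code&gt; call is commented out because it needs real credentials:&lt;/p&gt;

```python
# Hypothetical alarm definition for the AWS/ES namespace; adjust names for your domain
alarm = {
    "AlarmName": "opensearch-cluster-red",
    "Namespace": "AWS/ES",
    "MetricName": "ClusterStatus.red",
    "Dimensions": [
        {"Name": "DomainName", "Value": "my-domain"},   # placeholder domain
        {"Name": "ClientId", "Value": "123456789012"},  # placeholder account ID
    ],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
}
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)  # wire AlarmActions to your SNS topic
print(alarm["AlarmName"])
```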

&lt;h3&gt;
  
  
  Medium-Term Planning
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Test your snapshot restoration process; regularly validate backups&lt;/li&gt;
&lt;li&gt;Implement Infrastructure as Code using Terraform or CloudFormation for all changes&lt;/li&gt;
&lt;li&gt;Schedule maintenance windows for any configuration changes&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Long-Term Strategy
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Migrate to 3+ dedicated master nodes during your next maintenance window&lt;/li&gt;
&lt;li&gt;Enable Multi-AZ deployment for production workloads&lt;/li&gt;
&lt;li&gt;Consider Reserved Instances for predictable costs (30-50% savings)&lt;/li&gt;
&lt;li&gt;Evaluate OpenSearch Serverless for variable workloads&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Quorum loss isn't a hypothetical concern; it's a predictable failure mode of improper OpenSearch architecture. The recovery process is painful, lengthy, and entirely dependent on AWS Support.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The solution is simple but non-negotiable:&lt;/strong&gt; Always deploy with three or more dedicated master nodes (an odd number) across multiple Availability Zones. The additional few hundred dollars per month isn't an expense; it's insurance against catastrophic failure.&lt;/p&gt;

&lt;p&gt;Your search infrastructure is the backbone of modern applications. Don't let a preventable configuration error become your next production incident. Architect for resilience from day one.&lt;/p&gt;




&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/best-practices.html" rel="noopener noreferrer"&gt;AWS OpenSearch Service Best Practices&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/database/get-started-with-amazon-elasticsearch-service-use-dedicated-master-instances-to-improve-cluster-stability/" rel="noopener noreferrer"&gt;OpenSearch Cluster Stability Guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/support/plans/" rel="noopener noreferrer"&gt;AWS Support Plans&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Have you experienced quorum loss in your OpenSearch clusters? Share your recovery stories in the comments below. Let's help the community learn from our collective experiences.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>elasticsearch</category>
      <category>devops</category>
      <category>ai</category>
    </item>
    <item>
      <title>Xray and Adot</title>
      <dc:creator>Ogbeide Godstime Osemenkhian</dc:creator>
      <pubDate>Fri, 22 Aug 2025 09:02:02 +0000</pubDate>
      <link>https://dev.to/gtogbes/xray-and-adot-31kj</link>
      <guid>https://dev.to/gtogbes/xray-and-adot-31kj</guid>
      <description></description>
    </item>
    <item>
      <title>Building Event-Driven Microservices: My LocalStack Journey</title>
      <dc:creator>Ogbeide Godstime Osemenkhian</dc:creator>
      <pubDate>Fri, 04 Jul 2025 15:53:58 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-event-driven-microservices-my-localstack-journey-439m</link>
      <guid>https://dev.to/aws-builders/building-event-driven-microservices-my-localstack-journey-439m</guid>
      <description>&lt;p&gt;In this blog, you will discover how LocalStack makes developing Lambda-based microservices effortless. Test event-driven architectures locally with S3, SQS, and more.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Microservices Development Challenge
&lt;/h2&gt;

&lt;p&gt;Last month, I was deep into building an event-driven microservices architecture using AWS Lambda. Picture this: file uploads triggering processing workflows, messages flowing between services, notifications being sent - the whole nine yards of modern serverless architecture.&lt;/p&gt;

&lt;p&gt;The problem? Testing this locally was nearly impossible. I was constantly deploying to AWS just to see if my Lambda functions would trigger correctly, if my SQS messages were being processed, or if my S3 events were firing as expected. The feedback loop was painfully slow, and debugging felt like shooting in the dark.&lt;/p&gt;

&lt;p&gt;That's when I discovered LocalStack, and it completely revolutionized how I develop event-driven systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is LocalStack? (And Why Microservices Developers Love It)
&lt;/h2&gt;

&lt;p&gt;Imagine having your own personal AWS cloud running right on your laptop. That's essentially what LocalStack is - a clever piece of software that mimics AWS services locally, perfect for testing complex event-driven architectures.&lt;/p&gt;

&lt;p&gt;I was skeptical at first. "There's no way Lambda functions will actually trigger from S3 events locally," I thought. However, after using it for several months, I can confidently say it has transformed how I build microservices.&lt;/p&gt;

&lt;p&gt;Here's what changed for my development workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Testing became instant instead of waiting for deployments&lt;/li&gt;
&lt;li&gt;I could develop complex event flows offline&lt;/li&gt;
&lt;li&gt;Debugging Lambda functions became as easy as debugging any local code&lt;/li&gt;
&lt;li&gt;No more "deploy and pray" development cycles&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Getting Started: My First LocalStack Setup
&lt;/h2&gt;

&lt;p&gt;Setting up LocalStack was surprisingly straightforward. I remember being intimidated by the documentation at first, but it's just a Docker container that pretends to be AWS.&lt;/p&gt;

&lt;p&gt;Here's the Docker Compose file I use (and yes, it is this simple):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;3.8'&lt;/span&gt;

&lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;localstack&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;container_name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-local-aws&lt;/span&gt;
    &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;localstack/localstack:4&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;SERVICES&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;s3,sqs,sns,ses&lt;/span&gt;  &lt;span class="c1"&gt;# Start with just what you need&lt;/span&gt;
      &lt;span class="na"&gt;PERSISTENCE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;            &lt;span class="c1"&gt;# Keep data between restarts&lt;/span&gt;
      &lt;span class="na"&gt;DEBUG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;                  &lt;span class="c1"&gt;# Helpful for learning&lt;/span&gt;

    &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4566:4566"&lt;/span&gt;  &lt;span class="c1"&gt;# The magic port where AWS lives locally&lt;/span&gt;

    &lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;localstack-data:/var/lib/localstack&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;/var/run/docker.sock:/var/run/docker.sock&lt;/span&gt;

&lt;span class="na"&gt;volumes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;localstack-data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first time I ran &lt;code&gt;docker-compose up&lt;/code&gt;, I felt like a kid on Christmas morning. Within 30 seconds, I had S3, SQS, SNS, and SES running on my laptop. No credit card required.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Services That Power My Event-Driven Architecture
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Lambda: The Heart of Microservices
&lt;/h3&gt;

&lt;p&gt;Lambda functions are the core of my event-driven architecture, and LocalStack makes testing them incredibly smooth. I can trigger functions from S3 events, SQS messages, or API Gateway calls - all locally.&lt;/p&gt;

&lt;p&gt;The game-changer? I can set breakpoints in my Lambda code and debug it just like any other local application. No more adding print statements and redeploying to see what's happening.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Your Lambda code works exactly the same
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Process S3 event, SQS message, etc.
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Success&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  S3: Event-Driven File Processing
&lt;/h3&gt;

&lt;p&gt;S3 events are crucial for microservices that process files. With LocalStack, I can upload a file and immediately see my Lambda function trigger, process the file, and send messages to other services.&lt;/p&gt;

&lt;p&gt;The best part? I can test different file types, sizes, and edge cases without worrying about cleanup or storage costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  SQS: Reliable Inter-Service Communication
&lt;/h3&gt;

&lt;p&gt;SQS is the backbone of my microservices communication. LocalStack lets me test message flows, dead letter queues, and retry logic locally. I can even simulate message failures to test my error handling.&lt;/p&gt;

&lt;p&gt;The magic moment was when I realized I could test my entire microservices choreography locally - watching messages flow from service to service in real-time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Making LocalStack Feel Like Home
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Setup Script That Saved My Sanity
&lt;/h3&gt;

&lt;p&gt;After a few weeks of manually creating the same microservices infrastructure every time I restarted LocalStack, I got smart and wrote a setup script. Now, every time LocalStack starts, it automatically creates my entire event-driven architecture.&lt;/p&gt;

&lt;p&gt;Here's my initialization script for a typical microservices setup:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Setting up microservices infrastructure..."&lt;/span&gt;

&lt;span class="c"&gt;# Create S3 buckets for different services&lt;/span&gt;
awslocal s3 mb s3://user-uploads
awslocal s3 mb s3://processed-files
awslocal s3 mb s3://service-logs

&lt;span class="c"&gt;# Set up message queues for inter-service communication&lt;/span&gt;
awslocal sqs create-queue &lt;span class="nt"&gt;--queue-name&lt;/span&gt; user-events
awslocal sqs create-queue &lt;span class="nt"&gt;--queue-name&lt;/span&gt; file-processing
awslocal sqs create-queue &lt;span class="nt"&gt;--queue-name&lt;/span&gt; notification-queue

&lt;span class="c"&gt;# Create SNS topics for pub/sub messaging&lt;/span&gt;
awslocal sns create-topic &lt;span class="nt"&gt;--name&lt;/span&gt; user-updates
awslocal sns create-topic &lt;span class="nt"&gt;--name&lt;/span&gt; system-alerts

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Microservices infrastructure ready! 🚀"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I put this in &lt;code&gt;./initialization/aws/init-aws.sh&lt;/code&gt; and mount that directory to &lt;code&gt;/etc/localstack/init/ready.d&lt;/code&gt; in the compose file, so LocalStack runs it automatically when it starts. It's like having a personal assistant set up your workspace every morning.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pro Tips I Learned the Hard Way
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Start Small&lt;/strong&gt;: Don't enable every AWS service on day one. I made this mistake and LocalStack took forever to start. Begin with just the services you actually use.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Persistence&lt;/strong&gt;: Add &lt;code&gt;PERSISTENCE: 1&lt;/code&gt; to your Docker Compose file. Trust me, you don't want to lose your test data every time you restart.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install awslocal&lt;/strong&gt;: This little CLI tool is a game-changer. Instead of typing &lt;code&gt;aws --endpoint-url=http://localhost:4566&lt;/code&gt;, you just type &lt;code&gt;awslocal&lt;/code&gt;. Your fingers will thank you.&lt;/p&gt;

&lt;h2&gt;
  
  
  How LocalStack Revolutionized My Microservices Testing
&lt;/h2&gt;

&lt;h3&gt;
  
  
  From "Deploy and Debug" to "Test First"
&lt;/h3&gt;

&lt;p&gt;Before LocalStack, my microservices testing strategy was basically "write unit tests for individual functions and hope the integration works in production." Testing event-driven flows was nearly impossible locally.&lt;/p&gt;

&lt;p&gt;Now I can test my entire microservices architecture locally:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Upload a file to S3 (triggers file-processing service)&lt;/li&gt;
&lt;li&gt;Lambda processes file and sends message to SQS&lt;/li&gt;
&lt;li&gt;Another Lambda picks up the SQS message&lt;/li&gt;
&lt;li&gt;Processes data and stores results in DynamoDB&lt;/li&gt;
&lt;li&gt;Publishes completion event to SNS&lt;/li&gt;
&lt;li&gt;Notification service sends email via SES&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All of this happens in seconds, and I can run it as many times as needed to perfect the flow.&lt;/p&gt;

&lt;h3&gt;
  
  
  The "Gotcha" Moments That LocalStack Catches
&lt;/h3&gt;

&lt;p&gt;Let me tell you about some tricky bugs LocalStack has helped me catch:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Event Structure Mixup&lt;/strong&gt;: I once wrote a Lambda function that expected S3 events but was receiving SQS messages. The event structure was completely different, but I only discovered this when testing the full flow locally.&lt;/p&gt;
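
&lt;p&gt;A cheap guard at the top of a handler catches this mixup early. Here's a minimal sketch; the &lt;code&gt;eventSource&lt;/code&gt; field is part of the standard S3 and SQS event envelopes:&lt;/p&gt;

```python
def classify_trigger(event):
    # S3 and SQS both deliver a "Records" list, but with different envelopes
    records = event.get("Records", [])
    if not records:
        return "unknown"
    source = records[0].get("eventSource", "")
    if source == "aws:s3":
        return "s3"
    if source == "aws:sqs":
        return "sqs"
    return "unknown"

def lambda_handler(event, context):
    # Fail loudly if the function was wired to the wrong trigger
    trigger = classify_trigger(event)
    if trigger != "s3":
        raise ValueError(f"expected an S3 event, got: {trigger}")
    return {"statusCode": 200, "body": "Success"}
```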

&lt;p&gt;&lt;strong&gt;The Permission Puzzle&lt;/strong&gt;: My Lambda function was failing silently because it didn't have permission to write to a specific S3 bucket. LocalStack's logs showed me the exact permission error immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Message Format Fiasco&lt;/strong&gt;: I was sending JSON messages between services, but one service was expecting XML. LocalStack let me trace the entire message flow and spot the mismatch instantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Making LocalStack Lightning Fast
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Need for Speed
&lt;/h3&gt;

&lt;p&gt;When I first started using LocalStack, it felt a bit sluggish. Turns out, I was being greedy and enabling every single AWS service. Here's what I learned about making it fast:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Only Enable What You Use&lt;/strong&gt;: If you're just testing S3 and SQS, don't enable Lambda, DynamoDB, and 15 other services. Your startup time will go from 2 minutes to 20 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Persistence is Your Friend&lt;/strong&gt;: Enable persistence so your data survives container restarts. Nothing's more frustrating than losing your test setup because Docker decided to restart.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Give It Enough Memory&lt;/strong&gt;: LocalStack is doing a lot of heavy lifting. I give it at least 2GB of RAM, and it runs much smoother.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Gotchas I Wish Someone Had Told Me
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Port 4566 is Popular
&lt;/h3&gt;

&lt;p&gt;Apparently, everyone uses port 4566 for something. If LocalStack won't start, check if something else is using that port. I usually just change the host side of the mapping (e.g. "4567:4566") in my Docker Compose file and move on with my life.&lt;/p&gt;

&lt;h3&gt;
  
  
  Your Code Needs One Tiny Change
&lt;/h3&gt;

&lt;p&gt;The beautiful thing about LocalStack is that your existing AWS code works with almost no changes. You just need to point it to &lt;code&gt;localhost:4566&lt;/code&gt; instead of the real AWS endpoints:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Production
&lt;/span&gt;&lt;span class="n"&gt;s3_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s3&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# LocalStack (just add endpoint_url)
&lt;/span&gt;&lt;span class="n"&gt;s3_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s3&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;endpoint_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;http://localhost:4566&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Lambda Functions Can Be Tricky
&lt;/h3&gt;

&lt;p&gt;If you're testing Lambda functions, they need Docker to run inside LocalStack. Make sure you mount the Docker socket in your compose file, or your Lambdas will fail silently (and you'll spend hours debugging like I did).&lt;/p&gt;

&lt;h2&gt;
  
  
  When Things Go Wrong (And They Will)
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Debugging Like a Detective
&lt;/h3&gt;

&lt;p&gt;LocalStack is pretty reliable, but when something goes wrong, here's how I debug:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check the Health&lt;/strong&gt;: &lt;code&gt;curl http://localhost:4566/_localstack/health&lt;/code&gt; tells you which services are running. If S3 shows as "disabled" when you expect it to be "available," you know where to start looking.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2668rihhnfgj17izmi73.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2668rihhnfgj17izmi73.png" alt="Image description" width="800" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Read the Logs&lt;/strong&gt;: &lt;code&gt;docker logs my-local-aws -f&lt;/code&gt; is your best friend. LocalStack is pretty chatty about what it's doing, especially with DEBUG mode enabled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;List Your Resources&lt;/strong&gt;: Sometimes I forget what I've created. &lt;code&gt;awslocal s3 ls&lt;/code&gt; or &lt;code&gt;awslocal sqs list-queues&lt;/code&gt; helps me remember what's actually there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Development Speed Boost You'll Love
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Instant Feedback Loops&lt;/strong&gt;: Testing microservices interactions used to take minutes (deploy, test, debug, repeat). Now it takes seconds. I can iterate on my event-driven architecture as fast as I can think.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Complete Offline Development&lt;/strong&gt;: I can develop my entire microservices stack on a plane, in a coffee shop, or anywhere without internet. The whole AWS ecosystem runs on my laptop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fearless Experimentation&lt;/strong&gt;: Want to try a new event pattern? Test a different message format? Experiment with Lambda triggers? Go ahead! There's no deployment overhead or cleanup required.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real Integration Testing&lt;/strong&gt;: I can test the actual integration between services, not just mock it. This catches so many issues that unit tests miss.&lt;/p&gt;

&lt;h2&gt;
  
  
  Making the Transition Smooth
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Environment Switch Pattern
&lt;/h3&gt;

&lt;p&gt;Here's the pattern I use to seamlessly switch between LocalStack and real AWS:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_aws_client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getenv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ENVIRONMENT&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;local&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;endpoint_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;http://localhost:4566&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;aws_access_key_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;test&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;aws_secret_access_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;test&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;service&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Uses real AWS credentials
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now I can develop locally and deploy to production with the same code. Just change an environment variable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Keeping Your Data Safe
&lt;/h3&gt;

&lt;p&gt;One thing I learned the hard way: back up your LocalStack data if you're working on something important. I once lost a week's worth of test data when I accidentally deleted my Docker volume.&lt;/p&gt;

&lt;p&gt;Now I occasionally run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;cp &lt;/span&gt;my-local-aws:/var/lib/localstack ./backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Just in case.&lt;/p&gt;

&lt;h2&gt;
  
  
  When LocalStack Misbehaves
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The "Turn It Off and On Again" Solution
&lt;/h3&gt;

&lt;p&gt;90% of LocalStack issues can be solved with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker-compose down &lt;span class="nt"&gt;-v&lt;/span&gt;
docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;-v&lt;/code&gt; flag removes volumes, giving you a completely fresh start. Sometimes LocalStack just needs a clean slate.&lt;/p&gt;

&lt;h3&gt;
  
  
  The "It Was Working Yesterday" Problem
&lt;/h3&gt;

&lt;p&gt;If LocalStack suddenly stops working, check if Docker is running out of space:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker system &lt;span class="nb"&gt;df&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you're low on space, clean up with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker system prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The Lambda Mystery
&lt;/h3&gt;

&lt;p&gt;Lambda functions in LocalStack can be finicky. If they're not working, make sure you've mounted the Docker socket in your compose file. Lambda needs Docker to run, and without that mount, it fails silently.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Every Microservices Developer Should Try LocalStack
&lt;/h2&gt;

&lt;p&gt;LocalStack isn't just a tool - it's a development philosophy. It's about building and testing your entire system locally before ever touching the cloud.&lt;/p&gt;

&lt;h3&gt;
  
  
  Your Next Steps
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Start with Lambda + SQS&lt;/strong&gt;: Copy my Docker Compose file and create a simple event-driven flow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add S3 Events&lt;/strong&gt;: Test file upload triggers and processing workflows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build Your Architecture&lt;/strong&gt;: Gradually add more services as your microservices grow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Share Your Experience&lt;/strong&gt;: I'd love to hear how LocalStack changes your microservices development&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The Bottom Line
&lt;/h3&gt;

&lt;p&gt;If you're building event-driven microservices with AWS, LocalStack will transform your development experience. It's the difference between hoping your services work together and knowing they do.&lt;/p&gt;

&lt;p&gt;Trust me, once you experience the joy of testing complex event flows locally, you'll never go back to deploy-and-pray development.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you tried LocalStack? What's been your experience? Drop a comment below - I'd love to hear your stories!&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Helpful Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://docs.localstack.cloud/" rel="noopener noreferrer"&gt;LocalStack Documentation&lt;/a&gt; - The official docs&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/localstack/awscli-local" rel="noopener noreferrer"&gt;awslocal CLI&lt;/a&gt; - Makes your life easier&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://localstack.cloud/slack" rel="noopener noreferrer"&gt;LocalStack Community&lt;/a&gt; - Great place to get help&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>microservices</category>
      <category>lambda</category>
      <category>eventdriven</category>
      <category>aws</category>
    </item>
    <item>
      <title>DLQ Redrive for Amazon SQS</title>
      <dc:creator>Ogbeide Godstime Osemenkhian</dc:creator>
      <pubDate>Sat, 18 Jan 2025 23:31:22 +0000</pubDate>
      <link>https://dev.to/aws-builders/dlq-redrive-for-amazon-sqs-5dkm</link>
      <guid>https://dev.to/aws-builders/dlq-redrive-for-amazon-sqs-5dkm</guid>
      <description>&lt;h2&gt;
  
  
  SQS
&lt;/h2&gt;

&lt;p&gt;Amazon Simple Queue Service (SQS) is a fully managed messaging service that helps decouple application components and manage message queues efficiently. &lt;br&gt;
While SQS ensures reliable message delivery, there are cases where messages fail to be processed successfully. DLQ (Dead Letter Queue) Redrive is critical in handling such cases effectively.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is DLQ Redrive?
&lt;/h2&gt;

&lt;p&gt;A Dead Letter Queue (DLQ) is a secondary queue used to store messages that couldn’t be processed successfully by a consumer. DLQ redrive is the mechanism for moving those messages out of the DLQ, typically back to the source queue, so they can be processed again.&lt;/p&gt;
&lt;h2&gt;
  
  
  How Dead-Letter Queue Redrive Works
&lt;/h2&gt;

&lt;p&gt;Dead-letter queue (DLQ) redrive is a powerful feature in Amazon SQS that helps you manage unconsumed messages in a dead-letter queue. Instead of leaving messages stuck in the DLQ, you can use the redrive functionality to move these messages back to their source queue for another attempt at processing or redirect them to a different queue for specialized handling.&lt;/p&gt;

&lt;p&gt;By default, the DLQ redrive process moves messages from the DLQ to the original source queue. However, Amazon SQS also provides flexibility by allowing you to specify a different queue as the redrive destination. The key requirement is that the destination queue must match the type of the DLQ. For instance, if the DLQ is a FIFO queue, the destination queue must also be a FIFO queue to ensure message ordering and deduplication requirements are met.&lt;/p&gt;

&lt;p&gt;Another essential configuration option is the redrive velocity, which controls the rate at which messages are moved from the DLQ to the destination queue. This lets you balance throughput with system stability, ensuring the destination queue isn’t overwhelmed with a sudden influx of messages.&lt;/p&gt;
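&lt;p&gt;Redrive can also be started programmatically via the SQS &lt;code&gt;StartMessageMoveTask&lt;/code&gt; API. The sketch below just assembles its parameters; the helper name and ARNs are illustrative, and the actual boto3 call is commented out because it needs real AWS credentials:&lt;/p&gt;

```python
def build_redrive_task(dlq_arn, destination_arn=None, max_messages_per_second=None):
    """Assemble parameters for sqs.start_message_move_task()."""
    params = {"SourceArn": dlq_arn}
    if destination_arn is not None:
        # Omitting DestinationArn redrives messages back to the original source queue.
        params["DestinationArn"] = destination_arn
    if max_messages_per_second is not None:
        # This is the "redrive velocity" knob mentioned above.
        params["MaxNumberOfMessagesPerSecond"] = max_messages_per_second
    return params

# sqs = boto3.client("sqs")
# sqs.start_message_move_task(**build_redrive_task(
#     "arn:aws:sqs:us-east-1:123456789012:my-dlq", max_messages_per_second=50))
```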
&lt;h2&gt;
  
  
  Message Order During Redrive
&lt;/h2&gt;

&lt;p&gt;When redriving messages, Amazon SQS processes them in the order they were received in the DLQ, starting with the oldest message first. However, it’s important to note how this interacts with new messages in the destination queue. The destination queue processes all incoming messages—whether redriven from the DLQ or newly published by a producer—in the order they arrive.&lt;/p&gt;

&lt;p&gt;For example, imagine a FIFO queue receiving messages from a producer while also ingesting redriven messages from a DLQ. These two streams of messages will interweave based on their arrival timestamps, ensuring the destination queue processes messages in a consistent but mixed order.&lt;/p&gt;
&lt;h2&gt;
  
  
  Why Use DLQ Redrive?
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Easy Debugging: Messages that fail repeatedly are moved to the DLQ, providing a safe space for analysis.&lt;/li&gt;
&lt;li&gt;System Resilience: By isolating problematic messages, DLQ Redrive helps maintain the stability of the main queue.&lt;/li&gt;
&lt;li&gt;Improved Visibility: Developers gain insights into recurring issues or patterns in message failures.&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Setting up a DLQ in Amazon SQS
&lt;/h2&gt;

&lt;p&gt;Setting up a DLQ involves creating a primary queue and associating a secondary queue (the DLQ) with it.&lt;/p&gt;
&lt;h2&gt;
  
  
  Create a Primary Queue
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Open the Amazon SQS Console.&lt;/li&gt;
&lt;li&gt;Click Create Queue.&lt;/li&gt;
&lt;li&gt;Configure the queue settings (e.g., name, retention period, visibility timeout).&lt;/li&gt;
&lt;li&gt;Note the ARN (Amazon Resource Name) of the queue for later use.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Create a Dead Letter Queue
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create a second queue, which will serve as the DLQ.&lt;/li&gt;
&lt;li&gt;Note the ARN of the DLQ as you’ll associate it with the primary queue.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Associate the DLQ with the Primary Queue
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to the primary queue in the SQS Console.&lt;/li&gt;
&lt;li&gt;Under the dead-letter queue (redrive policy) settings, specify:
&lt;ul&gt;
&lt;li&gt;The ARN of the DLQ.&lt;/li&gt;
&lt;li&gt;The MaxReceiveCount, which determines how many processing attempts a message gets before it is moved to the DLQ.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Processing Messages in the DLQ
&lt;/h2&gt;

&lt;p&gt;Once messages are in the DLQ, you’ll need to handle them manually or programmatically to address the underlying issues. AWS provides several ways to process messages in a DLQ:&lt;/p&gt;
&lt;h2&gt;
  
  
  Manually Inspect Messages:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use the AWS Management Console to view the messages in the DLQ.&lt;/li&gt;
&lt;li&gt;Analyze the content for potential errors or reasons for failure.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Programmatically Retrieve Messages:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use the AWS SDK to fetch messages from the DLQ for automated inspection and reprocessing.&lt;/li&gt;
&lt;/ul&gt;
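&lt;p&gt;As a sketch of the SDK approach (the helper name and sample fields are my own; the boto3 calls are commented out since they require a live queue), requesting &lt;code&gt;ApproximateReceiveCount&lt;/code&gt; alongside each message tells you how many delivery attempts it survived before landing in the DLQ:&lt;/p&gt;

```python
def summarize_dlq_message(msg):
    """Pull the fields most useful for diagnosing one failed message
    (msg is a single entry from receive_message()["Messages"])."""
    return {
        "id": msg["MessageId"],
        "body": msg["Body"],
        "receive_count": int(msg["Attributes"]["ApproximateReceiveCount"]),
    }

# sqs = boto3.client("sqs")
# resp = sqs.receive_message(
#     QueueUrl=dlq_url,
#     MaxNumberOfMessages=10,
#     AttributeNames=["ApproximateReceiveCount"],
# )
# for msg in resp.get("Messages", []):
#     print(summarize_dlq_message(msg))
```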
&lt;h2&gt;
  
  
  Automating DLQ Redrive Setup
&lt;/h2&gt;

&lt;p&gt;Manual configuration can be time-consuming and error-prone, especially when dealing with multiple queues. Automation ensures consistency across environments. Below is an example setup using Terraform.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_sqs_queue" "main-queue" {
  name = "redriveBlogQueue"
}

resource "aws_sqs_queue" "dlq" {
  name = "RedrivBlog-dlq"
  redrive_allow_policy = jsonencode({
    redrivePermission = "byQueue",
    sourceQueueArns   = [aws_sqs_queue.main-queue.arn]
  })
}

resource "aws_sqs_queue_redrive_policy" "redrive" {
  queue_url = aws_sqs_queue.main-queue.id
  redrive_policy = jsonencode({
    deadLetterTargetArn = aws_sqs_queue.dlq.arn
    maxReceiveCount     = 4
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv164utjhzypqnoq73qy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv164utjhzypqnoq73qy9.png" alt="Redrive highlight" width="552" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The DLQ Redrive process is essential for building resilient, fault-tolerant systems with Amazon SQS. By isolating problematic messages and automating their handling, you can ensure the stability of your application while gaining insights into recurring issues.&lt;/p&gt;

&lt;p&gt;Implementing DLQs and automating their redrive process is a best practice for any distributed system using SQS. Start small with manual setups, and scale up with automation using tools like the AWS SDK, CloudWatch, and Terraform.&lt;/p&gt;

&lt;p&gt;How are you leveraging DLQ redrives in your projects? Share your thoughts and challenges!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Next.js Deployment on AWS Lambda, ECS, Amplify, and Vercel: What I Learned</title>
      <dc:creator>Ogbeide Godstime Osemenkhian</dc:creator>
      <pubDate>Mon, 13 Jan 2025 16:16:06 +0000</pubDate>
      <link>https://dev.to/aws-builders/nextjs-deployment-on-aws-lambda-ecs-amplify-and-vercel-what-i-learned-nmc</link>
      <guid>https://dev.to/aws-builders/nextjs-deployment-on-aws-lambda-ecs-amplify-and-vercel-what-i-learned-nmc</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;Next.js, a React-based framework for building server-side rendered (SSR) applications, has gained immense popularity due to its performance and flexibility. However, selecting the right platform to deploy your Next.js application can significantly impact cost, scalability, and development experience.&lt;/p&gt;

&lt;p&gt;There are numerous platform options for deploying a Next.js application, each with unique strengths and tradeoffs. To determine the most suitable platform for different scenarios, I investigated deploying a &lt;a href="https://github.com/gtogbes/NextJs-Poc/tree/main" rel="noopener noreferrer"&gt;sample Next.js app&lt;/a&gt; on AWS Lambda, AWS ECS, AWS Amplify, and Vercel.&lt;/p&gt;

&lt;p&gt;In this blog, I’ll share my findings, deployment experiences, tradeoffs for each platform, and what I learned.&lt;/p&gt;

&lt;h2&gt;
  
  
  Acceptance Criteria
&lt;/h2&gt;

&lt;p&gt;The following criteria guided the investigation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost: Affordability for various levels of usage.&lt;/li&gt;
&lt;li&gt;Complexity: The effort and expertise required for setup and maintenance.&lt;/li&gt;
&lt;li&gt;Ease of Deployment: How straightforward it is to deploy Next.js.&lt;/li&gt;
&lt;li&gt;Performance: Response times, handling traffic spikes, and latency.&lt;/li&gt;
&lt;li&gt;Resilience: Fault tolerance and high availability.&lt;/li&gt;
&lt;li&gt;Scalability: The platform’s ability to handle growing traffic seamlessly.&lt;/li&gt;
&lt;li&gt;Securely Accessing Private Resources: Capability to integrate with internal APIs or databases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18grea5kzvor9d5z95m7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F18grea5kzvor9d5z95m7.png" alt="Architecture considered for the deployment options" width="786" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Lambda: A Serverless Approach
&lt;/h2&gt;

&lt;p&gt;AWS Lambda offers a serverless computing service where you only pay for what you use—great for cost-conscious projects. There is no server management, no upfront costs, just code execution. However, deploying a Next.js application to Lambda isn’t straightforward due to Lambda’s 250 MB deployment package size limit and cold start delays.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Learned&lt;/strong&gt;&lt;br&gt;
Next.js generates lots of files during execution, often exceeding the size limit. To address this, I used the AWS Lambda Web Adapter, which allowed me to proxy requests between the Lambda runtime and the web application without heavy code refactoring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment Process&lt;/strong&gt;&lt;br&gt;
Here’s the approach I took:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Tools:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SAM CLI&lt;/li&gt;
&lt;li&gt;AWS CLI&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dockerize the App:&lt;/strong&gt; Include the AWS Lambda Web Adapter in your Dockerfile (this is done by copying it from the awsguru public ECR image):&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;`COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.8.4 /lambda-adapter /opt/extensions/lambda-adapter`
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
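&lt;p&gt;For context, a complete Dockerfile built around that COPY line might look like the following sketch (the base image, paths, and port are assumptions based on Next.js standalone output; adapt them to your app):&lt;/p&gt;

```dockerfile
FROM node:18-slim
# Pull the Lambda Web Adapter in as a Lambda extension
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.8.4 /lambda-adapter /opt/extensions/lambda-adapter
ENV PORT=3000
WORKDIR /app
# Copy the output produced by `next build` with output: 'standalone'
COPY .next/standalone ./
COPY .next/static ./.next/static
COPY public ./public
CMD ["node", "server.js"]
```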


&lt;p&gt;&lt;strong&gt;Modify next.config.js:&lt;/strong&gt; Add standalone mode to the next.config.js file for compatibility&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const nextConfig = {
    output: 'standalone',
}

module.exports = nextConfig
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;SAM:&lt;/strong&gt; Finally, SAM was used to build, package, and deploy the app to Lambda using &lt;code&gt;sam build &amp;amp;&amp;amp; sam deploy&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For a more robust setup, CI/CD pipelines (e.g., GitHub Actions) can automate the deployment process.&lt;/p&gt;
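&lt;p&gt;As one hedged example (the action versions, region, and secret name are assumptions, not part of my original setup), a minimal GitHub Actions workflow wrapping those same two SAM commands could look like:&lt;/p&gt;

```yaml
name: deploy-nextjs-lambda
on:
  push:
    branches: [main]
permissions:
  id-token: write   # lets the job assume an AWS role via OIDC
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/setup-sam@v2
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-region: us-east-1
          role-to-assume: ${{ secrets.AWS_DEPLOY_ROLE_ARN }}
      - run: sam build
      - run: sam deploy --no-confirm-changeset --no-fail-on-empty-changeset
```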

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cold Starts: While cold starts were present, the AWS Lambda Web Adapter helped mitigate their impact.&lt;/li&gt;
&lt;li&gt;File Size Limitations: Next.js’s generated files can exceed Lambda’s 250MB limit, requiring Dockerization or optimizations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Trade-offs&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pros: Cost-effective, serverless scaling, and pay-as-you-go pricing.&lt;/li&gt;
&lt;li&gt;Cons: Cold starts and limitations on file sizes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AWS ECS: Containerized Application Management
&lt;/h2&gt;

&lt;p&gt;Amazon Elastic Container Service (ECS) is a fully managed container orchestration service. It allows you to deploy, manage, and scale containerized applications, either using Amazon EC2 or AWS Fargate, a serverless compute engine for containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What I Learned&lt;/strong&gt;&lt;br&gt;
Deployment to ECS requires containerizing the app and defining infrastructure components like tasks, services, and scaling configurations. While more complex than Lambda, ECS offers powerful tools for handling high traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment Process
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Containerize the App:&lt;/strong&gt; Build a Docker image for the app and push it to Amazon ECR.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Define Infrastructure:&lt;/strong&gt; Create a task definition, service, and autoscaling configurations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deploy:&lt;/strong&gt; I used Terraform for the deployment; the AWS CLI or CloudFormation can also be used to create and manage the resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Resilience and Scaling: ECS excels at handling traffic with proper autoscaling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Infrastructure Management: More effort is needed compared to serverless solutions like Lambda.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Trade-offs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Pros: High performance, flexible scaling, and resilience.&lt;/li&gt;
&lt;li&gt;Cons: Higher complexity and cost compared to Lambda.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AWS Amplify
&lt;/h2&gt;

&lt;p&gt;AWS Amplify is a fully managed service that simplifies front-end and mobile app development. It offers features like CI/CD pipelines, hosting, and backend services. Amplify is optimized for static and server-side rendered apps, making it an excellent choice for Next.js.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;Amplify abstracts away the complexity of hosting Next.js applications. With just a few clicks, I was able to connect my repository, configure build settings, and deploy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment Process
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Connect Repository: Link the GitHub repository to Amplify.&lt;/li&gt;
&lt;li&gt;Select Branch: Choose the branch to deploy.&lt;/li&gt;
&lt;li&gt;Build and Deploy: Amplify detects the framework and runs the necessary build commands (yarn install, etc.).&lt;/li&gt;
&lt;li&gt;Manage Resources: Use the Amplify UI for custom domains, redirects, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ease of Use: Amplify’s automated process was incredibly simple, making it perfect for developers without extensive AWS experience.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Scaling: Amplify handled traffic spikes well, demonstrating good scalability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Trade-offs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Pros: Low complexity, high ease of use, and seamless CI/CD.&lt;/li&gt;
&lt;li&gt;Cons: Limited support for securely accessing private resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Vercel: Next.js’s Native Home
&lt;/h2&gt;

&lt;p&gt;Vercel is the team behind Next.js, so it’s no surprise that their platform is designed to handle it flawlessly. With native SSR support, automatic scaling, and seamless integration, Vercel offers the easiest deployment experience for Next.js developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;Deploying with Vercel was incredibly simple. I connected my GitHub repository, clicked Deploy, and in less than 5 minutes, my app was live on a .vercel.app domain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment Process
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Sign Up and Connect Repository: Create a Vercel account and link your GitHub repository.&lt;/li&gt;
&lt;li&gt;Deploy the App: Click Deploy, and Vercel automatically builds and deploys the app.&lt;/li&gt;
&lt;li&gt;Access the App: The app will be available on a .vercel.app domain, which can be customized.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Optimal Performance: Vercel’s integration with Next.js made it the fastest and easiest deployment platform.&lt;/li&gt;
&lt;li&gt;Cold Starts: Some initial delays were observed, likely due to serverless function cold starts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Trade-offs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Pros: Extremely easy to use, high performance, and rich features.&lt;/li&gt;
&lt;li&gt;Cons: Higher cost for advanced features and limited private resource access on lower-tier plans.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Through this investigation, I learned that each platform has its unique strengths and trade-offs. Ultimately, the best platform depends on your application’s needs, team expertise, and budget. For my use case, ECS was the best fit as it allowed more control over the infrastructure and enabled secure access to backend services using VPC.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Serverless Amazon API Gateway</title>
      <dc:creator>Ogbeide Godstime Osemenkhian</dc:creator>
      <pubDate>Sun, 12 Feb 2023 23:45:28 +0000</pubDate>
      <link>https://dev.to/aws-builders/serverless-amazon-api-gateway-57e4</link>
      <guid>https://dev.to/aws-builders/serverless-amazon-api-gateway-57e4</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, I will take you through a high-level walkthrough of the Amazon API gateway. I will have a future post on demos and hands-on teaching on creating our own API gateway, connecting it to a server, and serverless backends, and securing our API gateways by implementation of best practices.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is an API&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An Application Programming Interface (API) acts as an entry point, or front door, for applications to access business logic, data, or functionality from backend services. API design usually follows a client-server model: the app sending the request is the client, while the app sending the response is the server.&lt;br&gt;
APIs are mechanisms that allow communication between software components using set protocols and definitions. An example of such communication is the weather app on your phone sending a request to the weather bureau’s software system for the weather at a particular place and time, then displaying the weather information it receives in response. The most popular types of API integrations are WebSocket APIs and REST APIs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is REST?&lt;/strong&gt;&lt;br&gt;
REST stands for Representational State Transfer and is an architectural style for developing modern, user-friendly web services.&lt;br&gt;
Instead of defining custom methods and protocols such as SOAP or WSDL, REST uses HTTP as the transport protocol. HTTP is used to exchange textual representations of web resources across different systems, using predefined methods such as GET, POST, PUT, PATCH, and DELETE. JSON is its standard representation format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon API Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon API Gateway is a fully managed service that gives developers the flexibility to create, publish, maintain, monitor, and secure APIs at any scale. API Gateway securely handles every task involved in accepting and processing up to hundreds of thousands of concurrent API calls, including access control, traffic management, CORS support, authorization, monitoring, throttling, and API version management.&lt;br&gt;
Amazon API Gateway creates RESTful APIs that are HTTP-based, enable stateless client-server communication, and use standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE. It also creates WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server, and routes incoming messages based on message content.&lt;br&gt;
Amazon API Gateway enables you to design and build RESTful interfaces and helps connect them to your application backend. With API Gateway, you have the flexibility to design your own resource structure, add dynamic routing parameters, and develop custom authorization logic. Each API resource can be configured independently, while each stage can have specific cache, throttling, and logging configurations.&lt;br&gt;
API Gateway is a serverless service, which means you don’t have to worry about provisioning the underlying resources. AWS provisions everything under the hood while you choose the implementation and configuration your application needs. This boosts developer productivity by keeping the focus on building code rather than managing infrastructure.&lt;br&gt;
Amazon API Gateway can integrate with various AWS services such as Lambda, Elastic Load Balancing, AWS WAF, and Amazon CloudFront.&lt;br&gt;
With Amazon API Gateway, you can define resources, map them to custom models, specify which methods are available (i.e. GET, POST, etc.), and eventually bind each method to a particular Lambda function. Alternatively, you can attach more than one method to a single Lambda function. This way, you maintain fewer functions and partially avoid Lambda cold-start issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon API Gateway Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpkww4ml52vpra6yx2xj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkpkww4ml52vpra6yx2xj.png" alt="Amazon API Gateway Architecture" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>gratitude</category>
      <category>career</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Moving AWS RDS from Single AZ To Multi AZ</title>
      <dc:creator>Ogbeide Godstime Osemenkhian</dc:creator>
      <pubDate>Fri, 30 Dec 2022 23:19:26 +0000</pubDate>
      <link>https://dev.to/aws-builders/moving-aws-rds-from-single-az-to-multiple-az-moi</link>
      <guid>https://dev.to/aws-builders/moving-aws-rds-from-single-az-to-multiple-az-moi</guid>
      <description>&lt;p&gt;Overview&lt;/p&gt;

&lt;p&gt;Amazon Relational Database Service (RDS) is a fully managed database service that makes it easy to set up, operate, and scale relational databases in the cloud. Each Amazon RDS database runs on a DB instance backed by an Amazon Elastic Block Store (Amazon EBS) volume for storage.&lt;br&gt;
RDS supports seven database engines: Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server.&lt;br&gt;
For production workloads, it is important to build high availability into database planning, both to avoid a single point of failure in the event of an Availability Zone outage and to reduce latency for write- or read-heavy applications by adding read replicas to the high-availability setup.&lt;br&gt;
To ensure high availability, AWS RDS supports two approaches:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;RDS Multi-AZ deployments for the MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server engines.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For Amazon Aurora engines, data is replicated six ways across three Availability Zones.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this article, our focus is on RDS Multi-AZ deployment with the PostgreSQL engine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3konkgnl94pv2396h7ik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3konkgnl94pv2396h7ik.png" alt="Image description" width="286" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Types of RDS setup&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Single-AZ setup: In a Single-AZ setup, one RDS DB instance and one or more EBS storage volumes are deployed in a single Availability Zone.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi-AZ setup: In a Multi-AZ configuration, RDS DB instances and EBS storage volumes are deployed across multiple Availability Zones. A Multi-AZ deployment has a primary (Master) database in one AZ and a standby (Secondary) database in another AZ. Only the primary serves traffic; if the primary fails, the standby takes over.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi-AZ DB Cluster: A Multi-AZ DB cluster deployment is a high-availability deployment mode of Amazon RDS with two readable standby DB instances. A Multi-AZ DB cluster has a writer DB instance and two reader DB instances in three separate Availability Zones in the same AWS Region. Multi-AZ DB clusters provide high availability, increased capacity for read workloads, and lower write latency when compared to Multi-AZ DB instance deployments.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Benefits&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Single-AZ database deployment;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It is cost-efficient: you pay only for the database in use, with no standby instance to pay for.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It serves applications well as long as there is no fault, and it supports automatic backups on the schedule you specify.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It is ideal for development environments, where it helps keep costs down.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Multi-AZ database deployment;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Amazon RDS maintains a redundant, consistent standby copy of your data using synchronous storage replication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon RDS detects and automatically recovers from the most common failure scenarios for Multi-AZ deployments, so you can resume database operations as quickly as possible without administrative intervention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon RDS automatically performs a failover in the event of loss of availability in the primary Availability Zone, loss of network connectivity, compute unit failure, or storage failure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Having separate Availability Zones greatly reduces the likelihood that copies of the database are affected at the same time by most types of failures, providing a highly available database architecture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Combined with read replicas, a Multi-AZ deployment can also lower latency for read-heavy applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Multi-AZ Cluster database deployment;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;With a Multi-AZ DB cluster, Amazon RDS replicates data from the writer DB instance to both of the reader DB instances using the DB engine's native replication capabilities.&lt;/li&gt;
&lt;li&gt;Reader DB instances act as automatic failover targets and also serve read traffic to increase application read throughput. If an outage occurs on your writer DB instance, RDS manages failover to one of the reader DB instances.&lt;/li&gt;
&lt;li&gt;Multi-AZ DB clusters have lower write latency when compared to Multi-AZ DB instance deployments.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtnlsk7rmyt3gwrwhl5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtnlsk7rmyt3gwrwhl5d.png" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Sample Multi-AZ setup&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Risks&lt;/strong&gt;&lt;br&gt;
For projects running Single-AZ production RDS databases that are considering a move to a Multi-AZ deployment after reading this article, the sections below list things to keep in mind and account for before carrying out the migration.&lt;br&gt;
&lt;strong&gt;Single-AZ DB&lt;/strong&gt;&lt;br&gt;
It is not highly available: when the Availability Zone fails, the database is inaccessible for the duration of the outage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-AZ DB&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The cost of operation increases, because twice the resources and infrastructure back the database: running across two Availability Zones costs roughly double a Single-AZ deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Though it does not happen every time, latency can spike during the migration itself.&lt;br&gt;
When converting a DB instance from Single-AZ to Multi-AZ, Amazon RDS creates a snapshot of the database volumes and restores it to new volumes in a different Availability Zone. Although the newly restored volumes are available almost immediately, they don’t reach their specified performance until the underlying storage blocks are copied from the snapshot.&lt;br&gt;
During the conversion you can therefore experience elevated latency and degraded performance. The impact depends on volume type, workload, instance, and volume size, and can be significant for large write-intensive DB instances during peak hours of operation.&lt;br&gt;
In practice this latency rarely matters, because the primary database keeps serving traffic during the conversion, unless the primary is already capacity-constrained before the migration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
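&lt;p&gt;If, after weighing these risks, you decide to convert, the change itself is a single modification of the DB instance. A minimal sketch using the AWS CLI; the instance identifier &lt;code&gt;mydb-prod&lt;/code&gt; is a placeholder:&lt;/p&gt;

```shell
# Convert an existing Single-AZ DB instance to Multi-AZ.
# "mydb-prod" is a placeholder identifier. Schedule this in a low-traffic
# window: the snapshot-and-restore behind the conversion can elevate latency.
aws rds modify-db-instance \
  --db-instance-identifier mydb-prod \
  --multi-az \
  --apply-immediately

# Verify the deployment mode once the modification completes.
aws rds describe-db-instances \
  --db-instance-identifier mydb-prod \
  --query "DBInstances[0].MultiAZ"
```

&lt;p&gt;Without &lt;code&gt;--apply-immediately&lt;/code&gt;, the change is deferred to the next maintenance window, which is often the safer choice for write-heavy production instances.&lt;/p&gt;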

&lt;p&gt;&lt;strong&gt;Multi-AZ Cluster&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;There may be an initial replica lag in a Multi-AZ cluster deployment. Replica lag is the difference in time between the latest transaction on the writer DB instance and the latest applied transaction on a reader DB instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can create a Multi-AZ DB cluster only with MySQL version 8.0.28 and higher 8.0 versions, and PostgreSQL version 13.4.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;At the time of writing, you can create Multi-AZ DB clusters only in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Ireland), Europe (Frankfurt), and Europe (Stockholm).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi-AZ DB clusters support only Provisioned IOPS storage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can't change a Single-AZ DB instance deployment or Multi-AZ DB instance deployment into a Multi-AZ DB cluster. As an alternative, you can restore a snapshot of either deployment to a Multi-AZ DB cluster.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You can't restore a Multi-AZ DB cluster snapshot to a Multi-AZ DB instance deployment or a Single-AZ deployment.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi-AZ DB clusters don't support modifications at the DB instance level; all modifications are made at the DB cluster level.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Recommendation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Multi-AZ deployment is best practice for production databases: it protects against faults and failures and keeps the architecture resilient in times of disaster. Production environments should always plan for Multi-AZ when deploying resources, especially databases. This not only keeps the database operating when an Availability Zone's data center goes down, but also improves accessibility for applications, since there are enough IOPS to serve the workload. Where cost is a challenge, instances can be right-sized so that running two copies remains affordable.&lt;/p&gt;

</description>
      <category>discuss</category>
    </item>
    <item>
      <title>test post</title>
      <dc:creator>Ogbeide Godstime Osemenkhian</dc:creator>
      <pubDate>Fri, 30 Dec 2022 22:41:55 +0000</pubDate>
      <link>https://dev.to/gtogbes/test-post-5560</link>
      <guid>https://dev.to/gtogbes/test-post-5560</guid>
      <description></description>
      <category>vite</category>
      <category>webdev</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
