<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mubarak Alhazan</title>
    <description>The latest articles on DEV Community by Mubarak Alhazan (@poly4).</description>
    <link>https://dev.to/poly4</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F804582%2Fb001d224-256d-4e29-bb8f-8c6639996e76.png</url>
      <title>DEV Community: Mubarak Alhazan</title>
      <link>https://dev.to/poly4</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/poly4"/>
    <language>en</language>
    <item>
      <title>How to Import Existing AWS Resources into the Serverless Framework (Using CloudFormation Import)</title>
      <dc:creator>Mubarak Alhazan</dc:creator>
      <pubDate>Sun, 29 Mar 2026 14:54:07 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-import-existing-aws-resources-into-the-serverless-framework-using-cloudformation-import-3l7i</link>
      <guid>https://dev.to/aws-builders/how-to-import-existing-aws-resources-into-the-serverless-framework-using-cloudformation-import-3l7i</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;While working on a project to fully transition an application into Infrastructure as Code (IaC), I encountered a challenge that proved more difficult than I expected: &lt;strong&gt;importing existing AWS resources into the Serverless Framework&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The application already had several AWS resources created manually over time, including multiple &lt;strong&gt;SQS queues&lt;/strong&gt; used by different services. Since our goal was to have all infrastructure managed through IaC, we needed those existing resources to be managed directly inside our &lt;code&gt;serverless.yml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;At first glance, this seemed straightforward. However, I discovered that &lt;strong&gt;the Serverless Framework does not provide a direct way to import existing resources&lt;/strong&gt; into a stack.&lt;/p&gt;

&lt;p&gt;Since Serverless deployments are built on top of &lt;strong&gt;AWS CloudFormation&lt;/strong&gt;, the only reliable way to achieve this is by using &lt;strong&gt;CloudFormation’s resource import functionality&lt;/strong&gt;. Even then, the process can be tricky, and if done incorrectly, it can cause &lt;strong&gt;stack drift or even delete existing resources&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Because I couldn't find a clear, practical guide that walks through this process from start to finish, I decided to document the approach that worked for us.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding The Challenge
&lt;/h2&gt;

&lt;p&gt;The Serverless Framework uses &lt;strong&gt;AWS CloudFormation&lt;/strong&gt; under the hood to create and manage infrastructure. When you deploy a Serverless application, the resources defined in &lt;code&gt;serverless.yml&lt;/code&gt; are translated into a CloudFormation template and applied to a stack.&lt;/p&gt;

&lt;p&gt;This workflow works well when CloudFormation is responsible for creating the resources from the beginning. However, the situation becomes more complicated when the resources already exist.&lt;/p&gt;

&lt;p&gt;In many real-world projects, infrastructure evolves. Some resources may have been created manually through the AWS console, while others may have been created by different deployment tools. As a result, these resources exist outside of the CloudFormation stack managed by Serverless.&lt;/p&gt;

&lt;p&gt;The Serverless Framework cannot manage these resources because CloudFormation isn't aware of them.&lt;/p&gt;

&lt;p&gt;To bring these resources under IaC management, they must first be &lt;strong&gt;imported into the CloudFormation stack&lt;/strong&gt; that Serverless controls.&lt;/p&gt;

&lt;p&gt;CloudFormation does provide a &lt;strong&gt;resource import feature&lt;/strong&gt;, but it has strict requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The resource configuration in the template must match the existing resource&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A valid &lt;strong&gt;resource identifier&lt;/strong&gt; must be provided&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No unrelated stack changes can occur during the import&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
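
&lt;p&gt;For &lt;strong&gt;SQS queues&lt;/strong&gt;, the resource identifier is the queue URL. For reference, a hand-written import would look roughly like the change set below (the stack name, logical ID, and queue URL are placeholders); the IaC Generator approach described later generates all of this for you:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws cloudformation create-change-set \
  --stack-name my-stack \
  --change-set-name import-queues \
  --change-set-type IMPORT \
  --resources-to-import '[{
      "ResourceType": "AWS::SQS::Queue",
      "LogicalResourceId": "MyQueue",
      "ResourceIdentifier": { "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue" }
    }]' \
  --template-body file://template.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;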

&lt;p&gt;In our case, the goal was to import several existing &lt;strong&gt;SQS queues&lt;/strong&gt; into our Serverless stack so they could be fully managed through &lt;code&gt;serverless.yml&lt;/code&gt; without recreating or disrupting the existing resources.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Approach that Worked
&lt;/h2&gt;

&lt;p&gt;After experimenting with different options, the most reliable approach we found was to use &lt;strong&gt;AWS CloudFormation’s Infrastructure as Code (IaC) Generator&lt;/strong&gt; to scan existing resources and generate a template that could be safely imported into our Serverless stack.&lt;/p&gt;

&lt;p&gt;This allowed us to import the resources &lt;strong&gt;without recreating them&lt;/strong&gt;, while ensuring the generated template accurately reflected the existing configuration.&lt;/p&gt;

&lt;p&gt;The process involved four main steps.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;Existing AWS Resources&lt;/span&gt;
        &lt;span class="s"&gt;↓&lt;/span&gt;
&lt;span class="s"&gt;IaC Generator Scan&lt;/span&gt;
        &lt;span class="s"&gt;↓&lt;/span&gt;
&lt;span class="s"&gt;Template Generation&lt;/span&gt;
        &lt;span class="s"&gt;↓&lt;/span&gt;
&lt;span class="s"&gt;CloudFormation Import&lt;/span&gt;
        &lt;span class="s"&gt;↓&lt;/span&gt;
&lt;span class="s"&gt;Managed in Serverless&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 1: Scan The Existing Resources
&lt;/h3&gt;

&lt;p&gt;The first step is to scan the AWS account for the resources you want to import.&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;CloudFormation console&lt;/strong&gt;, navigate to &lt;strong&gt;IaC Generator&lt;/strong&gt; and click &lt;strong&gt;Scan&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You’ll be presented with two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scan all resources&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scan specific resources&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since we only needed to import &lt;strong&gt;SQS queues&lt;/strong&gt;, we selected &lt;strong&gt;Scan specific resources&lt;/strong&gt; and chose the resource type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;AWS::SQS::Queue&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CloudFormation then scans the account and identifies all existing SQS queues.&lt;/p&gt;

&lt;p&gt;This scan allows AWS to capture the configuration of those resources so they can be represented in a CloudFormation template.&lt;/p&gt;
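
&lt;p&gt;The same scan can be run from the CLI. The commands below are a sketch based on the CloudFormation resource-scan API; check your CLI version for the exact flags:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Start an account scan and capture the scan ID
SCAN_ID=$(aws cloudformation start-resource-scan --query ResourceScanId --output text)

# Poll the scan status until it completes
aws cloudformation describe-resource-scan --resource-scan-id "$SCAN_ID"

# List only the discovered SQS queues
aws cloudformation list-resource-scan-resources \
  --resource-scan-id "$SCAN_ID" \
  --resource-type-prefix AWS::SQS::Queue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;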

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06lq9wn2lnxrcshlpqxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06lq9wn2lnxrcshlpqxv.png" alt="Selecting AWS::SQS::Queue when scanning resources using the CloudFormation IaC Generator." width="800" height="322"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Selecting AWS::SQS::Queue when scanning resources using the CloudFormation IaC Generator&lt;/em&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Generate a Template for the Existing Stack
&lt;/h3&gt;

&lt;p&gt;Once the scan is completed, CloudFormation displays a success message and prompts you to &lt;strong&gt;create a template&lt;/strong&gt; from the scanned resources.&lt;/p&gt;

&lt;p&gt;Click &lt;strong&gt;Create Template&lt;/strong&gt;, and you will be given two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start a new template&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Update a template for an existing stack&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since we wanted the resources to be managed by our existing Serverless stack, we selected:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Update a template for an existing stack&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Next, you’ll be asked to choose the &lt;strong&gt;stack you want to update&lt;/strong&gt;. In our case, this was the stack created by our Serverless deployment.&lt;/p&gt;

&lt;p&gt;After selecting the stack, CloudFormation asks for a few template details:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Template Name&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A name for the generated template.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deletion Policy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Defines what happens to the resource if it is removed from the stack.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DeletionPolicy: Retain
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures the resource &lt;strong&gt;is not deleted&lt;/strong&gt; if it is later removed from the CloudFormation stack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update Replace Policy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Defines what happens if CloudFormation needs to replace the resource during an update.&lt;/p&gt;

&lt;p&gt;Using &lt;code&gt;Retain&lt;/code&gt; here also helps prevent accidental deletion.&lt;/p&gt;
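
&lt;p&gt;In the generated template, both policies sit directly on each imported resource. Using the queue from this walkthrough as an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;Resources:
  SQSQueueDemoimportqueue:
    Type: AWS::SQS::Queue
    DeletionPolicy: Retain
    UpdateReplacePolicy: Retain
    Properties:
      QueueName: demo-import-queue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;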

&lt;p&gt;After providing these details, CloudFormation shows the list of resources that were discovered during the scan.&lt;/p&gt;

&lt;p&gt;At this stage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Resources &lt;strong&gt;already managed by a stack&lt;/strong&gt; cannot be selected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resources &lt;strong&gt;not yet managed by any stack&lt;/strong&gt; can be selected for import.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can then choose the specific resources you want to add to the existing stack.&lt;/p&gt;

&lt;p&gt;In the next step, CloudFormation checks if the selected resources require &lt;strong&gt;related resources&lt;/strong&gt;. Some AWS services require dependent resources to be imported together, but in the case of SQS queues, there were no additional dependencies required.&lt;/p&gt;

&lt;p&gt;Finally, you are given a chance to &lt;strong&gt;review the generated template&lt;/strong&gt; before creating it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4n9a1kpq2u3nfv0y0yty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4n9a1kpq2u3nfv0y0yty.png" alt="Choosing to update the template for an existing stack." width="800" height="300"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Choosing to update the template for an existing stack.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6ankmlz5naksy9cnrb6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6ankmlz5naksy9cnrb6.png" alt="Selecting the SQS queues to add to the existing stack." width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Selecting the SQS queues to add to the existing stack.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Import the Resources
&lt;/h3&gt;

&lt;p&gt;After the template is created, CloudFormation allows you to &lt;strong&gt;import the resources directly into the selected stack&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Because the template was generated from the scanned resources and tied to the existing stack, the import process becomes much smoother.&lt;/p&gt;

&lt;p&gt;There is no need to manually create a change set or define logical resource IDs. The IaC Generator already handles this as part of the template generation process.&lt;/p&gt;

&lt;p&gt;Once the import is executed successfully, the resources appear inside the &lt;strong&gt;CloudFormation Resources tab&lt;/strong&gt; for the stack.&lt;/p&gt;

&lt;p&gt;At this point, the imported resources become &lt;strong&gt;fully managed by the stack&lt;/strong&gt;, which means future deployments can safely reference them from &lt;code&gt;serverless.yml&lt;/code&gt;.&lt;/p&gt;
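
&lt;p&gt;You can confirm this from the CLI as well (the stack name is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# The imported queue should now appear among the stack's resources
aws cloudformation describe-stack-resources --stack-name my-service-dev

# Optionally, run drift detection to confirm the template matches reality
aws cloudformation detect-stack-drift --stack-name my-service-dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;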

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5dlygifkv29l6xu0x0e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn5dlygifkv29l6xu0x0e.png" alt="The imported SQS queue now appears as a managed resource in the CloudFormation stack" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The imported SQS queue now appears as a managed resource in the CloudFormation stack.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Define the Resources in &lt;code&gt;serverless.yml&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;Once the resources are imported successfully, the next step is to define them inside &lt;code&gt;serverless.yml&lt;/code&gt; so that future deployments continue to manage them correctly.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;SQSQueueDemoimportqueue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::SQS::Queue&lt;/span&gt;
      &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;QueueName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo-import-queue&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To keep the configuration clean and reusable across environments, we defined queue names inside the &lt;code&gt;custom&lt;/code&gt; block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;custom&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;queue_suffix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;dev&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
    &lt;span class="na"&gt;staging&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;_staging"&lt;/span&gt;
    &lt;span class="na"&gt;prod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;_prod"&lt;/span&gt;

  &lt;span class="na"&gt;queues&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;email_queue&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;demo-queue${self:custom.queue_suffix.${sls:stage}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach allows the same resource definition to work across multiple environments while keeping naming consistent.&lt;/p&gt;
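
&lt;p&gt;Putting the two snippets together, the resource definition can then reference the &lt;code&gt;custom&lt;/code&gt; values. Note that the logical ID must stay identical to the one used during the import; otherwise, CloudFormation will treat it as a brand-new resource:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;resources:
  Resources:
    SQSQueueDemoimportqueue:
      Type: AWS::SQS::Queue
      DeletionPolicy: Retain
      Properties:
        QueueName: ${self:custom.queues.email_queue.name}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;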

&lt;h2&gt;
  
  
  Things to Watch Out For When Importing Resources
&lt;/h2&gt;

&lt;p&gt;While the import process is fairly straightforward when using the IaC Generator, there are a few things to keep in mind.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resources Already Managed by Another Stack&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If a resource is already associated with a CloudFormation stack, it cannot be imported into another stack. During the template creation step, CloudFormation will indicate which resources are already managed and prevent them from being selected.&lt;/p&gt;

&lt;p&gt;If you need to move a resource between stacks, ensure the original stack removes the resource safely before attempting the import.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use a Safe Deletion Policy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When generating the template, it is recommended to set:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;DeletionPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Retain&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prevents the resource from being deleted if it is later removed from the stack.&lt;/p&gt;

&lt;p&gt;For stateful resources like SQS queues, this provides an additional safety layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ensure the Resource Configuration Matches&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;CloudFormation imports require the template configuration to match the existing resource configuration.&lt;/p&gt;

&lt;p&gt;Using the IaC Generator helps avoid this issue since it generates the template directly from the existing resources.&lt;/p&gt;
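
&lt;p&gt;If you ever need to write the template by hand instead, compare it against the live configuration first. For SQS, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Resolve the queue URL, then dump its live attributes
QUEUE_URL=$(aws sqs get-queue-url --queue-name demo-import-queue --query QueueUrl --output text)
aws sqs get-queue-attributes --queue-url "$QUEUE_URL" --attribute-names All
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;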

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Importing existing AWS resources into the Serverless Framework isn’t always straightforward because Serverless relies on &lt;strong&gt;CloudFormation&lt;/strong&gt; to manage infrastructure. This means that existing resources must first be imported into the underlying CloudFormation stack before they can be managed through &lt;code&gt;serverless.yml&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;By using the &lt;strong&gt;IaC Generator&lt;/strong&gt; to scan resources and generate a template for an existing stack, we were able to safely import our SQS queues without recreating them. Once imported, defining them in &lt;code&gt;serverless.yml&lt;/code&gt; allows Serverless to manage them as part of future deployments.&lt;/p&gt;

&lt;p&gt;Although the process requires a few careful steps, it provides a reliable way to bring existing infrastructure under Infrastructure as Code.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Webflow CMS Behind CloudFront: Serving Dynamic and Static Pages on the Same Domain</title>
      <dc:creator>Mubarak Alhazan</dc:creator>
      <pubDate>Sun, 18 Jan 2026 16:12:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/webflow-cms-behind-cloudfront-serving-dynamic-and-static-pages-on-the-same-domain-2k7m</link>
      <guid>https://dev.to/aws-builders/webflow-cms-behind-cloudfront-serving-dynamic-and-static-pages-on-the-same-domain-2k7m</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;A common way to deploy a frontend on AWS is to use &lt;strong&gt;S3, CloudFront, and Route 53&lt;/strong&gt;. It’s simple and cost-effective. Many teams begin here and successfully run this setup for years.&lt;/p&gt;

&lt;p&gt;It’s also very common for teams to eventually move some &lt;strong&gt;public-facing pages&lt;/strong&gt; (such as the homepage or blog) to a CMS like &lt;strong&gt;Webflow&lt;/strong&gt;. This shift is usually driven by practical reasons: marketers want to publish content without engineering involvement, SEO workflows become easier, and design iterations move faster.&lt;/p&gt;

&lt;p&gt;The challenge starts when you want to do &lt;strong&gt;both&lt;/strong&gt;. You might want:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="http://company.com" rel="noopener noreferrer"&gt;&lt;code&gt;company.com&lt;/code&gt;&lt;/a&gt; and &lt;code&gt;/blogs&lt;/code&gt; to be managed in Webflow&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Other pages (especially dynamic ones like &lt;code&gt;/dashboard/*&lt;/code&gt;) to remain on S3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Everything to live under the &lt;strong&gt;same domain&lt;/strong&gt;, without redirects or subdomains&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point, the problem is &lt;strong&gt;traffic routing&lt;/strong&gt;. In this article, I’ll walk through how I solved this problem in a recent deployment.&lt;/p&gt;

&lt;p&gt;The approach allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Keep &lt;strong&gt;CloudFront as the single entry point&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Serve Webflow pages and S3-backed pages from the &lt;strong&gt;same apex domain&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Handle Webflow domain verification without permanently routing traffic away from CloudFront&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This setup works reliably in production and avoids several pitfalls that aren’t clearly documented elsewhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture That Works
&lt;/h2&gt;

&lt;p&gt;At a high level, the solution is surprisingly simple once you adopt the right mental model.&lt;/p&gt;

&lt;p&gt;Instead of trying to move routing logic into Webflow (which might require an enterprise plan) or introducing a new proxy layer, we keep &lt;strong&gt;CloudFront as the single entry point&lt;/strong&gt; for the domain and let it do what it’s already designed to do: &lt;strong&gt;route requests to different origins based on path patterns&lt;/strong&gt;. This avoids extra proxy layer infrastructure and builds directly on an architecture many teams already have in production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Core Idea is:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;CloudFront remains the single edge&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Webflow and S3 are treated as origins&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Path-based behaviours decide where each request goes&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Requests flow like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Route53 → CloudFront
        ├── /dashboard/&lt;span class="k"&gt;*&lt;/span&gt; → S3
        └── /&lt;span class="k"&gt;*&lt;/span&gt;             → Webflow
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the browser’s perspective, everything is still coming from &lt;a href="http://company.com" rel="noopener noreferrer"&gt;&lt;code&gt;company.com&lt;/code&gt;&lt;/a&gt;. There are no redirects, no subdomains, and no visible boundary between Webflow-managed pages and S3-backed pages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Webflow as an Origin (and the Verification Gotcha)
&lt;/h2&gt;

&lt;p&gt;Treating Webflow as an origin in CloudFront is straightforward from an infrastructure perspective. You create a custom origin pointing to your Webflow-hosted site, just like you would for an ALB or any external service. However, if that’s all you do, you’ll likely run into a &lt;strong&gt;502 Bad Gateway&lt;/strong&gt; error on the paths meant to route to Webflow.&lt;/p&gt;

&lt;p&gt;This happens because &lt;a href="https://help.webflow.com/hc/en-us/articles/33961239562387-Manually-connect-a-custom-domain#how-to-connect-your-custom-domain" rel="noopener noreferrer"&gt;&lt;strong&gt;Webflow requires domain ownership&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;verification before it allows publishing&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The way Webflow handles this verification can be confusing. Before a domain can be published, Webflow asks you to add DNS records, typically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A &lt;strong&gt;TXT record&lt;/strong&gt; for ownership verification&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An &lt;strong&gt;A record or CNAME&lt;/strong&gt; that points traffic directly to Webflow&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This works well when Webflow is intended to be the final destination for traffic. In this architecture, however, &lt;strong&gt;DNS must point to CloudFront&lt;/strong&gt;, not Webflow.&lt;/p&gt;

&lt;p&gt;This creates a fundamental mismatch:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Webflow expects DNS to point directly to them&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;CloudFront must stay in front to enable path-based routing&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key to resolving this is understanding that the Webflow UI is mixing two very different concerns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Verification is a control-plane requirement&lt;/strong&gt;: it exists to prove domain ownership and enable publishing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Traffic routing is a data-plane concern&lt;/strong&gt;: it determines where live requests are actually served from&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Webflow’s UI strongly suggests that both the TXT record and the A/CNAME record must continuously point to Webflow. In reality, &lt;strong&gt;only the TXT record is required for ongoing verification&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The practical workaround is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Initially, point the &lt;strong&gt;TXT, A, and CNAME&lt;/strong&gt; records to Webflow so verification can succeed&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once the domain is verified and publishing is enabled:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep the &lt;strong&gt;TXT&lt;/strong&gt; record in place for continuous ownership proof&lt;/li&gt;
&lt;li&gt;Point the &lt;strong&gt;A and CNAME&lt;/strong&gt; records back to CloudFront&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;At that point, CloudFront can safely forward traffic to Webflow as an origin, since the domain is already verified.&lt;/p&gt;

&lt;p&gt;In the next section, we’ll walk through how this is implemented step by step, without breaking production traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-Step Implementation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Keep DNS Pointing to CloudFront
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ensure your apex domain (&lt;a href="http://company.com" rel="noopener noreferrer"&gt;&lt;code&gt;company.com&lt;/code&gt;&lt;/a&gt;) and &lt;code&gt;www&lt;/code&gt; (if used) point to &lt;strong&gt;CloudFront&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Do &lt;strong&gt;not&lt;/strong&gt; remove or bypass CloudFront as part of the final setup&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This remains true before and after verification.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Add Webflow as a Custom Origin in CloudFront
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create a new &lt;strong&gt;custom origin&lt;/strong&gt; in your CloudFront distribution&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Origin domain:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;your-site.webflow.io
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Protocol policy: HTTPS only&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Origin headers: none required&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this point, requests routed to Webflow will still fail until verification is completed.&lt;/p&gt;
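
&lt;p&gt;In CloudFormation terms, the origin looks roughly like this (the origin ID is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;Origins:
  - Id: WebflowOrigin
    DomainName: your-site.webflow.io
    CustomOriginConfig:
      OriginProtocolPolicy: https-only
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;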

&lt;h3&gt;
  
  
  3. Configure Path-Based Behaviours
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Default behaviour (&lt;code&gt;*&lt;/code&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Origin: &lt;strong&gt;Webflow&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Cache policy: CachingDisabled (recommended for CMS content)&lt;/li&gt;
&lt;li&gt;Origin request policy: AllViewer (or equivalent)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Specific behaviour (&lt;code&gt;/dashboard/*&lt;/code&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Origin: &lt;strong&gt;S3&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Cache policy: As per your existing setup&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Ordering matters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;More specific paths (&lt;code&gt;/dashboard/*&lt;/code&gt;) must appear above the default (&lt;code&gt;*&lt;/code&gt;) behaviour.&lt;/li&gt;
&lt;/ul&gt;
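
&lt;p&gt;As a CloudFormation sketch of the two behaviours (the origin IDs are illustrative; the policy IDs shown are the AWS managed &lt;code&gt;CachingOptimized&lt;/code&gt;, &lt;code&gt;CachingDisabled&lt;/code&gt;, and &lt;code&gt;AllViewer&lt;/code&gt; policies):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;CacheBehaviors:
  - PathPattern: /dashboard/*          # evaluated before the default behaviour
    TargetOriginId: S3Origin
    ViewerProtocolPolicy: redirect-to-https
    CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6        # CachingOptimized
DefaultCacheBehavior:                  # everything else goes to Webflow
  TargetOriginId: WebflowOrigin
  ViewerProtocolPolicy: redirect-to-https
  CachePolicyId: 4135ea2d-6df8-44a3-9df3-4b5a84be39ad          # CachingDisabled
  OriginRequestPolicyId: 216adef6-5c7f-47e4-b989-5492eafa07d3  # AllViewer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;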

&lt;h3&gt;
  
  
  4. Temporarily Point DNS to Webflow for Verification
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;In Webflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add your custom domain (&lt;a href="http://company.com" rel="noopener noreferrer"&gt;&lt;code&gt;company.com&lt;/code&gt;&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;In Route53:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add the &lt;strong&gt;TXT record&lt;/strong&gt; provided by Webflow (&lt;code&gt;_webflow&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Temporarily update:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apex A record → Webflow IP(s)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;www&lt;/code&gt; CNAME → &lt;a href="http://cdn.webflow.com" rel="noopener noreferrer"&gt;&lt;code&gt;cdn.webflow.com&lt;/code&gt;&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Wait for verification to complete in Webflow&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This step is temporary and only required once.&lt;/p&gt;
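
&lt;p&gt;Before waiting on the Webflow UI, you can confirm the records have propagated (the domain is a placeholder):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# The ownership record Webflow checks
dig +short TXT _webflow.company.com

# Where the apex currently resolves
dig +short A company.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;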

&lt;h3&gt;
  
  
  5. Revert DNS to CloudFront
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Restore DNS records:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apex A record → CloudFront&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;www&lt;/code&gt; CNAME → CloudFront&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Keep the &lt;strong&gt;TXT record&lt;/strong&gt; in place&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;At this point:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;DNS routes traffic to CloudFront&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Webflow remains verified and publishable&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  6. Publish the Site in Webflow
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Set the custom domain as the default&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Publish the site&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;No further DNS changes are required&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CloudFront now routes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;/*&lt;/code&gt; → Webflow&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;/dashboard/*&lt;/code&gt; → S3&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All under the same domain.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The key insight in this setup is simple: &lt;strong&gt;CloudFront remains the single edge&lt;/strong&gt;, and everything else becomes an origin behind it.&lt;/p&gt;

&lt;p&gt;Webflow does not need to own DNS long-term to serve content. It only needs to verify the domain once. After that, CloudFront can safely sit in front, handle routing, and forward requests to Webflow exactly like it would to any other backend.&lt;/p&gt;

&lt;p&gt;You may still see Webflow UI warnings like &lt;em&gt;“update required”&lt;/em&gt; after DNS is reverted to CloudFront. These warnings are harmless, as long as the domain is already verified and published.&lt;/p&gt;

&lt;p&gt;With this model, you get the flexibility of mixing static assets, dynamic frontends, and managed platforms like Webflow, while still retaining CloudFront.&lt;/p&gt;

&lt;p&gt;That’s the architecture that actually works.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Thank you for reading&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You can follow me on &lt;a href="https://www.linkedin.com/in/alhazan-mubarak/" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt; and subscribe to my &lt;a href="https://www.youtube.com/@poly4" rel="noopener noreferrer"&gt;&lt;strong&gt;YouTube Channel&lt;/strong&gt;&lt;/a&gt;, where I share more valuable content. Also, let me know your thoughts in the comments section.&lt;/p&gt;

</description>
      <category>webflow</category>
      <category>cloudfront</category>
      <category>dns</category>
    </item>
    <item>
      <title>Cooking Without Burning: My DevOps Doings in the Past Few Years</title>
      <dc:creator>Mubarak Alhazan</dc:creator>
      <pubDate>Mon, 20 Oct 2025 08:49:29 +0000</pubDate>
      <link>https://dev.to/aws-builders/cooking-without-burning-my-devops-doings-in-the-past-few-years-34</link>
      <guid>https://dev.to/aws-builders/cooking-without-burning-my-devops-doings-in-the-past-few-years-34</guid>
      <description>&lt;p&gt;DevOps, much like cooking, is all about balance. Too little automation, and the process stays raw; too much change without control, and something catches fire. Over the past few years, I’ve spent countless hours in the kitchen, experimenting with tools, tweaking workflows, and sometimes cleaning up after the inevitable smoke. Each deployment, like a dish, taught me a lesson: precision, timing, and preparation make all the difference.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zckwbp3mejqsdnws3bk.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zckwbp3mejqsdnws3bk.webp" alt="Cooking" width="512" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  From Code to Cloud
&lt;/h2&gt;

&lt;p&gt;My journey into DevOps began after &lt;strong&gt;OSCAFEST 2022&lt;/strong&gt;, where I attended multiple sessions on DevOps and cloud-native technologies. Listening to the speakers talk about automation, scalability, and continuous delivery opened up a new world for me. I became fascinated by what happens beyond the code: how software is deployed, monitored, and kept running smoothly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwy0ebhp3ts4fryakjra.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwy0ebhp3ts4fryakjra.jpg" alt="L-R: Michael Balli, Nader Dabit, Paul Ibeabuchi and I at OSCAFEST 2022" width="600" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;L-R: Michael Balli, Nader Dabit, Paul Ibeabuchi and I at OSCAFEST 2022&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;At the time, I was a frontend engineer. Inspired by what I learned at OSCAFEST, I decided to take the first step by deploying the frontend applications I built at work. That experience sparked a deeper interest in understanding the full deployment process, and soon I found myself exploring other aspects of DevOps.&lt;/p&gt;

&lt;p&gt;Since then, I’ve worked on multiple DevOps projects that have shaped my perspective on software delivery and reliability. Each project presented unique challenges, ranging from automating complex AWS infrastructures to managing deployments on bare servers.&lt;/p&gt;

&lt;p&gt;In this article, I’m shining the spotlight on some of the most interesting projects that tested my problem-solving skills, deepened my understanding of infrastructure, and helped me appreciate the craft of building systems that work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating AWS with Terraform
&lt;/h2&gt;

&lt;p&gt;At one time, our AWS infrastructure at my current company was managed entirely through the console: VPCs, EC2 instances, load balancers, security groups, and more were all created and updated manually. It worked, but it was messy. There were inconsistencies across environments, occasional missing configurations, and the constant risk of someone making a change in production that wasn’t mirrored elsewhere.&lt;/p&gt;

&lt;p&gt;That pain led us to adopt Terraform for Infrastructure as Code. The first step was to replicate our existing setup so we could version and reproduce it easily. Afterwards, I began organising it into modular components such as networking, frontend and backend services, each reusable across environments.&lt;/p&gt;

&lt;p&gt;This modular approach completely changed how we handled deployments. Instead of manually provisioning resources, we could spin up or tear down entire environments with a single command. It eliminated configuration drift and brought consistency across development, staging, and production environments.&lt;/p&gt;

&lt;p&gt;To make testing faster and safer, I integrated LocalStack, a local AWS emulator. This allowed us to validate Terraform changes and experiment confidently before applying them to live resources.&lt;/p&gt;

&lt;p&gt;The result was a leaner, more predictable workflow that saved time, reduced human error, and gave us consistent, reproducible environments across the board.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploying on Bare Server
&lt;/h2&gt;

&lt;p&gt;When I joined this particular team, the company was facing a tough challenge: skyrocketing cloud costs. The dollar-to-naira exchange rate had become a major burden, and even after applying several AWS cost-optimisation strategies, we still weren’t hitting the company’s cost targets. That reality pushed us to make a bold decision to move away from AWS and deploy on a local cloud provider that billed in naira. It meant giving up the scalability and managed services AWS offered, but because our business operated in a B2B model with predictable growth, the trade-off was viable.&lt;/p&gt;

&lt;p&gt;With only raw servers available, I had to design a production-ready deployment that was secure, automated, and maintainable. We ran two main layers: the backend and the database, each on separate servers.&lt;/p&gt;

&lt;p&gt;For the &lt;strong&gt;backend layer&lt;/strong&gt;, I containerised all services using Docker to ensure consistency and easier updates. I configured &lt;strong&gt;Nginx&lt;/strong&gt; as a reverse proxy to route traffic across the microservices and set up &lt;strong&gt;SSL using Let’s Encrypt&lt;/strong&gt;, which provided free certificate issuance and automatic renewal (free is important, given our cost-saving goals). You can read the detailed SSL implementation in &lt;a href="https://poly4.hashnode.dev/securing-your-nginx-container-with-lets-encrypt-ssl-certificates" rel="noopener noreferrer"&gt;this article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;To &lt;strong&gt;automate deployments&lt;/strong&gt;, I created a GitHub Actions pipeline that built Docker images, pushed them to private &lt;strong&gt;Amazon ECR&lt;/strong&gt; (which was practically free for our usage), and redeployed them on the server whenever a new release was made. I document the complete workflow in this &lt;a href="https://poly4.hashnode.dev/automate-docker-deployments-to-your-server-using-github-actions-and-amazon-ecr" rel="noopener noreferrer"&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Of course, deployment alone wasn’t enough. We needed &lt;strong&gt;monitoring&lt;/strong&gt;, something AWS CloudWatch had previously handled for us. This time, I manually set up &lt;strong&gt;Prometheus&lt;/strong&gt; to track database performance, server metrics, and resource utilisation. The metrics were visualised in &lt;strong&gt;Grafana dashboards&lt;/strong&gt;, and I configured alerts to trigger Slack and email notifications when thresholds were breached.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;database reliability&lt;/strong&gt;, we used &lt;strong&gt;Acronis&lt;/strong&gt; for daily backups. This required installing backup agents on the database server and syncing data to the Acronis dashboard.&lt;/p&gt;

&lt;p&gt;On the &lt;strong&gt;security side&lt;/strong&gt;, I implemented &lt;strong&gt;least-privilege principles&lt;/strong&gt; at both the security group and server firewall levels. This ensured that access to the app and database servers was tightly controlled and auditable.&lt;/p&gt;

&lt;p&gt;In the end, we were able to &lt;strong&gt;cut infrastructure costs by more than half&lt;/strong&gt;, with the added advantage of paying locally in naira, protecting the business from foreign exchange volatility.&lt;/p&gt;

&lt;p&gt;More importantly, the experience reminded me that &lt;strong&gt;no solution fits all contexts&lt;/strong&gt;. AWS is usually my go-to platform because of its maturity and range of services, but this project forced me to look in a different direction. It was like a cook realising that not every dish needs the same spice; sometimes you need to reach for something unexpected to get the right flavour 😅. This project was my &lt;strong&gt;ghetto DevOps&lt;/strong&gt; moment; it was hands-on, challenging, but full of learning.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If you’re a Nigerian business looking for a cloud provider that bills in naira, I’d genuinely recommend &lt;a href="https://nobus.io/" rel="noopener noreferrer"&gt;&lt;strong&gt;Nobus&lt;/strong&gt;&lt;/a&gt;; their support team is excellent&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Migrating Infrastructure Across AWS Accounts
&lt;/h2&gt;

&lt;p&gt;While working at a consultancy firm, I was assigned to a project that required moving a client’s entire cloud deployment from our company’s AWS account to the client’s own AWS account, all with &lt;strong&gt;minimal downtime&lt;/strong&gt;. The migration was part of a new contract. The main challenge was that much of the infrastructure hadn’t been fully codified with Infrastructure as Code (IaC), which meant every migration step had to be carefully planned and executed.&lt;/p&gt;

&lt;p&gt;We began with the &lt;strong&gt;database layer&lt;/strong&gt;. The client’s data was stored in &lt;strong&gt;DynamoDB&lt;/strong&gt;, and we decided to use the &lt;strong&gt;S3 export-import method&lt;/strong&gt; for the migration. This approach was cost-effective and efficient for the dataset size we were dealing with. To avoid disrupting the live environment, we scheduled the migration &lt;strong&gt;outside active business hours&lt;/strong&gt;, and the entire process was completed smoothly.&lt;/p&gt;

&lt;p&gt;Next was the &lt;strong&gt;backend layer&lt;/strong&gt;, which ran on &lt;strong&gt;AWS Lambda&lt;/strong&gt;. For this, we wrote a &lt;strong&gt;Python script using Boto3&lt;/strong&gt; to automate copying function configurations and code from the source account to the destination account.&lt;/p&gt;

&lt;p&gt;Then came the &lt;strong&gt;frontend migration&lt;/strong&gt;, which turned out to be the most challenging part of the entire process. The frontend stack combined &lt;strong&gt;S3 (for hosting), CloudFront (for distribution), and Route 53 (for DNS management)&lt;/strong&gt;, but I couldn’t find a clear, end-to-end guide on migrating this exact stack. So, I had to piece together best practices from multiple AWS resources, carefully sequencing the migration of S3 buckets, CloudFront distributions, and DNS records to prevent service interruption. When the migration was finally complete, I documented the entire process in an article so the next person would find it easier. You can read that detailed walkthrough &lt;a href="https://poly4.hashnode.dev/migrating-frontend-deployment-across-aws-accounts" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documenting the DevOps Process
&lt;/h2&gt;

&lt;p&gt;Across every team I’ve worked with, one thing that has remained consistent is my commitment to &lt;strong&gt;documentation&lt;/strong&gt;. While many engineers see documentation as an afterthought, I’ve always treated it as a core part of engineering. It is a way to make complex systems understandable and sustainable. Over time, I’ve become known as the person who ensures things are written down, organised, and easy to follow.&lt;/p&gt;

&lt;p&gt;My motivation has always been &lt;strong&gt;easy onboarding, knowledge sharing, and reducing dependency on any single engineer&lt;/strong&gt;. I’ve seen how teams can slow down or lose context when crucial setup steps or troubleshooting processes live only in someone’s head. Good documentation turns individual know-how into collective knowledge.&lt;/p&gt;

&lt;p&gt;My approach varies based on the type of content. For technical references that evolve frequently, such as configuration steps, I prefer &lt;strong&gt;GitHub README files&lt;/strong&gt;, where updates can easily follow version control. For broader, long-form guides like deployment workflows, architecture decisions, or troubleshooting procedures, I use &lt;strong&gt;Confluence&lt;/strong&gt;, which provides better structure and discoverability for team-wide access.&lt;/p&gt;

&lt;p&gt;Documentation is something I do &lt;strong&gt;for myself and for others&lt;/strong&gt;. It helps me think clearly, ensures the next person can build faster, and makes sure that when systems scale, the knowledge behind them scales too.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reflections: Growth Beyond the Pipeline
&lt;/h2&gt;

&lt;p&gt;Looking back at these projects, I see more than just deployments, configurations, or scripts; I see growth. Each challenge pushed me to think beyond technical correctness and focus on building systems that serve real business needs; solutions that are resilient, cost-conscious, and adaptable to change.&lt;/p&gt;

&lt;p&gt;If there’s one lesson I’ve learned and want to leave you with, it’s &lt;strong&gt;&lt;em&gt;that DevOps isn’t about fancy tools; it’s about making sure the kitchen runs smoothly even when no one’s watching the stove&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Thank you for Reading&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You can follow me on &lt;a href="https://www.linkedin.com/in/alhazan-mubarak/" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt; and subscribe to my &lt;a href="https://www.youtube.com/@poly4" rel="noopener noreferrer"&gt;&lt;strong&gt;YouTube Channel&lt;/strong&gt;&lt;/a&gt;, where I share more valuable content.&lt;/p&gt;

&lt;p&gt;What’s a project you’re most proud of or learned the most from? I’d love to hear from you in the comments.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>career</category>
    </item>
    <item>
      <title>Building a Subtitle Service for Your App Using AWS Transcribe</title>
      <dc:creator>Mubarak Alhazan</dc:creator>
      <pubDate>Mon, 02 Jun 2025 11:20:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-a-subtitle-service-for-your-app-using-aws-transcribe-15h2</link>
      <guid>https://dev.to/aws-builders/building-a-subtitle-service-for-your-app-using-aws-transcribe-15h2</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Over the past two years, I've worked on three different video learning platforms. A recurring requirement across these projects has been &lt;strong&gt;subtitles,&lt;/strong&gt; regardless of the target audience, tech stack, or business goal. Subtitles are essential for improving accessibility, accommodating users in sound-sensitive environments, and enhancing comprehension.&lt;/p&gt;

&lt;p&gt;However, integrating subtitles at scale isn't as straightforward as toggling a switch. You need a system that can reliably handle transcription, process different media formats, and keep the architecture maintainable.&lt;/p&gt;

&lt;p&gt;In this article, I’ll walk you through a &lt;strong&gt;reusable subtitle service architecture&lt;/strong&gt; built using &lt;strong&gt;Amazon Transcribe&lt;/strong&gt;. By the end, you’ll know how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Automatically transcribe video/audio content using Amazon Transcribe&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Store and serve subtitle files securely from S3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build a reusable subtitle service you can plug into any of your applications&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Whether you're developing an e-learning app, a video streaming platform, or something in between, this approach will save you time and improve the user experience with minimal effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding AWS Transcribe and the Project Goal
&lt;/h2&gt;

&lt;p&gt;Amazon Transcribe is a fully managed automatic speech recognition (ASR) service that makes it easy to add real-time or batch transcription to your applications. It’s capable of converting speech from audio or video files into text, supporting a wide range of languages and input formats such as &lt;code&gt;.mp3&lt;/code&gt;, &lt;code&gt;.mp4&lt;/code&gt;, and more.&lt;/p&gt;

&lt;p&gt;Transcribe outputs its results in a structured &lt;code&gt;.json&lt;/code&gt; format, and with a little processing, you can convert this into common subtitle formats such as &lt;code&gt;.vtt&lt;/code&gt; or &lt;code&gt;.srt&lt;/code&gt;. This flexibility makes it a solid choice for building custom subtitle pipelines.&lt;/p&gt;
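&lt;p&gt;To make the conversion concrete, here is a minimal sketch (not the library used later in this article) that turns a simplified Transcribe result into WebVTT cues. It assumes only a &lt;code&gt;results.items&lt;/code&gt; array with &lt;code&gt;start_time&lt;/code&gt;, &lt;code&gt;end_time&lt;/code&gt;, and one alternative per item; the real output contains more fields, and production converters group words into readable cues rather than emitting one cue per word.&lt;/p&gt;

```javascript
// Minimal sketch: convert a simplified Transcribe result into WebVTT.
// Assumption: results.items entries carry start_time/end_time (as strings)
// and a single alternative each; punctuation items have no timestamps.
function toTimestamp(seconds) {
  const s = Number(seconds);
  const hh = String(Math.floor(s / 3600)).padStart(2, "0");
  const mm = String(Math.floor((s % 3600) / 60)).padStart(2, "0");
  const ss = (s % 60).toFixed(3).padStart(6, "0");
  return `${hh}:${mm}:${ss}`;
}

function toVtt(transcript) {
  // Keep only spoken words; punctuation items carry no timing info.
  const words = transcript.results.items.filter(
    (item) => item.type === "pronunciation"
  );
  const lines = ["WEBVTT", ""];
  // One cue per word keeps the sketch short; real converters group words.
  for (const word of words) {
    lines.push(`${toTimestamp(word.start_time)} --> ${toTimestamp(word.end_time)}`);
    lines.push(word.alternatives[0].content);
    lines.push("");
  }
  return lines.join("\n");
}
```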

&lt;p&gt;The goal of this article is to build a &lt;strong&gt;Node.js-based subtitle service&lt;/strong&gt; that does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Accepts a video or audio file URL stored in Amazon S3&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uses AWS Transcribe to generate a transcript&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Converts the transcription into a &lt;code&gt;.vtt&lt;/code&gt; subtitle file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uploads the subtitle file back to S3 with public read access&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Returns a public URL for use in video players&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This modular flow ensures reusability, making it easy to integrate into any application that handles media playback.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up Your AWS Environment
&lt;/h2&gt;

&lt;p&gt;Before diving into code, you need to set up a few AWS resources. These are essential for securely storing your media files, triggering transcription jobs, and handling the resulting subtitle files.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;S3 Bucket Setup&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Create an S3 bucket where you’ll:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* Upload input media files (video/audio)

* Store output subtitle files (`.vtt` or `.srt`)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; The subtitle service we'll build will work even if your input media files are stored in a different bucket or even a different region. We’ll show this in a later section.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Suggested folder structure:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight css"&gt;&lt;code&gt;&lt;span class="nt"&gt;your-subtitle-service-bucket&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nt"&gt;inputs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="nt"&gt;my-video&lt;/span&gt;&lt;span class="nc"&gt;.mp4&lt;/span&gt;
&lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="nt"&gt;outputs&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
    &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="nt"&gt;my-video&lt;/span&gt;&lt;span class="nc"&gt;.vtt&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Bucket Policy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You’ll need to attach a bucket policy to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Allow &lt;strong&gt;Amazon Transcribe&lt;/strong&gt; to write subtitle files to your bucket&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optionally allow &lt;strong&gt;public read access&lt;/strong&gt; to subtitle files (or serve them via signed URLs)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Sample Bucket Policy:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"PublicReadGetObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3:GetObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::transcription-subtitles-files/*"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"AllowTranscribePutObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"transcribe.amazonaws.com"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3:PutObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::transcription-subtitles-files/*"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This policy permits Amazon Transcribe to write the output and optionally expose the files publicly (which you can adjust based on your use case).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CORS Configuration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you plan to use the subtitle files in a frontend app (e.g., loading them into an HTML5 &lt;code&gt;&amp;lt;video&amp;gt;&lt;/code&gt; tag), you’ll also need to enable &lt;strong&gt;CORS (Cross-Origin Resource Sharing)&lt;/strong&gt; on your bucket. Without this, the browser will block requests from your frontend.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample CORS Configuration:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"AllowedHeaders"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"AllowedMethods"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"HEAD"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GET"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"PUT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"POST"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DELETE"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"AllowedOrigins"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ExposeHeaders"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"ETag"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration allows your frontend (from any domain) to fetch subtitle files without CORS errors. You can adjust &lt;code&gt;AllowedOrigins&lt;/code&gt; to restrict access to your app's domain for more control.&lt;/p&gt;
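&lt;p&gt;Once CORS is in place, the subtitle file can be attached to a player with a &lt;code&gt;&amp;lt;track&amp;gt;&lt;/code&gt; element. A small helper like the sketch below builds that markup from the public URL your service returns; the URL and attribute values here are illustrative.&lt;/p&gt;

```javascript
// Sketch: build the <track> markup for an HTML5 video player.
// The subtitle URL below is illustrative; use the public URL
// returned by your subtitle service.
function subtitleTrackHtml(vttUrl, srclang = "en", label = "English") {
  return `<track kind="subtitles" src="${vttUrl}" srclang="${srclang}" label="${label}" default>`;
}

const url =
  "https://transcription-subtitles-files.s3.us-west-1.amazonaws.com/outputs/my-video.vtt";
console.log(subtitleTrackHtml(url));
```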

&lt;ol start="2"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;IAM Role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You’ll also need an IAM role or user with the right permissions to start transcription jobs and access media in S3.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Required Permissions:&lt;/strong&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* `transcribe:StartTranscriptionJob`

* `transcribe:GetTranscriptionJob`

* `s3:GetObject` – to fetch media

* `s3:PutObject` – to store subtitle files


**Minimal IAM Policy Example:**
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::your-subtitle-service-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "transcribe:StartTranscriptionJob",
        "transcribe:GetTranscriptionJob"
      ],
      "Resource": "*"
    }
  ]
}
```
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Attach this to the process that triggers transcription, such as a backend service, Lambda function, or container task.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Region Considerations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Transcribe is not available in all regions. For best results:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* Choose a supported region like `us-east-1`, `us-west-2`, or `eu-west-1`

* Ensure your S3 buckets and transcription jobs are in the **same region** when possible to reduce latency and avoid cross-region errors
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Writing the Transcription Service in Node.js
&lt;/h2&gt;

&lt;p&gt;Let’s now build the core logic of our subtitle service using Node.js. The service will accept a media file (hosted in any S3 bucket or region), transcribe it using Amazon Transcribe, convert the result to &lt;code&gt;.vtt&lt;/code&gt;, and upload it back to your configured bucket.&lt;/p&gt;

&lt;p&gt;We’ll start by looking at the complete code and then explain each part step-by-step.&lt;/p&gt;
&lt;h3&gt;
  
  
  &lt;strong&gt;Full Code:&lt;/strong&gt; &lt;code&gt;transcriptionService.mjs&lt;/code&gt;
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// transcriptionService.mjs&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;TranscribeClient&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;StartTranscriptionJobCommand&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;GetTranscriptionJobCommand&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-sdk/client-transcribe&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;S3Client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;PutObjectCommand&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;CopyObjectCommand&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;DeleteObjectCommand&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@aws-sdk/client-s3&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;axios&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;vttConvert&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws-transcription-to-vtt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;v4&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;uuidv4&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;uuid&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;REGION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;us-west-1&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Change to your AWS Region&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;S3_BUCKET&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;transcription-subtitles-files&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Bucket for storing VTT files and temporary videos&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;S3_BASE_URL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`https://&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;S3_BUCKET&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.s3.&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;REGION&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.amazonaws.com`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;transcribeClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TranscribeClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;REGION&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s3Client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;S3Client&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;REGION&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;transcribeAndGenerateSubtitle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;videoS3Url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;mediaFileUri&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;videoS3Url&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;copiedKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// Check if the video S3 URL matches our region bucket&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;pathname&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;videoS3Url&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bucketRegion&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt; &lt;span class="c1"&gt;// extract region part like "us-east-1"&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bucketRegion&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;REGION&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`[i] Video is from different region (&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;bucketRegion&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;), copying to correct region...`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// Copy the object into our transcription bucket under transcribed-video/&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sourceBucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt; &lt;span class="c1"&gt;// get bucket name&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sourceKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;decodeURIComponent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;slice&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt; &lt;span class="c1"&gt;// remove leading "/"&lt;/span&gt;

    &lt;span class="nx"&gt;copiedKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`transcribed-video/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nf"&gt;uuidv4&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;sourceKey&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;pop&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="c1"&gt;// Create unique path&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;s3Client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CopyObjectCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
      &lt;span class="na"&gt;CopySource&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;sourceBucket&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;sourceKey&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;S3_BUCKET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;copiedKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}));&lt;/span&gt;

    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`[+] Copied video for transcription: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;copiedKey&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;mediaFileUri&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;S3_BASE_URL&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;copiedKey&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jobId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`transcription-job-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nf"&gt;uuidv4&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;startParams&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;TranscriptionJobName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;jobId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;LanguageCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;en-US&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;MediaFormat&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;mp4&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;Media&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;MediaFileUri&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;mediaFileUri&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;OutputBucketName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;S3_BUCKET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="c1"&gt;// Start transcription job&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;transcribeClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StartTranscriptionJobCommand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;startParams&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`[+] Started transcription job: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;jobId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Polling transcription job status&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;completed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;transcriptFileUri&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;completed&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt; &lt;span class="c1"&gt;// wait 5 seconds&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;TranscriptionJob&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;transcribeClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
      &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;GetTranscriptionJobCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;TranscriptionJobName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;jobId&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;TranscriptionJob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TranscriptionJobStatus&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`[i] Job status: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;COMPLETED&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;completed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="nx"&gt;transcriptFileUri&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;TranscriptionJob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Transcript&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TranscriptFileUri&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;FAILED&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Transcription job failed: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;TranscriptionJob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FailureReason&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="c1"&gt;// Download transcription JSON&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;transcriptFileUri&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;transcriptionJson&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;// Convert transcription to VTT&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;vttData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;vttConvert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;transcriptionJson&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Upload VTT file back to S3&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;vttKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`subtitles/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;jobId&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;.vtt`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;s3Client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PutObjectCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="na"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;S3_BUCKET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;vttKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;Body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;vttData&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;ContentType&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;text/vtt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;}));&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;vttUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;S3_BASE_URL&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;/&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;vttKey&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`[+] Subtitle uploaded: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;vttUrl&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// delete the copied video if we created one&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;copiedKey&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;s3Client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;DeleteObjectCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;Bucket&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;S3_BUCKET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;Key&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;copiedKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}));&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`[i] Cleaned up temporary copied video: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;copiedKey&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;cleanupError&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;warn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`[!] Failed to delete temporary video (safe to ignore):`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;cleanupError&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;vttUrl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;transcribeAndGenerateSubtitle&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
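&lt;p&gt;One detail of the full code worth isolating is how the bucket, region, and object key are recovered from the input URL. The sketch below (the helper name &lt;code&gt;parseS3Url&lt;/code&gt; is ours, not part of the service) assumes a virtual-hosted-style URL of the form &lt;code&gt;https://bucket.s3.region.amazonaws.com/key&lt;/code&gt;:&lt;/p&gt;

```javascript
// parseS3Url is a hypothetical helper (not part of transcriptionService.mjs)
// showing how the service splits a virtual-hosted-style S3 URL into parts.
function parseS3Url(s3Url) {
  const { hostname, pathname } = new URL(s3Url);
  const parts = hostname.split("."); // ["bucket", "s3", "region", "amazonaws", "com"]
  return {
    bucket: parts[0],
    region: parts[2],
    key: decodeURIComponent(pathname.slice(1)), // drop the leading "/"
  };
}

const parsed = parseS3Url(
  "https://my-videos.s3.us-east-1.amazonaws.com/raw/My%20Clip.mp4"
);
console.log(parsed); // { bucket: 'my-videos', region: 'us-east-1', key: 'raw/My Clip.mp4' }
```

&lt;p&gt;Because the hostname split is positional, a bucket name containing dots or a path-style URL would shift the region index; validating the hostname suffix first would make this safer.&lt;/p&gt;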



&lt;h3&gt;
  
  
  &lt;strong&gt;Code Walkthrough&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Set Up AWS Clients&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;transcribeClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TranscribeClient&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;REGION&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;s3Client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;S3Client&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;REGION&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Handle Cross-Region Video Files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the input video is in a different region, we copy it into our main S3 bucket to avoid region mismatch errors.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;pathname&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;URL&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;videoS3Url&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;bucketRegion&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;hostname&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt; &lt;span class="c1"&gt;// extract region part like "us-east-1"&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;bucketRegion&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;REGION&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Copy logic here...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start Transcription Job&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We generate a unique job ID, configure the job with language and format, and send the request.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;jobId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`transcription-job-&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nf"&gt;uuidv4&lt;/span&gt;&lt;span class="p"&gt;()}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;startParams&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;transcribeClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;StartTranscriptionJobCommand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;startParams&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Poll Until Completion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We check the job status every 5 seconds until it completes or fails.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;completed&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;r&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;5000&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt; &lt;span class="c1"&gt;// wait 5 seconds&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;TranscriptionJob&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;transcribeClient&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(...);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;TranscriptionJob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TranscriptionJobStatus&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;COMPLETED&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;completed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="nx"&gt;transcriptFileUri&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;TranscriptionJob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Transcript&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TranscriptFileUri&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;status&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;FAILED&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;`Transcription job failed: &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;TranscriptionJob&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FailureReason&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Convert to&lt;/strong&gt; &lt;code&gt;.vtt&lt;/code&gt; &lt;strong&gt;Format&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the transcription is ready, we fetch the JSON, convert it to VTT using the &lt;code&gt;aws-transcription-to-vtt&lt;/code&gt; package, and upload it to S3.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;vttData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;vttConvert&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;transcriptionJson&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;s3Client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;PutObjectCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Clean Up Temporary Files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If we copied the video earlier, we clean it up at the end.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;copiedKey&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;s3Client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;DeleteObjectCommand&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Design Choices That Make This Reusable&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Accepts any media URI&lt;/strong&gt;: Works across buckets and regions by copying to a known location, enabling use across teams or environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Job name namespacing&lt;/strong&gt;: Each job uses a unique identifier (&lt;code&gt;job-userId-timestamp&lt;/code&gt;) to prevent collisions, especially in high-concurrency environments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Formats and language are parameterized&lt;/strong&gt;: The design can easily support multilingual subtitles or alternate media types by tweaking just a few parameters.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Scoped resource access&lt;/strong&gt;: All outputs and temporary assets are stored under predictable &lt;code&gt;subtitles/&lt;/code&gt; and &lt;code&gt;transcribed-video/&lt;/code&gt; S3 prefixes, making organization and lifecycle management easier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Idempotent and modular logic&lt;/strong&gt;: Small, composable functions enable wrapping into different execution models like Lambda functions, HTTP endpoints, or background queues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic cleanup of temporary files&lt;/strong&gt;: Temporary copies are deleted after use to minimize cost and clutter.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
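
&lt;p&gt;The job-name scheme above can be sketched in a few lines of shell. This is a minimal illustration rather than the service's actual code; the original design uses &lt;code&gt;job-userId-timestamp&lt;/code&gt;, and the UUID/epoch fallback here is an assumption:&lt;/p&gt;

```shell
# Build a collision-resistant Transcribe job name (the prefix is illustrative).
# uuidgen is used when available; otherwise fall back to epoch seconds plus RANDOM.
UUID=$(uuidgen 2>/dev/null || printf '%s-%s' "$(date +%s)" "$RANDOM")
JOB_NAME="transcription-job-$UUID"
echo "$JOB_NAME" > job-name.txt
cat job-name.txt
```

&lt;p&gt;Because the job name doubles as the output file name, uniqueness here is what keeps concurrent transcriptions from overwriting each other's subtitles.&lt;/p&gt;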

&lt;h2&gt;
  
  
  Usage and Integration in Applications
&lt;/h2&gt;

&lt;p&gt;Now that we have a reusable transcription service, let’s look at how to integrate it into real-world applications. This service is flexible enough to be used inside:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A backend API route (e.g., Express or Fastify)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A serverless Lambda function&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A CLI tool or job processor&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Sample Usage: Calling the Service&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;transcribeAndGenerateSubtitle&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./transcriptionService.mjs&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;videoUrl&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://some-bucket.s3.us-east-1.amazonaws.com/uploads/my-video.mp4&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;transcribeAndGenerateSubtitle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;videoUrl&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;subtitleUrl&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Subtitle available at:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;subtitleUrl&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
  &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;catch&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Failed to generate subtitles:&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;err&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the function completes, it returns a URL pointing to the &lt;code&gt;.vtt&lt;/code&gt; subtitle file stored in your configured S3 bucket. For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;https://transcription-subtitles-files.s3.us-west-1.amazonaws.com/subtitles/transcription-job-2cdbfd78-a3cc-47f3-a414-dcb27ac163c9.vtt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Using the Subtitle in HTML Video&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can plug the subtitle URL directly into an HTML &lt;code&gt;&amp;lt;video&amp;gt;&lt;/code&gt; tag using the &lt;code&gt;&amp;lt;track&amp;gt;&lt;/code&gt; element:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;video&lt;/span&gt; &lt;span class="err"&gt;controls&lt;/span&gt; &lt;span class="na"&gt;crossorigin=&lt;/span&gt;&lt;span class="s"&gt;"anonymous"&lt;/span&gt; &lt;span class="na"&gt;width=&lt;/span&gt;&lt;span class="s"&gt;"640"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;source&lt;/span&gt; &lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"https://some-bucket.s3.amazonaws.com/uploads/my-video.mp4"&lt;/span&gt; &lt;span class="na"&gt;type=&lt;/span&gt;&lt;span class="s"&gt;"video/mp4"&lt;/span&gt; &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
  &lt;span class="nt"&gt;&amp;lt;track&lt;/span&gt;
    &lt;span class="na"&gt;src=&lt;/span&gt;&lt;span class="s"&gt;"https://transcription-subtitles-files.s3.us-west-1.amazonaws.com/subtitles/transcription-job-2cdbfd78-a3cc-47f3-a414-dcb27ac163c9.vtt"&lt;/span&gt;
    &lt;span class="na"&gt;kind=&lt;/span&gt;&lt;span class="s"&gt;"subtitles"&lt;/span&gt;
    &lt;span class="na"&gt;srclang=&lt;/span&gt;&lt;span class="s"&gt;"en"&lt;/span&gt;
    &lt;span class="na"&gt;label=&lt;/span&gt;&lt;span class="s"&gt;"English"&lt;/span&gt;
    &lt;span class="err"&gt;default&lt;/span&gt;
  &lt;span class="nt"&gt;/&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/video&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will show a “&lt;strong&gt;CC&lt;/strong&gt;” button that lets users toggle subtitles on and off in supported browsers; no extra libraries are required.&lt;/p&gt;
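
&lt;p&gt;One practical note on &lt;code&gt;crossorigin="anonymous"&lt;/code&gt;: it makes the browser fetch the &lt;code&gt;.vtt&lt;/code&gt; file as a CORS request, so the subtitle bucket must allow cross-origin &lt;code&gt;GET&lt;/code&gt;s. Below is a minimal sketch of such a policy; the bucket name is a placeholder, the wildcard origin should be tightened in production, and the &lt;code&gt;aws s3api&lt;/code&gt; call is left commented so the snippet has no side effects:&lt;/p&gt;

```shell
# Write a minimal S3 CORS policy that allows cross-origin GETs for subtitle files.
printf '%s\n' \
  '{' \
  '  "CORSRules": [' \
  '    { "AllowedOrigins": ["*"], "AllowedMethods": ["GET"], "AllowedHeaders": ["*"] }' \
  '  ]' \
  '}' > cors.json

# Apply it to the subtitle bucket (placeholder name; requires AWS credentials):
# aws s3api put-bucket-cors --bucket transcription-subtitles-files --cors-configuration file://cors.json
cat cors.json
```

&lt;p&gt;Without this, the &lt;code&gt;&amp;lt;track&amp;gt;&lt;/code&gt; element will silently fail to load subtitles served from a different origin than the page.&lt;/p&gt;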

&lt;h2&gt;
  
  
  Gotchas &amp;amp; Best Practices
&lt;/h2&gt;

&lt;p&gt;Before you ship this transcription service in production, here are a few key considerations and pitfalls to be aware of:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Region Mismatches&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Transcribe can only access videos in S3 buckets in the same region as the transcription job. As implemented in our code, if the input video is hosted in a different region, we automatically copy it to the bucket in the target region before starting the transcription job. This ensures compatibility without requiring upstream changes to the source video hosting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transcribe Limitations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Transcribe has a few important limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Maximum audio/video length: &lt;strong&gt;4 hours&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Maximum file size: &lt;strong&gt;2 GB&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Files exceeding these limits will cause the job to fail.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tip: Validate media files ahead of time and reject or trim them before attempting transcription.&lt;/p&gt;
&lt;/blockquote&gt;
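
&lt;p&gt;The tip above is easy to automate. The sketch below checks the 2 GB size cap in portable shell; it creates a tiny sample file so it is self-contained, and the 4-hour duration check is noted as a comment because it relies on &lt;code&gt;ffprobe&lt;/code&gt; being installed:&lt;/p&gt;

```shell
# Validate a media file against Transcribe's 2 GB cap before starting a job.
# A tiny sample file is created here so the sketch runs anywhere.
dd if=/dev/zero of=sample.mp4 bs=1024 count=4 2>/dev/null

MAX_BYTES=2147483648   # 2 GB
SIZE=$(wc -c sample.mp4 | awk '{ print $1 }')

if [ "$SIZE" -gt "$MAX_BYTES" ]; then
  echo "reject: file exceeds the 2 GB limit"
else
  echo "ok: $SIZE bytes is within limits"
fi

# For the 4-hour duration cap, ffprobe (if installed) reports the duration in seconds:
# ffprobe -v error -show_entries format=duration -of csv=p=0 sample.mp4
```

&lt;p&gt;Rejecting oversized files before calling &lt;code&gt;StartTranscriptionJob&lt;/code&gt; saves both the round-trip and the cost of a job that is guaranteed to fail.&lt;/p&gt;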

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this guide, we built a complete transcription service that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Accepts any S3-hosted video file, even across regions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automatically transcribes it using Amazon Transcribe.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Converts the result into &lt;code&gt;.vtt&lt;/code&gt; subtitle format.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uploads the subtitle to a central S3 bucket for easy access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Cleans up temporary files to keep things tidy.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The core function &lt;code&gt;transcribeAndGenerateSubtitle(videoS3Url)&lt;/code&gt; handles the full lifecycle. It’s modular, serverless-friendly, and designed to plug into a web app, job queue, or CLI.&lt;/p&gt;

&lt;p&gt;You can also extend the solution in the following directions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-language support&lt;/strong&gt;: Pass &lt;code&gt;LanguageCode&lt;/code&gt; dynamically to support users in multiple locales.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Real-time transcription&lt;/strong&gt;: Consider integrating Amazon Transcribe Streaming for live captioning in calls or webinars.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find the complete code on GitHub: &lt;a href="https://github.com/poly4concept/transcription-service" rel="noopener noreferrer"&gt;https://github.com/poly4concept/transcription-service&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Thank you for reading&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;You can follow me on &lt;a href="https://www.linkedin.com/in/alhazan-mubarak/" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt; and subscribe to my &lt;a href="https://www.youtube.com/@poly4" rel="noopener noreferrer"&gt;&lt;strong&gt;YouTube Channel&lt;/strong&gt;&lt;/a&gt;, where I share more valuable content. Also, let me know your thoughts in the comments section.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Securing Your Nginx Container with Let's Encrypt SSL Certificates</title>
      <dc:creator>Mubarak Alhazan</dc:creator>
      <pubDate>Mon, 13 Jan 2025 06:00:17 +0000</pubDate>
      <link>https://dev.to/poly4/securing-your-nginx-container-with-lets-encrypt-ssl-certificates-4h17</link>
      <guid>https://dev.to/poly4/securing-your-nginx-container-with-lets-encrypt-ssl-certificates-4h17</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the modern web landscape, securing web applications with SSL has become a non-negotiable best practice. Secure Sockets Layer (SSL) encryption protects sensitive user data from interception, enhances user trust, and improves search engine rankings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nginx&lt;/strong&gt; is a popular choice for serving web traffic, especially in containerized environments. Running Nginx in a container allows for lightweight, scalable, and portable deployments, making it a go-to solution for many DevOps engineers and system administrators.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's Encrypt&lt;/strong&gt; is a free, automated SSL certificate authority. With &lt;strong&gt;Let's Encrypt&lt;/strong&gt;, obtaining and renewing SSL certificates is streamlined, removing the financial and operational barriers that once deterred many from implementing robust encryption.&lt;/p&gt;

&lt;p&gt;This article will guide you through securing an Nginx server running inside a Docker container using &lt;strong&gt;Let's Encrypt&lt;/strong&gt; SSL certificates. By the end, you will have a solid understanding of how to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Configure your Nginx container for SSL.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Obtain and install &lt;strong&gt;Let's Encrypt&lt;/strong&gt; certificates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Automate certificate renewal for uninterrupted security.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before diving into securing your Nginx container with Let's Encrypt SSL certificates, ensure you meet the following prerequisites:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Familiarity with Docker and Nginx&lt;/strong&gt;: You should have a basic understanding of Docker's containerization concepts and be comfortable working with Nginx as a web server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A Server with a Public Domain&lt;/strong&gt;: You'll need access to a server with a public IP address and a registered domain name. Additionally, you must have permission to modify your domain's DNS records to point it to your server.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Setting Up The Environment
&lt;/h2&gt;

&lt;p&gt;The first step in securing your Nginx container with Let's Encrypt SSL certificates is to prepare your environment. This involves installing Docker, setting up Nginx in a container, and configuring a directory structure for &lt;strong&gt;Let's Encrypt&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Install Docker and Set Up Nginx in a Container
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Install Docker:&lt;/strong&gt; Ensure Docker is installed on your server. If not, you can install it by following this official &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;guide&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Pull the Nginx Docker Image&lt;/strong&gt;: Pull the latest Nginx image from Docker Hub:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull nginx:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Run the Nginx Container&lt;/strong&gt;: Launch an Nginx container, exposing port 80 for HTTP traffic and 443 for HTTPS traffic:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--name&lt;/span&gt; nginx-container &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="nt"&gt;-p&lt;/span&gt; 443:443 nginx
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Confirm the container is running:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker ps
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Step 2: Create a Directory Structure for the Let's Encrypt Webroot
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Let's Encrypt&lt;/strong&gt; uses a webroot for the HTTP-01 challenge to verify domain ownership. Create a directory to act as the webroot:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; yourdirectory/certbot/webroot/.well-known/acme-challenge
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This directory will hold the temporary files &lt;strong&gt;Let's Encrypt&lt;/strong&gt; uses during the certificate issuance process.&lt;/p&gt;

&lt;p&gt;Also, ensure that necessary permissions are granted to access the directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo chown&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;whoami&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;:&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;whoami&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; yourdirectory/certbot/webroot  
&lt;span class="nb"&gt;sudo chmod&lt;/span&gt; &lt;span class="nt"&gt;-R&lt;/span&gt; 755 yourdirectory/certbot/webroot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 3: Mount the Webroot to the Nginx Container
&lt;/h3&gt;

&lt;p&gt;To allow &lt;strong&gt;Let's Encrypt&lt;/strong&gt; to place challenge files in your webroot, mount the directory into your Nginx container. Stop the running container, then rerun it with the volume mounted:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Stop and remove the container:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stop nginx-container
docker &lt;span class="nb"&gt;rm &lt;/span&gt;nginx-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Run the container with the webroot mounted:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--name&lt;/span&gt; nginx-container &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;-v&lt;/span&gt; yourdirectory/certbot/webroot:/var/www/certbot &lt;span class="se"&gt;\&lt;/span&gt;
nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Test the setup by creating a test file in the webroot:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Let's Encrypt Webroot Test"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; yourdirectory/certbot/webroot/test.html
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Access the file in your browser:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;http://&amp;lt;your-domain&amp;gt;/test.html
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Once you see the test file content in your browser, your environment is correctly set up for &lt;strong&gt;Let's Encrypt&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Configuring Nginx for SSL
&lt;/h2&gt;

&lt;p&gt;Securing the Nginx server for Let's Encrypt SSL involves configuring a dedicated location block to handle ACME HTTP-01 challenges and ensuring the container reflects these changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Add a Dedicated Location Block for ACME Challenges
&lt;/h3&gt;

&lt;p&gt;Let's Encrypt uses the ACME HTTP-01 challenge to validate your domain ownership. For this to work, you'll need to configure a specific location block in your Nginx configuration to serve the challenges from the webroot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt; &lt;span class="k"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/.well-known/acme-challenge/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
     &lt;span class="kn"&gt;root&lt;/span&gt; &lt;span class="n"&gt;/var/www/certbot&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
 &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 2: Update the Nginx Configuration
&lt;/h3&gt;

&lt;p&gt;To keep the configuration easy to maintain, create an &lt;code&gt;nginx.conf&lt;/code&gt; file in your local directory. The updated configuration file should include the location block defined above.&lt;/p&gt;

&lt;p&gt;Here’s an example of how your &lt;code&gt;nginx.conf&lt;/code&gt; file might look:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="k"&gt;events&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="k"&gt;http&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;# Include mime types&lt;/span&gt;
    &lt;span class="kn"&gt;include&lt;/span&gt;       &lt;span class="s"&gt;mime.types&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;default_type&lt;/span&gt;  &lt;span class="nc"&gt;application/octet-stream&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="c1"&gt;# Proxy settings&lt;/span&gt;
    &lt;span class="kn"&gt;sendfile&lt;/span&gt;        &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kn"&gt;keepalive_timeout&lt;/span&gt; &lt;span class="mi"&gt;65&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;#... other ngnix configuration&lt;/span&gt;

    &lt;span class="c1"&gt;# Server block for HTTPS&lt;/span&gt;
    &lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
        &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt; &lt;span class="s"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
        &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;yourdomain.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with your domain&lt;/span&gt;

        &lt;span class="c1"&gt;#... other server configuration&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;  

    &lt;span class="c1"&gt;# Server block for HTTP&lt;/span&gt;
    &lt;span class="kn"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;yourdomain.com&lt;/span&gt; &lt;span class="s"&gt;www.yourdomain.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/.well-known/acme-challenge/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;root&lt;/span&gt; &lt;span class="n"&gt;/var/www/certbot&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the file is ready, stop and remove the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stop nginx-container
docker &lt;span class="nb"&gt;rm &lt;/span&gt;nginx-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then rerun the container with the &lt;code&gt;nginx.conf&lt;/code&gt; file mounted as a volume:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--name&lt;/span&gt; nginx-container &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 443:443 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; yourdirectory/nginx.conf:/etc/nginx/nginx.conf &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; yourdirectory/certbot/webroot:/var/www/certbot &lt;span class="se"&gt;\&lt;/span&gt;
  nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Installing and Using Certbot
&lt;/h2&gt;

&lt;p&gt;Certbot is a widely used client for obtaining SSL certificates from &lt;strong&gt;Let's Encrypt&lt;/strong&gt;. In this section, we’ll install Certbot, generate an initial certificate, and configure Nginx to use it for secure communication.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Install Certbot on the Host Machine
&lt;/h3&gt;

&lt;p&gt;Install Certbot using your Linux distribution’s package manager.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;apt update
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;certbot
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also check the Certbot official &lt;a href="https://certbot.eff.org/instructions" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; and select your operating system for tailored guidance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 2: Obtain the Initial SSL Certificate
&lt;/h3&gt;

&lt;p&gt;Run Certbot with the &lt;code&gt;webroot&lt;/code&gt; plugin, pointing to the directory mounted as the webroot in your Nginx container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;certbot certonly &lt;span class="nt"&gt;--webroot&lt;/span&gt; &lt;span class="nt"&gt;-w&lt;/span&gt; yourdirectory/certbot/webroot &lt;span class="nt"&gt;-d&lt;/span&gt; yourdomain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once successful, Certbot will generate the certificate files, typically found in &lt;code&gt;/etc/letsencrypt/live/yourdomain.com/&lt;/code&gt;.&lt;/p&gt;
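
&lt;p&gt;You can confirm what was issued with &lt;code&gt;openssl x509&lt;/code&gt;. To keep the sketch runnable anywhere, it inspects a throwaway self-signed certificate; on your server, point &lt;code&gt;-in&lt;/code&gt; at &lt;code&gt;fullchain.pem&lt;/code&gt; instead:&lt;/p&gt;

```shell
# Generate a throwaway 90-day self-signed certificate so this runs anywhere;
# on a real server, inspect /etc/letsencrypt/live/yourdomain.com/fullchain.pem instead.
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=yourdomain.com" \
  -keyout privkey.pem -out cert.pem 2>/dev/null

# Print the subject and validity window (notBefore / notAfter):
openssl x509 -in cert.pem -noout -subject -dates
```

&lt;p&gt;Since Let's Encrypt certificates also live for 90 days, the &lt;code&gt;notAfter&lt;/code&gt; date is a quick sanity check after each issuance or renewal.&lt;/p&gt;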

&lt;h3&gt;
  
  
  Step 3: Update the Nginx Configuration
&lt;/h3&gt;

&lt;p&gt;Once the certificate has been issued, update the &lt;code&gt;nginx.conf&lt;/code&gt; file to use SSL. Open the &lt;code&gt;nginx.conf&lt;/code&gt; file in your local directory and update the HTTPS server block with the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Server block for HTTPS&lt;/span&gt;
&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;443&lt;/span&gt; &lt;span class="s"&gt;ssl&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;yourdomain.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with your domain  &lt;/span&gt;

    &lt;span class="c1"&gt;# SSL Certificates  &lt;/span&gt;
    &lt;span class="kn"&gt;ssl_certificate&lt;/span&gt; &lt;span class="n"&gt;/etc/letsencrypt/live/yourdomain.com/fullchain.pem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
    &lt;span class="kn"&gt;ssl_certificate_key&lt;/span&gt; &lt;span class="n"&gt;/etc/letsencrypt/live/yourdomain.com/privkey.pem&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  

    &lt;span class="c1"&gt;# SSL Settings  &lt;/span&gt;
    &lt;span class="kn"&gt;ssl_protocols&lt;/span&gt; &lt;span class="s"&gt;TLSv1.2&lt;/span&gt; &lt;span class="s"&gt;TLSv1.3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
    &lt;span class="kn"&gt;ssl_prefer_server_ciphers&lt;/span&gt; &lt;span class="no"&gt;on&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
    &lt;span class="kn"&gt;ssl_ciphers&lt;/span&gt; &lt;span class="s"&gt;HIGH:!aNULL:!MD5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  

    &lt;span class="c1"&gt;# Other https server configuration...  &lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To ensure all HTTP traffic is redirected to HTTPS, update the HTTP server block in the same &lt;code&gt;nginx.conf&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight nginx"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Server block for HTTP&lt;/span&gt;
&lt;span class="k"&gt;server&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
    &lt;span class="kn"&gt;listen&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
    &lt;span class="kn"&gt;server_name&lt;/span&gt; &lt;span class="s"&gt;yourdomain.com&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  

    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/.well-known/acme-challenge/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="kn"&gt;root&lt;/span&gt; &lt;span class="n"&gt;/var/www/certbot&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;# Redirect all HTTP traffic to HTTPS  &lt;/span&gt;
    &lt;span class="kn"&gt;location&lt;/span&gt; &lt;span class="n"&gt;/&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;  
        &lt;span class="kn"&gt;return&lt;/span&gt; &lt;span class="mi"&gt;301&lt;/span&gt; &lt;span class="s"&gt;https://&lt;/span&gt;&lt;span class="nv"&gt;$host$request_uri&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  
    &lt;span class="p"&gt;}&lt;/span&gt;  
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After updating the configuration file, stop and remove the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker stop nginx-container
docker &lt;span class="nb"&gt;rm &lt;/span&gt;nginx-container
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then rerun the container with an additional volume for the SSL certificates:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;--name&lt;/span&gt; nginx-container &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 80:80 &lt;span class="nt"&gt;-p&lt;/span&gt; 443:443 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; yourdirectory/nginx.conf:/etc/nginx/nginx.conf &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; /etc/letsencrypt:/etc/letsencrypt:ro &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; yourdirectory/certbot/webroot:/var/www/certbot &lt;span class="se"&gt;\&lt;/span&gt;
  nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Step 4: Verify SSL is Working
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open your domain in a browser and ensure it loads over HTTPS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Alternatively, use &lt;code&gt;curl&lt;/code&gt; to test the connection:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-I&lt;/span&gt; https://yourdomain.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If everything is set up correctly, you should see a &lt;code&gt;200&lt;/code&gt; response served over HTTPS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating Certificate Renewal
&lt;/h2&gt;

&lt;p&gt;SSL certificates issued by Let's Encrypt are valid for 90 days, so automating the renewal process ensures your web services remain secure without manual intervention. In this section, you’ll configure a cron job for certificate renewal, test it, and automate the Nginx container reload after a successful renewal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Set Up a Renewal Cron Job for Certbot
&lt;/h3&gt;

&lt;p&gt;To ensure certificates are renewed before expiration, you can configure a cron job. Open the cron configuration for editing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;crontab &lt;span class="nt"&gt;-e&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the following line to schedule a daily check for renewal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;0 0 &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="k"&gt;*&lt;/span&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;certbot renew &lt;span class="nt"&gt;--webroot&lt;/span&gt; &lt;span class="nt"&gt;-w&lt;/span&gt; yourdirectory/certbot/webroot &lt;span class="nt"&gt;--quiet&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; docker &lt;span class="nb"&gt;exec &lt;/span&gt;nginx-container nginx &lt;span class="nt"&gt;-s&lt;/span&gt; reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here’s a breakdown of the command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;sudo certbot renew&lt;/code&gt;: Automatically renews certificates close to expiration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--webroot -w yourdirectory/certbot/webroot&lt;/code&gt;: Specifies the webroot directory for the renewal challenge.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;--quiet&lt;/code&gt;: Suppresses non-critical output for cleaner logs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;docker exec nginx-container nginx -s reload&lt;/code&gt;: Reloads the Nginx container to apply the renewed certificate.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
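&lt;p&gt;As an alternative to chaining the reload with &lt;code&gt;&amp;amp;&amp;amp;&lt;/code&gt;, Certbot's &lt;code&gt;--deploy-hook&lt;/code&gt; flag runs a command only when a certificate is actually renewed, so Nginx is not reloaded on days when nothing changes. A sketch of the equivalent crontab entry, assuming the same container name:&lt;/p&gt;

```shell
# Reload Nginx only after a successful renewal
0 0 * * * sudo certbot renew --webroot -w yourdirectory/certbot/webroot --quiet --deploy-hook "docker exec nginx-container nginx -s reload"
```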

&lt;h3&gt;
  
  
  Step 2: Test the Cron Job Manually
&lt;/h3&gt;

&lt;p&gt;Before relying on automation, test the command manually to ensure it works as expected:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;certbot renew &lt;span class="nt"&gt;--webroot&lt;/span&gt; &lt;span class="nt"&gt;-w&lt;/span&gt; yourdirectory/certbot/webroot &lt;span class="nt"&gt;--dry-run&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;--dry-run&lt;/code&gt; flag simulates the renewal process without actually renewing the certificates. Check for a success message.&lt;/p&gt;

&lt;p&gt;If successful, run the reload command manually to confirm Nginx applies the changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker &lt;span class="nb"&gt;exec &lt;/span&gt;nginx-container nginx &lt;span class="nt"&gt;-s&lt;/span&gt; reload
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If both commands succeed, you can be confident the cron job will run correctly on schedule.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Securing your Nginx server with Let's Encrypt SSL certificates enhances your web services’ security and ensures compliance with modern web standards. By leveraging Docker for containerized deployment and Certbot for automated SSL certificate management, you can achieve a streamlined, scalable, and secure web environment.&lt;/p&gt;

&lt;p&gt;The steps outlined in this guide—from setting up the environment to automating certificate renewals—are designed to simplify the process, even for those new to containerized web hosting. Implementing SSL through Let's Encrypt eliminates traditional barriers such as cost and complexity, making it accessible to developers and organizations of all sizes.&lt;/p&gt;

&lt;p&gt;With your Nginx container now secured, you can focus on other aspects of application development and deployment, confident that your web traffic is encrypted and protected.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thank You For Reading
&lt;/h2&gt;

&lt;p&gt;You can follow me on &lt;a href="https://www.linkedin.com/in/alhazan-mubarak/" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt; and subscribe to my &lt;a href="https://www.youtube.com/@poly4" rel="noopener noreferrer"&gt;YouTube Channel&lt;/a&gt;, where I share more valuable content. Also, let me know your thoughts in the comment section.&lt;/p&gt;

&lt;p&gt;Happy Deploying 🚀&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automate Docker Deployments to Your Server Using GitHub Actions and Amazon ECR</title>
      <dc:creator>Mubarak Alhazan</dc:creator>
      <pubDate>Sun, 20 Oct 2024 18:22:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/automate-docker-deployments-to-your-server-using-github-actions-and-amazon-ecr-332d</link>
      <guid>https://dev.to/aws-builders/automate-docker-deployments-to-your-server-using-github-actions-and-amazon-ecr-332d</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wxjak51ayizoruawx0a.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9wxjak51ayizoruawx0a.gif" alt="Setup Architecture" width="1152" height="648"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In a recent project, I had to deploy a backend application on a server hosted on a local cloud platform. Cost-saving considerations drove the choice to use a local cloud, but it came with its limitations—particularly the absence of a built-in CI/CD service. I needed to implement a custom deployment process, and GitHub Actions emerged as the perfect solution to automate the workflow.&lt;/p&gt;

&lt;p&gt;I created a deployment strategy that leverages GitHub Actions to build the application's Docker image, push it to Amazon ECR (Elastic Container Registry), and then pull the image onto the server for execution. This approach offers several advantages. By handling the image build externally on ECR, I could reduce the resource load on the server, preventing excessive disk space and memory consumption. Additionally, using ECR allowed us to manage our images more effectively, making it easier to implement rollback policies in case a deployment failed.&lt;/p&gt;

&lt;p&gt;In this article, I'll walk you through creating a GitHub Action workflow to automate Docker deployments. I'll cover the steps to build an application image, push it to ECR, and deploy it to a server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;To follow along with this guide, you'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A GitHub repository where the deployment workflow will be set up, with access to add secrets for sensitive information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An AWS account and an IAM user with permission to interact with ECR.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A server where the application will be deployed, with SSH access configured.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A basic understanding of Docker and GitHub Actions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Amazon ECR Setup
&lt;/h2&gt;

&lt;p&gt;We'll use Amazon Elastic Container Registry (ECR) to store and manage Docker images for our deployment. Follow these steps to set up the ECR repository and configure access.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create a Private Image Repository&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Navigate to the Amazon ECR console. Create a new private image repository to store your Docker images. Keep the default settings, with image tags set to mutable; this lets you push a new image under an existing tag (such as &lt;code&gt;latest&lt;/code&gt;).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Set Up a Lifecycle Policy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Over time, the ECR repository may accumulate a large number of images, which can take up storage space. To manage this, you can set up a lifecycle policy to automatically delete older or unneeded images. For example, I configured a rule to delete images older than 30 days.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5b157t5q0oxkirru6sf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw5b157t5q0oxkirru6sf.png" alt="ECR Lifecycle Policy" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure an IAM Role for GitHub OIDC Provider&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of storing long-term AWS credentials in GitHub Secrets, use GitHub’s OpenID Connect (OIDC) provider to grant access to AWS resources. Set up an IAM role that allows GitHub Actions to authenticate and perform ECR operations. You can find detailed instructions on setting up an OIDC role in this &lt;a href="https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services" rel="noopener noreferrer"&gt;article&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create an IAM User for AWS CLI Access on the Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In addition to the OIDC role, you need to create an IAM user to set up the AWS CLI on the server. This user should have the least privilege necessary, with permissions restricted to the specific ECR repository being used. Here’s an example of IAM policy to grant access only to the relevant repository:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"ecr:GetDownloadUrlForLayer"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"ecr:BatchGetImage"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"ecr:BatchCheckLayerAvailability"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"ecr:GetAuthorizationToken"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:ecr:&amp;lt;region&amp;gt;:&amp;lt;account-id&amp;gt;:repository/&amp;lt;repository-name&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This policy allows the user to pull images only from the specified ECR repository. Note that &lt;code&gt;ecr:GetAuthorizationToken&lt;/code&gt; does not support resource-level permissions, so it must be granted on all resources (&lt;code&gt;"Resource": "*"&lt;/code&gt;) in a separate statement.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Server Configuration
&lt;/h2&gt;

&lt;p&gt;For this deployment, I set up a server on a local cloud platform, but these instructions will work for any server configuration. If you are using AWS, you can follow this &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html" rel="noopener noreferrer"&gt;guide&lt;/a&gt; to create an EC2 instance.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Install Docker Engine&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The server needs Docker installed to run the application as a container. Containerization ensures that the application runs consistently across environments. To install Docker Engine, follow this &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer"&gt;tutorial&lt;/a&gt;, which covers the installation process for various operating systems.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create a Non-Root User for SSH Access&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To securely manage server access from GitHub Actions, create a dedicated user account rather than using the root user. This setup limits access and follows best practices for securing server resources. If you’re logged in as the root user, you can create a new user by running the following command (assuming a Linux server):&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;useradd &lt;span class="nt"&gt;-m&lt;/span&gt; deploy-user
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;You can find more information about creating user accounts in this &lt;a href="https://linuxize.com/post/how-to-create-users-in-linux-using-the-useradd-command/" rel="noopener noreferrer"&gt;article&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;After creating the user, switch to the new account:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;su - deploy-user
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Add the User to the Docker Group&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To allow the &lt;code&gt;deploy-user&lt;/code&gt; to run Docker commands without needing &lt;code&gt;sudo&lt;/code&gt;, add the user to the Docker group:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; docker deploy-user
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This configuration lets the GitHub Actions pipeline execute Docker commands seamlessly. You can learn more about that &lt;a href="https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Set Up SSH Key Pair for GitHub Actions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The GitHub Actions workflow needs an SSH private key to connect to the server. You can reuse the key pair created with the server, or generate a new one specifically for this pipeline. This &lt;a href="https://www.ssh.com/academy/ssh/keygen" rel="noopener noreferrer"&gt;article&lt;/a&gt; details how to create an SSH key pair.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure AWS CLI on the Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To pull images from Amazon ECR directly from the server, you need to set up the AWS CLI using the IAM user created in the ECR setup. If you haven’t already, install the AWS CLI on the server. You can follow this &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;guide&lt;/a&gt; for installation instructions based on your operating system.&lt;/p&gt;

&lt;p&gt;Run the following command to configure the AWS CLI:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;You'll be prompted to enter the IAM user's &lt;code&gt;AWS Access Key ID&lt;/code&gt;, &lt;code&gt;AWS Secret Access Key&lt;/code&gt;, &lt;code&gt;Default region name&lt;/code&gt;, and &lt;code&gt;Default output format&lt;/code&gt;. The region should match the AWS region where your ECR repository is located.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
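&lt;p&gt;Before moving on, it is worth confirming the CLI is picking up the credentials you just entered. &lt;code&gt;aws sts get-caller-identity&lt;/code&gt; requires no extra permissions, so it works even for this tightly scoped user:&lt;/p&gt;

```shell
# Should print the account ID and the ARN of the IAM user configured above
aws sts get-caller-identity
```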

&lt;h2&gt;
  
  
  GitHub Action Workflow
&lt;/h2&gt;

&lt;p&gt;The next step is setting up a GitHub Action workflow that automates the deployment process. The workflow will check out the code, build a Docker image, push it to the Amazon ECR repository, and then pull and run the image on the server. Below is the script and an explanation of each step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;GitHub Action Script&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's what the GitHub Action script looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to staging&lt;/span&gt;
&lt;span class="na"&gt;on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;push&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;branches&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;staging&lt;/span&gt;

&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;AWS_REGION&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-region&lt;/span&gt;           
  &lt;span class="na"&gt;ECR_REGISTRY_URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;registry-url&lt;/span&gt;

&lt;span class="na"&gt;jobs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;deploy-api&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy Backend API&lt;/span&gt;
    &lt;span class="na"&gt;runs-on&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ubuntu-latest&lt;/span&gt;

    &lt;span class="na"&gt;permissions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;id-token&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;write&lt;/span&gt;
      &lt;span class="na"&gt;contents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;read&lt;/span&gt;

    &lt;span class="na"&gt;steps&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;

    &lt;span class="c1"&gt;# Step 1: Checkout the code&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Checkout the code&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

    &lt;span class="c1"&gt;# Step 2: Configure AWS credentials&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Configure AWS credentials&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/configure-aws-credentials@v4&lt;/span&gt; 
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;role-to-assume&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::123456789012:role/my-github-actions-role&lt;/span&gt;
        &lt;span class="na"&gt;aws-region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ env.AWS_REGION }}&lt;/span&gt;

    &lt;span class="c1"&gt;# Step 3: Login to Amazon ECR&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Login to Amazon ECR&lt;/span&gt;
      &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;login-ecr&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;aws-actions/amazon-ecr-login@v2&lt;/span&gt;  

    &lt;span class="c1"&gt;# Step 4: Build, Tag, and Push Docker Image to ECR&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Build, tag, and push Docker image to ECR&lt;/span&gt;
      &lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
        &lt;span class="s"&gt;TAG=$(git rev-parse --short ${{ github.sha }})&lt;/span&gt;
        &lt;span class="s"&gt;docker build -t $ECR_REGISTRY_URL/${{ secrets.APP_NAME }}:$TAG .   &lt;/span&gt;
        &lt;span class="s"&gt;docker tag $ECR_REGISTRY_URL/${{ secrets.APP_NAME }}:$TAG $ECR_REGISTRY_URL/${{ secrets.APP_NAME }}:latest   &lt;/span&gt;
        &lt;span class="s"&gt;docker push $ECR_REGISTRY_URL/${{ secrets.APP_NAME }}:$TAG&lt;/span&gt;
        &lt;span class="s"&gt;docker push $ECR_REGISTRY_URL/${{ secrets.APP_NAME }}:latest&lt;/span&gt;

    &lt;span class="c1"&gt;# Step 5: SSH into the server, pull the latest image, and restart the container&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deploy to server&lt;/span&gt;
      &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;appleboy/ssh-action@v1.0.3&lt;/span&gt;
      &lt;span class="na"&gt;with&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SSH_HOST }}&lt;/span&gt;
        &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SSH_USER }}&lt;/span&gt;
        &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ secrets.SSH_PRIVATE_KEY }}&lt;/span&gt;
        &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;aws ecr get-login-password --region ${{ env.AWS_REGION }} | docker login --username AWS --password-stdin ${{ env.ECR_REGISTRY_URL }}&lt;/span&gt;
          &lt;span class="s"&gt;docker pull ${{ env.ECR_REGISTRY_URL }}/${{ secrets.APP_NAME }}:latest&lt;/span&gt;
          &lt;span class="s"&gt;docker stop ${{ secrets.APP_NAME }}&lt;/span&gt;
          &lt;span class="s"&gt;docker system prune -f&lt;/span&gt;
          &lt;span class="s"&gt;docker run --env-file .env --name ${{ secrets.APP_NAME }} --restart unless-stopped -d -p 80:${{ secrets.APP_PORT }}  ${{ env.ECR_REGISTRY_URL }}/${{ secrets.APP_NAME }}:latest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;strong&gt;&lt;em&gt;Explanation of Each Step&lt;/em&gt;&lt;/strong&gt;
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Checkout the Code&lt;/strong&gt;: This step uses the &lt;a href="https://github.com/actions/checkout" rel="noopener noreferrer"&gt;&lt;code&gt;actions/checkout&lt;/code&gt;&lt;/a&gt; action to clone the repository into the GitHub Actions runner. It ensures the latest code from the &lt;code&gt;staging&lt;/code&gt; branch is available for building.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Configure AWS Credentials&lt;/strong&gt;: Here, the &lt;a href="https://github.com/aws-actions/configure-aws-credentials" rel="noopener noreferrer"&gt;&lt;code&gt;aws-actions/configure-aws-credentials&lt;/code&gt;&lt;/a&gt; action is used to assume an IAM role in AWS that allows access to ECR. This configuration allows GitHub Actions to authenticate securely using GitHub's OIDC provider.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Login to Amazon ECR&lt;/strong&gt;: The &lt;a href="https://github.com/aws-actions/amazon-ecr-login" rel="noopener noreferrer"&gt;&lt;code&gt;aws-actions/amazon-ecr-login&lt;/code&gt;&lt;/a&gt; action logs into the ECR registry, allowing Docker commands to push and pull images from the ECR repository.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Build, Tag, and Push Docker Image to ECR&lt;/strong&gt;:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* The image is built using `docker build` and tagged with the short Git commit hash for versioning.

* The image is then tagged `latest` for easy access to the most recent build.

* Both the versioned tag and the `latest` tag are pushed to the ECR registry.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
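&lt;p&gt;To make the tagging concrete, here is how the final image reference is composed from the workflow's variables (all values below are placeholders, not a real registry or app name):&lt;/p&gt;

```shell
# Placeholder values standing in for the workflow's env vars and secrets
ECR_REGISTRY_URL="123456789012.dkr.ecr.us-east-1.amazonaws.com"  # ECR registries follow this pattern
APP_NAME="my-app"                                                # stands in for the APP_NAME secret
TAG="a1b2c3d"                                                    # short commit hash from git rev-parse

# The full reference that docker build/tag/push operate on
IMAGE="$ECR_REGISTRY_URL/$APP_NAME:$TAG"
echo "$IMAGE"   # 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:a1b2c3d
```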

&lt;ol start="5"&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deploy to Server&lt;/strong&gt;:&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* The `appleboy/ssh-action` action is used to SSH into the server.

* It logs in to the ECR registry, pulls the latest image, stops the running container, performs a system cleanup, and then runs the new image.

* The `--restart unless-stopped` flag ensures that the container automatically restarts if the server restarts.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;GitHub Secrets&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You need to configure the following secrets in your GitHub repository for this workflow to function correctly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;APP_NAME&lt;/code&gt;: The name of the application, used as the ECR repository name.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;SSH_HOST&lt;/code&gt;: The IP address or domain name of the server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;SSH_USER&lt;/code&gt;: The username for SSH access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;SSH_PRIVATE_KEY&lt;/code&gt;: The private key used for SSH authentication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;APP_PORT&lt;/code&gt;: The port on which the application should be exposed (e.g., &lt;code&gt;80&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
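&lt;p&gt;If you use the GitHub CLI, these secrets can be added from a terminal instead of the web UI (all values below are placeholders):&lt;/p&gt;

```shell
gh secret set APP_NAME --body "my-app"
gh secret set SSH_HOST --body "203.0.113.10"
gh secret set SSH_USER --body "deploy-user"
gh secret set SSH_PRIVATE_KEY < ~/.ssh/deploy_key
gh secret set APP_PORT --body "3000"
```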

&lt;p&gt;Once you push to the specified branch, this workflow will be triggered automatically. It will build the Docker image, push it to ECR, and deploy it to the server. You can monitor the progress and view the workflow's execution details in the Actions tab of the repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Pitfalls and Troubleshooting
&lt;/h2&gt;

&lt;p&gt;Here are some potential issues that may arise while setting up this workflow and how to troubleshoot them:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Access Denied When Pushing to ECR&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you encounter an access denied error while pushing images to ECR, confirm that the GitHub Actions workflow has the correct permissions and that the ECR repository policy is set to allow your IAM role or user. Also, verify that the workflow script correctly references the ECR repository URI.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SSH Connection Issues&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the SSH connection to the server fails, ensure that:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* The SSH private key added to the GitHub Secrets matches the public key on the server.

* The server's firewall settings allow inbound connections on the specified SSH port (usually port 22).

* The `deploy-user` has been correctly set up and has the required permissions to execute commands.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
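&lt;p&gt;A quick way to isolate SSH problems is to reproduce the connection manually with the same key the workflow uses (the key path and host below are placeholders):&lt;/p&gt;

```shell
# Verbose output (-v) shows which key is offered and why authentication fails
ssh -v -i ~/.ssh/deploy_key deploy-user@your-server-ip 'docker ps'
```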

&lt;ol start="3"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Docker Commands Failing on the Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If Docker commands fail on the server, ensure that:&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;* The &lt;code&gt;deploy-user&lt;/code&gt; is part of the Docker group to allow non-sudo Docker commands.

&lt;ul&gt;
&lt;li&gt;There are enough system resources (CPU, memory, disk space) for Docker operations, especially during the image pull or run stages.
&lt;/li&gt;
&lt;/ul&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;


&lt;p&gt;Automating Docker deployments using GitHub Actions and Amazon ECR streamlines the process of building, deploying, and managing containerized applications. By leveraging this workflow, you can separate the build process from the deployment, reduce the load on your server, and maintain a clean, versioned history of your application images in ECR.&lt;/p&gt;

&lt;p&gt;In this guide, we walked through setting up an Amazon ECR repository, configuring a server, and creating a GitHub Actions workflow that automatically builds and pushes Docker images to ECR, and then deploys them to the server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thank You For Reading
&lt;/h2&gt;

&lt;p&gt;You can follow me on &lt;a href="https://www.linkedin.com/in/alhazan-mubarak/" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt; and subscribe to my &lt;a href="https://www.youtube.com/@poly4" rel="noopener noreferrer"&gt;&lt;strong&gt;YouTube Channel&lt;/strong&gt;&lt;/a&gt;, where I share more valuable content. Also, let me know your thoughts in the comment section.&lt;/p&gt;

&lt;p&gt;Happy Deploying 🚀&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Seamlessly Migrating Frontend Deployment Across AWS Accounts</title>
      <dc:creator>Mubarak Alhazan</dc:creator>
      <pubDate>Thu, 03 Oct 2024 12:08:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/seamlessly-migrating-frontend-deployment-across-aws-accounts-54lo</link>
      <guid>https://dev.to/aws-builders/seamlessly-migrating-frontend-deployment-across-aws-accounts-54lo</guid>
      <description>&lt;h2&gt;
  
  
  How to Move S3 Buckets, CloudFront Distributions, and Route 53 Hosted Zones
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftt0vcy4o8xslbndjbz65.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftt0vcy4o8xslbndjbz65.gif" alt="migration gif" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;I recently worked on a project to migrate an entire application deployment from one AWS account to another. While migrating individual resources between accounts is common, I couldn't find a clear, straightforward guide on how to move a frontend deployment. This left me piecing together different AWS resources and best practices to ensure a smooth transition.&lt;/p&gt;

&lt;p&gt;The frontend we were working with is a React application deployed using a typical AWS architecture: an S3 bucket to host the static files, a CloudFront distribution placed in front to improve performance and security, and a Route 53 hosted zone to route traffic to the CloudFront distribution. Given how common this setup is in AWS, I wanted to share the step-by-step process of migrating this architecture across AWS accounts.&lt;/p&gt;

&lt;p&gt;In this article, we’ll walk through how to migrate the entire frontend architecture efficiently while ensuring minimal downtime and a smooth transition for our deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Pre-Migration Checklist&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before starting the migration, it's important to identify the key AWS resources involved and ensure we have everything prepared to make the transition seamless. Here's a checklist of what we'll need to migrate:&lt;/p&gt;

&lt;h3&gt;
  
  
  Resources To Migrate
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;S3 Buckets&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In our case, we have four S3 buckets, each holding the static assets for different application portals.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;CloudFront Distributions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each S3 bucket has an associated CloudFront distribution, improving performance and security. Like the S3 buckets, we will migrate four CloudFront distributions — one for each bucket.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;ACM Certificate&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We need an ACM certificate to enable SSL and secure our frontend applications. Since all the applications are under the same domain, we can create a single ACM certificate and use it across all CloudFront distributions. This will ensure that each application serves traffic over HTTPS.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Route 53 DNS Records&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For each subdomain, there is a corresponding DNS record in Route 53 that routes traffic to the correct CloudFront distribution. Additionally, we'll need to create DNS records to validate the ACM certificate.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this guide, we’ll walk through the process for one frontend app, but the same steps can be repeated for the remaining applications.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Access and Permissions&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;To migrate these resources without issues, you need access to both the &lt;strong&gt;source AWS account&lt;/strong&gt; (where the resources currently reside) and the &lt;strong&gt;target AWS account&lt;/strong&gt; (where the resources will be moved). Adequate permissions to create, modify, and delete the mentioned resources in both accounts will ensure a smooth migration. Here is an IAM &lt;a href="https://gist.github.com/poly4concept/ff7007b95ac7067f53af6cf552ad8d2a" rel="noopener noreferrer"&gt;policy&lt;/a&gt; for all the access required for this migration.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Migrating S3 Buckets&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The first step in the migration is moving the static assets from the S3 buckets in the source AWS account to new buckets in the target AWS account. Here's how we approach this migration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Creating a New Bucket in the Target Account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since S3 bucket names must be globally unique, you won’t be able to reuse the source bucket’s name while that bucket still exists, so choose a new, unique name for the target bucket.&lt;/p&gt;

&lt;p&gt;In addition, make sure that the new bucket is &lt;strong&gt;configured for static website hosting&lt;/strong&gt;, which is crucial for serving your React application. You can enable this feature via the AWS Console or the AWS CLI. You can refer to this &lt;a href="https://youtu.be/D2p2nwKvqHs?feature=shared&amp;amp;t=77" rel="noopener noreferrer"&gt;resource&lt;/a&gt; to learn more about setting up an S3 bucket for static website hosting.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Copying Static Assets to the Target Bucket&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the new bucket is created, the next step is to transfer the static assets (such as HTML, CSS, JS, and media files) from the source bucket to the target bucket. You can achieve this with the following method:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Method 1: Using AWS CLI&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;aws s3 &lt;span class="nb"&gt;sync &lt;/span&gt;s3://source-bucket-name s3://destination-bucket-name &lt;span class="nt"&gt;--acl&lt;/span&gt; bucket-owner-full-control
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;This command ensures that all files from the source bucket are copied to the target bucket, with appropriate access permissions (&lt;code&gt;bucket-owner-full-control&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Method 2: Manual Download and Upload&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For smaller sets of files, you can download the assets from the source bucket and upload them to the new bucket via the AWS Console. While this method is simple, it’s not ideal for large datasets or continuous deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Method 3: Using a CI/CD Pipeline to Deploy to the Target Bucket&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In our case, an existing CI/CD pipeline deployed our React frontend directly to the S3 bucket from GitHub. Instead of manually copying the files, we updated the pipeline to deploy to the new bucket in the target account. After switching the pipeline’s destination, we triggered a redeployment to ensure that the latest version of the application was in the new S3 bucket.&lt;/p&gt;

&lt;p&gt;This method is preferable because having a CI/CD pipeline allows for ongoing updates to the application. You can follow this &lt;a href="https://youtu.be/D2p2nwKvqHs?si=wKtglh4MVEVdS9Qd" rel="noopener noreferrer"&gt;resource&lt;/a&gt; to learn how to create a CodePipeline that deploys a React app to an S3 bucket.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
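
&lt;p&gt;The bucket-creation step above can also be sketched with the AWS CLI. This is a minimal, hedged example: the bucket name is a placeholder, and &lt;code&gt;us-east-1&lt;/code&gt; is assumed (other regions additionally require a &lt;code&gt;--create-bucket-configuration&lt;/code&gt; flag):&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create the new bucket in the target account (name must be globally unique)
aws s3api create-bucket --bucket destination-bucket-name --region us-east-1

# Enable static website hosting, serving index.html as the default document
aws s3 website s3://destination-bucket-name --index-document index.html --error-document index.html
&lt;/code&gt;&lt;/pre&gt;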

&lt;h2&gt;
  
  
  Migrating ACM Certificates
&lt;/h2&gt;

&lt;p&gt;One important aspect of migrating a frontend deployment is ensuring SSL certificates are set up to secure your applications. In AWS, these certificates are managed through the AWS Certificate Manager (ACM), but it's important to note that ACM certificates &lt;strong&gt;cannot be transferred&lt;/strong&gt; between AWS accounts. This means we need to create a new certificate in the target account.&lt;/p&gt;

&lt;p&gt;To start, navigate to the &lt;strong&gt;Certificate Manager&lt;/strong&gt; in the AWS Console of the target account, making sure you are in the &lt;strong&gt;us-east-1 (N. Virginia)&lt;/strong&gt; region, since CloudFront only accepts ACM certificates issued there. Select &lt;strong&gt;Request a certificate&lt;/strong&gt; and choose a &lt;strong&gt;public certificate&lt;/strong&gt;, as it will be used with CloudFront.&lt;/p&gt;

&lt;p&gt;As part of the request, you need to add subdomains the certificate will secure. These subdomains represent the frontend applications, which will later be served through CloudFront distributions. Here’s a visual representation of what that looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7uyrmy91mvdfe7dhxlyh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7uyrmy91mvdfe7dhxlyh.png" alt="acm creation" width="800" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After adding the necessary subdomains, you can leave the other settings at their defaults and proceed to request the certificate.&lt;/p&gt;

&lt;p&gt;At this point, the certificate request will be marked as "Validation in progress." Since we are using &lt;strong&gt;DNS validation&lt;/strong&gt; (the default), AWS will provide &lt;strong&gt;CNAME records&lt;/strong&gt; that need to be added to the DNS configuration of your domain. This step is crucial, as it verifies ownership of the subdomains. The pending validation status will appear like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7yol3rhxa616u240449.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7yol3rhxa616u240449.png" alt="validation status" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you’ll need to add the provided DNS validation records to the &lt;strong&gt;hosted zone&lt;/strong&gt; in the source account. This is required to complete the validation process. Take note that &lt;strong&gt;Route 53 automatically pre-fills the base domain&lt;/strong&gt; when adding the CNAME records, so you only need to enter the part before it.&lt;/p&gt;

&lt;p&gt;Once the DNS records are in place, ACM will automatically detect the validation, and the certificate’s status will change to &lt;strong&gt;Issued&lt;/strong&gt;. With the certificate validated, you’re now ready to attach it to your CloudFront distributions in the following steps.&lt;/p&gt;
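
&lt;p&gt;If you prefer the AWS CLI, the certificate request and the lookup of its validation records can be sketched as follows. The domain name and certificate ARN are placeholders; &lt;code&gt;--subject-alternative-names&lt;/code&gt; can be added for additional subdomains:&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;# Request a public certificate with DNS validation in us-east-1 (required for CloudFront)
aws acm request-certificate \
  --domain-name blog.poly4.dev \
  --validation-method DNS \
  --region us-east-1

# Look up the CNAME validation records to add to the source account's hosted zone
aws acm describe-certificate \
  --certificate-arn &lt;certificate-arn&gt; \
  --region us-east-1
&lt;/code&gt;&lt;/pre&gt;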

&lt;h2&gt;
  
  
  Migrating CloudFront Distributions
&lt;/h2&gt;

&lt;p&gt;Similar to ACM certificates, CloudFront distributions cannot be directly transferred between AWS accounts. As a result, we’ll need to recreate the distribution in the target account.&lt;/p&gt;

&lt;p&gt;To begin, navigate to the &lt;strong&gt;CloudFront&lt;/strong&gt; section in the AWS Console of the target account and select &lt;strong&gt;Create Distribution&lt;/strong&gt;. During the creation process, you’ll need to choose the &lt;strong&gt;origin domain&lt;/strong&gt;, which in this case is the S3 bucket containing the static assets for your application. Make sure you select the correct bucket and use the &lt;strong&gt;website endpoint&lt;/strong&gt; rather than the REST API endpoint, since the bucket is configured for static website hosting.&lt;/p&gt;

&lt;p&gt;Once the origin is set, adjust the other settings based on your specific requirements for caching, behavior, or access logging.&lt;/p&gt;

&lt;p&gt;In the &lt;strong&gt;Settings&lt;/strong&gt; section of the distribution setup, you’ll need to specify the &lt;strong&gt;alternate domain name (CNAME)&lt;/strong&gt;, which is the subdomain that you want to associate with the distribution (e.g. &lt;a href="http://poly4.dev" rel="noopener noreferrer"&gt;blog.poly4.dev&lt;/a&gt;). Additionally, you’ll need to attach the &lt;strong&gt;SSL certificate&lt;/strong&gt; that was created in the previous step. This ensures that the subdomain can securely communicate with the CloudFront distribution over HTTPS. Here's an example of what it looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffw7i40h81k1ldfgyihjh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffw7i40h81k1ldfgyihjh.png" alt="Cloudfront subdomains" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before finalizing the distribution creation, there’s an important step: &lt;strong&gt;Remove the subdomain from the existing distribution&lt;/strong&gt; in the source account. This is because a subdomain cannot be assigned to two CloudFront distributions at the same time. You may need to wait for the changes in the source distribution to deploy.&lt;/p&gt;

&lt;p&gt;Once you’ve created the new distribution, copy the &lt;strong&gt;distribution endpoint&lt;/strong&gt; URL, which looks something like &lt;a href="http://d1234abcdef.cloudfront.net" rel="noopener noreferrer"&gt;&lt;code&gt;d1234abcdef.cloudfront.net&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The final step is to update the DNS records in the &lt;strong&gt;Route 53 hosted zone&lt;/strong&gt; for the subdomain. Replace the old CloudFront distribution endpoint with the new one, so that traffic to the subdomain routes to the newly created CloudFront distribution.&lt;/p&gt;
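
&lt;p&gt;That DNS update can be scripted with a change batch along these lines. This is a hedged sketch: the hosted zone ID is a placeholder, and the record name and distribution endpoint reuse the example values from above (an alias A record pointing at the distribution works as well):&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;# Write the change batch that repoints the subdomain at the new distribution
cat &gt; change-batch.json &lt;&lt;'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "blog.poly4.dev",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "d1234abcdef.cloudfront.net" }]
    }
  }]
}
EOF

# Apply the change in the hosted zone that currently serves the domain
aws route53 change-resource-record-sets \
  --hosted-zone-id &lt;hosted-zone-id&gt; \
  --change-batch file://change-batch.json
&lt;/code&gt;&lt;/pre&gt;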

&lt;h2&gt;
  
  
  Migrating Route 53 Hosted Zones
&lt;/h2&gt;

&lt;p&gt;At this stage, resources in the target account — such as S3, CloudFront, and ACM certificates — serve traffic for the application. However, to minimize downtime, the DNS records in the &lt;strong&gt;source account&lt;/strong&gt; were updated first, ensuring that the traffic routing remained uninterrupted during the transition.&lt;/p&gt;

&lt;p&gt;The next step is to migrate the hosted zones to the target account to manage DNS records within the same account as other resources. Navigate to &lt;strong&gt;Route 53&lt;/strong&gt; in the AWS Console of the target account and select &lt;strong&gt;Create Hosted Zone&lt;/strong&gt;. Use the &lt;strong&gt;same domain name&lt;/strong&gt; as the one in the source account.&lt;/p&gt;

&lt;p&gt;After creating the new hosted zone, you need to manually add all the DNS records from the &lt;strong&gt;source account&lt;/strong&gt; to the new hosted zone in the &lt;strong&gt;target account&lt;/strong&gt;. Each record must match exactly, including the record types (A, CNAME, TXT, etc.), values, TTLs, and other configurations. If you have a large number of DNS records, manually recreating them can be time-consuming. Here is a &lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-migrating.html" rel="noopener noreferrer"&gt;resource&lt;/a&gt; that explains how to export and import DNS records between hosted zones using the AWS CLI.&lt;/p&gt;
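
&lt;p&gt;As a starting point for that export, the source zone’s records can be dumped with the AWS CLI (the zone ID is a placeholder). The output then needs to be reshaped into a change batch for the target zone, excluding the NS and SOA records that AWS creates automatically:&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;# Export every record from the source account's hosted zone as JSON
aws route53 list-resource-record-sets \
  --hosted-zone-id &lt;source-zone-id&gt; &gt; records.json
&lt;/code&gt;&lt;/pre&gt;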

&lt;p&gt;Once the records are added, compare the &lt;strong&gt;DNS records&lt;/strong&gt; between the source and target accounts to ensure everything matches.&lt;/p&gt;

&lt;p&gt;The final step is to update the &lt;strong&gt;name server records&lt;/strong&gt; at your domain registrar to point to the new hosted zone in the target account. The name servers are provided by AWS when you create the hosted zone and can be found in the &lt;strong&gt;Hosted Zone details&lt;/strong&gt; tab. This ensures that future traffic is routed through the new hosted zone in the target account.&lt;/p&gt;

&lt;h2&gt;
  
  
  Post-Migration
&lt;/h2&gt;

&lt;p&gt;After successfully migrating all resources and updating DNS records, it is essential to allow time for DNS propagation. It is recommended to wait at least 48 hours to ensure traffic has fully propagated to the new resources. During this time, monitor your application to verify that everything is functioning as expected.&lt;/p&gt;
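
&lt;p&gt;A couple of hedged spot checks can help during this window (the domain names are placeholders based on the earlier examples):&lt;/p&gt;

&lt;pre class="highlight shell"&gt;&lt;code&gt;# Verify the registrar now returns the target account's name servers
dig NS poly4.dev +short

# Verify the subdomain serves over HTTPS through the new distribution
curl -I https://blog.poly4.dev
&lt;/code&gt;&lt;/pre&gt;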

&lt;p&gt;After confirming that the migration is complete and the new resources are handling traffic smoothly, you can optionally &lt;strong&gt;delete&lt;/strong&gt; the resources in the source account. This ensures no residual costs or conflicts from the old infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Migrating a frontend deployment between AWS accounts can seem complex due to the number of individual services involved: S3, CloudFront, ACM, and Route 53. However, by breaking the process down step by step, it becomes much more manageable.&lt;/p&gt;

&lt;p&gt;In this guide, we walked through the necessary stages to ensure a smooth migration, from setting up new resources in the target account to updating DNS records and minimizing downtime.&lt;/p&gt;

&lt;h2&gt;
  
  
  Thank you for reading
&lt;/h2&gt;

&lt;p&gt;You can follow me on &lt;a href="https://www.linkedin.com/in/alhazan-mubarak/" rel="noopener noreferrer"&gt;&lt;strong&gt;LinkedIn&lt;/strong&gt;&lt;/a&gt; and subscribe to my &lt;a href="https://www.youtube.com/@poly4" rel="noopener noreferrer"&gt;&lt;strong&gt;YouTube channel&lt;/strong&gt;&lt;/a&gt;, where I share more valuable content. Also, let me know your thoughts in the comment section.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>frontend</category>
      <category>migration</category>
    </item>
    <item>
      <title>Efficiently Testing Asynchronous React Hooks with Vitest</title>
      <dc:creator>Mubarak Alhazan</dc:creator>
      <pubDate>Wed, 10 Apr 2024 20:59:20 +0000</pubDate>
      <link>https://dev.to/poly4/efficiently-testing-asynchronous-react-hooks-with-vitest-1hll</link>
      <guid>https://dev.to/poly4/efficiently-testing-asynchronous-react-hooks-with-vitest-1hll</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;While working on a fairly large frontend project, a seemingly minor change in the backend’s error message structure triggered a cascade of changes across numerous files on the frontend, making it a tedious task to rectify. Determined to spare myself and my team from such headaches in the future, I created a custom &lt;code&gt;useApi&lt;/code&gt; hook. This hook was tailored for handling API calls throughout the application, ensuring that any future changes to the data structure could be managed from a single file.&lt;/p&gt;

&lt;p&gt;Given the pivotal role of this hook across the codebase, writing robust tests for it became a necessity. In this article, we’ll walk through the process of efficiently testing asynchronous React hooks, drawing insights from my experience with this &lt;code&gt;useApi&lt;/code&gt; hook.&lt;/p&gt;

&lt;h2&gt;
  
  
  Dependencies
&lt;/h2&gt;

&lt;p&gt;We should have a React project set up and running. We can initialize the project with Vite using the command &lt;code&gt;npm create vite@latest&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To replicate this test, we need to install the following dependencies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://vitest.dev/guide/" rel="noopener noreferrer"&gt;Vitest&lt;/a&gt;: our main testing framework&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.npmjs.com/search?q=jsdom" rel="noopener noreferrer"&gt;JSDOM&lt;/a&gt;: DOM environment for running our tests&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://testing-library.com/docs/react-testing-library/intro/" rel="noopener noreferrer"&gt;React Testing Library&lt;/a&gt;: provides utilities to make testing easier&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://mswjs.io/docs" rel="noopener noreferrer"&gt;MSW&lt;/a&gt;: library to mock API calls for the tests.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To do so, we run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;npm&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;install&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-D&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;vitest&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;jsdom&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="nx"&gt;testing-library/react&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;msw&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="c"&gt;#OR&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;yarn&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;add&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-D&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;vitest&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;jsdom&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="nx"&gt;testing-library/react&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;msw&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;code&gt;vitest.config.js&lt;/code&gt; (or &lt;code&gt;vite.config.js&lt;/code&gt; for Vite projects), we add the following &lt;code&gt;test&lt;/code&gt; object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="nf"&gt;defineConfig&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="c1"&gt;//...&lt;/span&gt;
  &lt;span class="na"&gt;test&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;global&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;jsdom&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we can run our tests with &lt;code&gt;npx vitest&lt;/code&gt;.&lt;/p&gt;
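
&lt;p&gt;Optionally, a test script can be added to &lt;code&gt;package.json&lt;/code&gt; so the suite also runs via &lt;code&gt;npm test&lt;/code&gt; (a minimal sketch; script names are a matter of preference):&lt;/p&gt;

&lt;pre class="highlight json"&gt;&lt;code&gt;{
  "scripts": {
    "test": "vitest run",
    "test:watch": "vitest"
  }
}
&lt;/code&gt;&lt;/pre&gt;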

&lt;h2&gt;
  
  
  The &lt;code&gt;useApi&lt;/code&gt; Hook
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;axios&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;axios&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;useApi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setData&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;([]);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;loading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setError&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;callApi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;axios&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;headers&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;setData&lt;/span&gt;&lt;span class="p"&gt;([]);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="nf"&gt;setData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;setError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nf"&gt;setLoading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;};&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;loading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;callApi&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The hook is designed to handle API calls within the application and offers flexibility to adapt to various API scenarios. It accepts the following parameters: &lt;code&gt;url&lt;/code&gt;, &lt;code&gt;method&lt;/code&gt;, &lt;code&gt;body&lt;/code&gt;, and &lt;code&gt;headers&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The hook uses &lt;code&gt;axios&lt;/code&gt; to execute the API call to the specified URL, gracefully handling cases where optional parameters such as the request body or headers are not provided.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;useApi&lt;/code&gt; hook returns an array containing the fetched data, a boolean flag indicating the loading state, any encountered errors, and a function &lt;code&gt;callApi&lt;/code&gt; to initiate the API request.&lt;/p&gt;

&lt;p&gt;A sample usage of this hook to make a &lt;code&gt;POST&lt;/code&gt; request will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;createResponse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;creating&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;createError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;createRecord&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useApi&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://jsonplaceholder.typicode.com/posts&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;POST&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, &lt;code&gt;createResponse&lt;/code&gt; holds the response data from the API call, &lt;code&gt;creating&lt;/code&gt; indicates whether the request is currently in progress, &lt;code&gt;createError&lt;/code&gt; captures any errors encountered during the request, and &lt;code&gt;createRecord&lt;/code&gt; is the function to initiate the API call.&lt;/p&gt;

&lt;p&gt;By encapsulating API logic within the reusable &lt;code&gt;useApi&lt;/code&gt; hook, we can enhance code maintainability, improve readability, and ensure consistent handling of asynchronous operations throughout our application.&lt;/p&gt;

&lt;p&gt;Now let's test 🪄:&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing the Hook
&lt;/h2&gt;

&lt;p&gt;We will test the scenario in which &lt;code&gt;useApi&lt;/code&gt; makes a successful &lt;code&gt;GET&lt;/code&gt; request.&lt;/p&gt;

&lt;h3&gt;
  
  
  Default Value Test
&lt;/h3&gt;

&lt;p&gt;We start by testing the default return value of the &lt;code&gt;useApi&lt;/code&gt; hook using &lt;code&gt;renderHook&lt;/code&gt; from &lt;code&gt;@testing-library/react&lt;/code&gt;. &lt;a href="https://testing-library.com/docs/react-testing-library/api/#renderhook" rel="noopener noreferrer"&gt;&lt;code&gt;renderHook&lt;/code&gt;&lt;/a&gt; returns an object containing a &lt;code&gt;result&lt;/code&gt; property, through which we can access the hook’s return value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;renderHook&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@testing-library/react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useApi&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./useApi&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;useApi&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;should fetch data on callApi for GET request&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;renderHook&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
          &lt;span class="nf"&gt;useApi&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://api.example.com/items&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GET&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Data should be null initially&lt;/span&gt;
        &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Loading state should be false initially&lt;/span&gt;
        &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Error should be an empty string initially&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Testing the &lt;/strong&gt;&lt;code&gt;callApi&lt;/code&gt;&lt;strong&gt; Function&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Next, we test the behaviour when the &lt;code&gt;callApi&lt;/code&gt; function is invoked. We use &lt;code&gt;act&lt;/code&gt; to simulate the function call and &lt;code&gt;waitFor&lt;/code&gt; to await its asynchronous result. Here's the test case:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;act&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;renderHook&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;waitFor&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@testing-library/react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useApi&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./useApi&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mockData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;item&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Leanne Graham&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;item&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Ervin Howell&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;useApi&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;should fetch data on callApi for GET request&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;renderHook&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
          &lt;span class="nf"&gt;useApi&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://api.example.com/items&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;GET&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

        &lt;span class="nf"&gt;act&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;]();&lt;/span&gt;&lt;span class="c1"&gt;// Invoke callApi function&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Loading state should be true during API call&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;waitFor&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="c1"&gt;// Wait for API call to complete&lt;/span&gt;
          &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Loading state should be false after API call&lt;/span&gt;
          &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mockData&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Data should match mock data&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the code above, we used &lt;code&gt;waitFor&lt;/code&gt; to await the result of the asynchronous &lt;code&gt;callApi&lt;/code&gt; function. &lt;code&gt;waitFor&lt;/code&gt; accepts a callback and returns a Promise that resolves once the callback runs without throwing; until then it keeps retrying the callback, and it rejects if a timeout elapses first.&lt;/p&gt;
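
&lt;p&gt;Under the hood, &lt;code&gt;waitFor&lt;/code&gt; essentially polls the callback until the assertions stop throwing, or gives up after a timeout. A minimal sketch of that behaviour (a hypothetical re-implementation for illustration only; the real &lt;code&gt;waitFor&lt;/code&gt; does more, such as integrating with &lt;code&gt;act&lt;/code&gt; and fake timers):&lt;/p&gt;

```javascript
// Simplified model of what @testing-library/react's waitFor does:
// keep re-running the callback until it stops throwing, or give up
// once a timeout elapses. (Hypothetical sketch, not the real code.)
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function miniWaitFor(callback, { timeout = 1000, interval = 50 } = {}) {
  const start = Date.now();
  while (true) {
    try {
      return callback(); // resolves as soon as the assertions pass
    } catch (error) {
      if (Date.now() - start > timeout) throw error; // give up after timeout
      await sleep(interval); // otherwise retry shortly
    }
  }
}

// Demo: "loading" flips to false asynchronously, like an API call finishing
let loading = true;
setTimeout(() => { loading = false; }, 120);

miniWaitFor(() => {
  if (loading) throw new Error("still loading");
  return "done";
}).then((value) => console.log(value)); // prints "done"
```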

&lt;h3&gt;
  
  
  Mocking the API Call
&lt;/h3&gt;

&lt;p&gt;In the above test scenarios, we directly called the API, which is not ideal for unit tests as it can lead to dependencies on external services and unpredictable test outcomes. Instead, we should mock the API call to isolate the behaviour of the hook and ensure reliable and consistent testing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Using MSW for API Mocking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To mock the &lt;code&gt;GET&lt;/code&gt; request, we'll define a mock handler using MSW like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;afterAll&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;afterEach&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;beforeAll&lt;/span&gt; &lt;span class="cm"&gt;/**... */&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;vitest&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;HttpResponse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;msw&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;setupServer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;msw/node&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mockData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;item&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Leanne Graham&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;item&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Ervin Howell&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handlers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://api.example.com/items&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;HttpResponse&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mockData&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;

&lt;span class="c1"&gt;// Set up the mock server with the defined handlers&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;setupServer&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nx"&gt;handlers&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;useApi&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Start the mock server before running the tests&lt;/span&gt;
    &lt;span class="nf"&gt;beforeAll&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;onUnhandledRequest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
    &lt;span class="c1"&gt;// Reset mock server handlers after each test&lt;/span&gt;
    &lt;span class="nf"&gt;afterEach&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resetHandlers&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="c1"&gt;// Close the mock server after all tests have run&lt;/span&gt;
    &lt;span class="nf"&gt;afterAll&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

    &lt;span class="cm"&gt;/** Tests will go here **/&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the above code:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;We define a mock handler using &lt;code&gt;http.get()&lt;/code&gt; from MSW. This handler intercepts GET requests to the given URL and responds with &lt;code&gt;mockData&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We then set up a mock server using &lt;code&gt;setupServer()&lt;/code&gt; from MSW and pass the defined handlers to it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Just before running the tests, we start the mock server (&lt;code&gt;server.listen()&lt;/code&gt;), ensuring that it intercepts requests during test execution.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After each test, we reset the mock server handlers to ensure a clean state for the next test.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, after all tests have run, we close the mock server to clean up resources (&lt;code&gt;server.close()&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
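
&lt;p&gt;The lifecycle described in the points above can be modelled in a few lines of plain JavaScript (a hypothetical toy model, not MSW's actual implementation): overrides added at runtime win over the baseline handlers, and resetting discards them so every test starts from a clean state.&lt;/p&gt;

```javascript
// Toy model of MSW's handler lifecycle: setupServer keeps the initial
// handlers, use() prepends one-off overrides, and resetHandlers()
// drops the overrides so each test starts clean.
// (Hypothetical sketch for illustration, not MSW's real code.)
function makeServer(...initialHandlers) {
  let handlers = [...initialHandlers];
  return {
    use(...overrides) {
      handlers = [...overrides, ...handlers]; // overrides are checked first
    },
    resetHandlers() {
      handlers = [...initialHandlers]; // back to the baseline set
    },
    resolve(method, url) {
      const match = handlers.find((h) => h.key === method + " " + url);
      return match ? match.respond() : undefined;
    },
  };
}

const get = (url, respond) => ({ key: "GET " + url, respond });

const server = makeServer(get("https://api.example.com/items", () => 200));

// A test-specific override, like calling server.use(http.get(...)) in MSW
server.use(get("https://api.example.com/items", () => 500));
console.log(server.resolve("GET", "https://api.example.com/items")); // 500

// afterEach(() => server.resetHandlers()) restores the baseline handler
server.resetHandlers();
console.log(server.resolve("GET", "https://api.example.com/items")); // 200
```

&lt;p&gt;In real MSW code, the equivalent pattern is calling &lt;code&gt;server.use(http.get(url, resolver))&lt;/code&gt; inside a single test to simulate, say, a failure response, relying on the &lt;code&gt;afterEach(() =&amp;gt; server.resetHandlers())&lt;/code&gt; hook to undo the override before the next test.&lt;/p&gt;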

&lt;p&gt;And that's it: we can now test the hook against the mock server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other Scenarios
&lt;/h3&gt;

&lt;p&gt;Using a similar structure, we can write tests for the other &lt;code&gt;useApi&lt;/code&gt; scenarios. Let's consider a case where a POST request fails, for example due to an authentication error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;afterAll&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;afterEach&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;beforeAll&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;test&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;vitest&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;HttpResponse&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;http&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;msw&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;setupServer&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;msw/node&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;act&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;renderHook&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;waitFor&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@testing-library/react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useApi&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./useApi&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;handlers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
  &lt;span class="c1"&gt;//other handlers...&lt;/span&gt;
  &lt;span class="nx"&gt;http&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://api.example.com/login&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;request&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;has&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;cookie&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;HttpResponse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;401&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}),&lt;/span&gt;
&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;setupServer&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nx"&gt;handlers&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;useApi&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;beforeAll&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;onUnhandledRequest&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;}));&lt;/span&gt;
    &lt;span class="nf"&gt;afterEach&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resetHandlers&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="nf"&gt;afterAll&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;

   &lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;post request error&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;test&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;renderHook&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;
      &lt;span class="nf"&gt;useApi&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://api.example.com/login&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;post&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;""&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nf"&gt;act&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;]();&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;waitFor&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toBe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="c1"&gt;// Expect error message&lt;/span&gt;
      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Request failed with status code 401&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nf"&gt;expect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]).&lt;/span&gt;&lt;span class="nf"&gt;toEqual&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we learnt how to test asynchronous React hooks using React Testing Library and Vitest. We also learnt how to mock API requests using MSW.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Thank you for reading&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;I appreciate the time we spent together. I hope this content will be more than just text. Follow me on &lt;a href="https://www.linkedin.com/in/alhazan-mubarak/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; and subscribe to my &lt;a href="https://www.youtube.com/@poly4" rel="noopener noreferrer"&gt;YouTube Channel&lt;/a&gt;, where I plan to share more valuable content. Also, let me know your thoughts in the comment section.&lt;/p&gt;

</description>
      <category>react</category>
      <category>testing</category>
      <category>frontend</category>
    </item>
  </channel>
</rss>
