<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Adrien Mornet</title>
    <description>The latest articles on DEV Community by Adrien Mornet (@adrienmornet).</description>
    <link>https://dev.to/adrienmornet</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1038321%2F77ba1f4e-6bf6-4ddc-97b1-dd2c9761aead.jpg</url>
      <title>DEV Community: Adrien Mornet</title>
      <link>https://dev.to/adrienmornet</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/adrienmornet"/>
    <language>en</language>
    <item>
      <title>More secure Python Docker images with Amazon Linux 🔐</title>
      <dc:creator>Adrien Mornet</dc:creator>
      <pubDate>Tue, 21 May 2024 14:34:39 +0000</pubDate>
      <link>https://dev.to/aws-builders/more-secure-python-docker-images-with-amazon-linux-46f9</link>
      <guid>https://dev.to/aws-builders/more-secure-python-docker-images-with-amazon-linux-46f9</guid>
      <description>&lt;p&gt;Amazon Linux is a Linux distribution provided by AWS specifically optimized for running workloads on AWS Cloud. This distribution, entirely managed by Amazon teams, offers very high standards in terms of security. In this article I’ll explain why I prefer a Docker image based on Amazon Linux rather than Debian to run a python workload in the cloud.&lt;/p&gt;

&lt;h1&gt;
  
  
  Debian-based Python Docker image
&lt;/h1&gt;

&lt;p&gt;Let’s say you want to create a Docker image to run Python code in version &lt;code&gt;3.9&lt;/code&gt;. Your first choice will probably be to start with the &lt;code&gt;python:3.9&lt;/code&gt; image from Docker Hub, right? I would have done the same; it’s the easiest way.&lt;/p&gt;

&lt;p&gt;Most of the images on Docker Hub are based on Debian, and this is the case for the &lt;code&gt;python&lt;/code&gt; image.&lt;/p&gt;

&lt;p&gt;Your Dockerfile would probably look like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.9  &lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app  &lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt ./  &lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="se"&gt;\-&lt;/span&gt;&lt;span class="nt"&gt;-no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt  

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .  &lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; \[ "python", "./your-daemon-or-script.py" \]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Let’s build this Docker image and push it to an ECR registry with “Scan on push” enabled. The “Scan on push” feature runs a security scan on your Docker image and reports any known CVEs found in it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2i0zi7nch89wvhcwa5e7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2i0zi7nch89wvhcwa5e7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the result of the security scan:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fg8eibnzo876s7iimw8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5fg8eibnzo876s7iimw8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s a lot for a fresh image containing only Debian libraries and a bit of Python code, isn’t it?&lt;/p&gt;

&lt;h1&gt;
  
  
  Amazon Linux 2023-based Docker image
&lt;/h1&gt;

&lt;p&gt;Now let’s create a Python Docker image based on &lt;code&gt;amazonlinux:2023&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;

&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; amazonlinux:2023  &lt;/span&gt;

&lt;span class="k"&gt;ENV&lt;/span&gt;&lt;span class="s"&gt; PYTHON\_VERSION\=3.9  &lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;&lt;span class="se"&gt;\-&lt;/span&gt;&lt;span class="nt"&gt;-mount&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;cache,target&lt;span class="o"&gt;=&lt;/span&gt;/var/cache/dnf &lt;span class="se"&gt;\\&lt;/span&gt;  
    dnf \-y update &amp;amp;&amp;amp; \\  
    dnf install \-y python${PYTHON\_VERSION} python${PYTHON\_VERSION}\-pip shadow\-utils git\-all findutils awscli tar &amp;amp;&amp;amp; \\  
    update\-alternatives \--install /usr/bin/python python /usr/bin/python${PYTHON\_VERSION} 20 &amp;amp;&amp;amp; \\  
    update\-alternatives \--set python /usr/bin/python${PYTHON\_VERSION} &amp;amp;&amp;amp; \\  
    dnf clean all  

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /usr/src/app  &lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt ./  &lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="se"&gt;\-&lt;/span&gt;&lt;span class="nt"&gt;-no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt  

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .  &lt;/span&gt;

&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; \[ "python", "./your-daemon-or-script.py" \]&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here is the result of its security scan:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwxqriblgkrq0a06tugw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpwxqriblgkrq0a06tugw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Much better!&lt;/p&gt;

&lt;h1&gt;
  
  
  Why are there fewer CVEs in the Amazon Linux image?
&lt;/h1&gt;

&lt;p&gt;The differences in how Debian and Amazon Linux are developed and maintained help explain why Debian-based Docker images tend to be patched less quickly and therefore carry more unpatched CVEs.&lt;/p&gt;

&lt;p&gt;Debian is a community-driven distribution. Security updates for Debian are generally reliable, but the frequency and speed at which they are released can vary because the project relies heavily on volunteer contributions.&lt;/p&gt;

&lt;p&gt;The Python image on Docker Hub is also maintained by a community, which means that two different communities have to act before a CVE is patched: the Debian community and the Python Docker image community.&lt;/p&gt;

&lt;p&gt;Amazon Linux is a distribution maintained by AWS. Amazon has a dedicated team that prioritizes security updates and patches, often releasing them quickly to ensure that their customers’ systems remain secure.&lt;/p&gt;

&lt;p&gt;Amazon’s centralized model and commercially driven approach to maintaining Amazon Linux ensure more consistent and rapid security updates, which is why I think it’s better to use Docker images based on Amazon Linux instead of Debian.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;If you liked this post, you can find more on my blog&lt;/em&gt;&lt;/strong&gt; &lt;a href="https://adrien-mornet.tech/" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;https://adrien-mornet.tech/&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;&lt;em&gt;🚀&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>docker</category>
      <category>cloud</category>
      <category>devops</category>
    </item>
    <item>
      <title>Managing database migrations in ArgoCD 🐙</title>
      <dc:creator>Adrien Mornet</dc:creator>
      <pubDate>Sun, 10 Mar 2024 15:58:01 +0000</pubDate>
      <link>https://dev.to/adrienmornet/managing-database-migrations-in-argocd-2mj2</link>
      <guid>https://dev.to/adrienmornet/managing-database-migrations-in-argocd-2mj2</guid>
      <description>&lt;p&gt;ArgoCD is a fantastic tool for doing GitOps in Kubernetes. Most of modern backend frameworks comes with a system that allows to version control database schema definition which is named migrations (e.g. &lt;a href="https://laravel.com/docs/master/migrations"&gt;Laravel&lt;/a&gt;, &lt;a href="https://docs.djangoproject.com/en/5.0/topics/migrations/"&gt;Django&lt;/a&gt; …). This system is very powerful because it allows the code for a new feature to be versioned together with the database schema required for the feature to function correctly. This can be used, for example, to create a table, add a column, etc. But how to handle these migrations in ArgoCD ? That’s what we’ll be looking at in this article.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a roll-out looks like
&lt;/h2&gt;

&lt;p&gt;In a Kubernetes cluster whose deployments are automated by ArgoCD, Argo is responsible for synchronising the desired state of the cluster, described in Kubernetes manifests, with its actual state. The desired state is stored in Git and contains the exact version of the Docker image (via its SHA or tag) that runs your application. When you want to deploy a new version of the Docker image, you push the new image version to the manifest. The flow of the roll-out looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nabeg6jv5hr1e36daf4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4nabeg6jv5hr1e36daf4.png" alt="Image description" width="800" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ArgoCD Resource Hooks
&lt;/h2&gt;

&lt;p&gt;ArgoCD comes with a hook mechanism that enables actions to be taken during synchronisation, i.e. when it detects a difference between the state described in the Kubernetes manifests stored in Git and the actual state of the cluster. These actions are called &lt;a href="https://argo-cd.readthedocs.io/en/stable/user-guide/resource_hooks/"&gt;Resource Hooks&lt;/a&gt; and typically take the form of Kubernetes Jobs.&lt;/p&gt;

&lt;p&gt;As we want our migrations to be executed just before the new version of the code is deployed, we will use the PreSync hook to run a job just before the synchronisation operation. Our new deployment flow will look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funbx192rkipoc4iw7ttb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funbx192rkipoc4iw7ttb.png" alt="Image description" width="800" height="112"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The implementation
&lt;/h2&gt;

&lt;p&gt;Here is an example of a Kubernetes Job that subscribes to the ArgoCD PreSync hook and runs Laravel migrations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="nn"&gt;---&lt;/span&gt;
&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;batch/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Job&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;generateName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-app-migration-&lt;/span&gt;
  &lt;span class="na"&gt;annotations&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;argocd.argoproj.io/hook&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;PreSync"&lt;/span&gt;
    &lt;span class="na"&gt;argocd.argoproj.io/hook-delete-policy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BeforeHookCreation"&lt;/span&gt;
    &lt;span class="na"&gt;argocd.argoproj.io/job-cleanup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;keep"&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-laravel-app&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my-laravel-docker-image:new-tag&lt;/span&gt;
        &lt;span class="na"&gt;command&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;php"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;artisan"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;migrate"&lt;/span&gt;
      &lt;span class="na"&gt;restartPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Never&lt;/span&gt;
  &lt;span class="na"&gt;backoffLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Kubernetes Job will be launched when ArgoCD detects a change in the manifest. ArgoCD will wait for the Job to complete successfully (exit code 0) before syncing the rest of your application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, ArgoCD’s Resource Hooks, especially the PreSync hook, streamline the execution of database migrations before deploying new code versions, ensuring seamless synchronization between application code and database schema. This approach enhances version control and consistency, offering a solid solution for Kubernetes roll-outs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;If you liked this post, you can find more on my blog&lt;/em&gt;&lt;/strong&gt; &lt;a href="https://adrien-mornet.tech/"&gt;&lt;strong&gt;&lt;em&gt;https://adrien-mornet.tech/&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;&lt;em&gt;🚀&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>argocd</category>
    </item>
    <item>
      <title>Effortless &amp; Serverless File Uploading: Unleashing the Power of AWS S3 presigned urls with Lambda 📂</title>
      <dc:creator>Adrien Mornet</dc:creator>
      <pubDate>Sun, 04 Feb 2024 18:57:19 +0000</pubDate>
      <link>https://dev.to/aws-builders/effortless-serverless-file-uploading-unleashing-the-power-of-aws-s3-presigned-urls-with-lambda-5704</link>
      <guid>https://dev.to/aws-builders/effortless-serverless-file-uploading-unleashing-the-power-of-aws-s3-presigned-urls-with-lambda-5704</guid>
      <description>&lt;p&gt;Today I’m going to present how you can build a robust file upload system through AWS S3 using an HTTP(S) endpoint and S3 presigned urls. We are going to use a Lambda function with a function URL to expose the HTTP(S) endpoint.&lt;/p&gt;

&lt;p&gt;The idea is to provide a fixed URL to our users so they can upload files to it. This fixed URL (or HTTP(S) endpoint) will be a &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html" rel="noopener noreferrer"&gt;Lambda function URL&lt;/a&gt;. When the Lambda is invoked, the function will generate an &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html" rel="noopener noreferrer"&gt;AWS S3 presigned URL&lt;/a&gt; and transparently redirect the user’s request to it. We need to use an HTTP 307 status code to make the redirect work with the HTTP PUT method, because this specific code &lt;em&gt;“guarantees that the method and the body will not be changed when the redirected request is made”&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tsflu4smffhejc3ff42.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tsflu4smffhejc3ff42.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With all this in place, our user will be able to upload a file with a simple curl command: &lt;code&gt;curl -LX PUT -T "/path/to/file" "LAMBDA_FUNCTION_URL"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;To do all this, we’ll need to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; create an S3 bucket&lt;/li&gt;
&lt;li&gt; create a Lambda Function with an HTTP(S) endpoint&lt;/li&gt;
&lt;li&gt; grant S3 Access to the Lambda function&lt;/li&gt;
&lt;li&gt; make the Lambda create the presigned URL&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Create an AWS S3 bucket to save files
&lt;/h1&gt;

&lt;p&gt;Let’s create an S3 bucket named &lt;code&gt;my-files-uploaded-with-lambda-and-presigned-urls&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64v0j1ywbjr07nmlp6cr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64v0j1ywbjr07nmlp6cr.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Create an AWS Lambda Function with an HTTP(S) endpoint
&lt;/h1&gt;

&lt;p&gt;Let’s create a Python AWS Lambda function called &lt;code&gt;file-uploads&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3p2c8botk9c2dsys89f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3p2c8botk9c2dsys89f.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;and enable a function URL without authentication:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvn83vohyar48gewhwy6i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvn83vohyar48gewhwy6i.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Our Lambda is now created and available at its function URL:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwp0k1k78eknvgkj2qy21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwp0k1k78eknvgkj2qy21.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Grant S3 Access to the Lambda function
&lt;/h1&gt;

&lt;p&gt;Since “&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html" rel="noopener noreferrer"&gt;&lt;em&gt;a presigned URL is limited by the permissions of the user who creates it&lt;/em&gt;&lt;/a&gt;”, we have to allow our Lambda to put files in the bucket. Let’s edit the policy of the role assumed by the Lambda (the role can be found under Configuration -&amp;gt; Permissions -&amp;gt; Execution role):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"logs:CreateLogGroup"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:logs:us-east-1:ACCOUNT_ID:*"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"logs:CreateLogStream"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"logs:PutLogEvents"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:logs:us-east-1:ACCOUNT_ID:log-group:/aws/lambda/file-uploads:*"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"s3:PutObject"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::my-files-uploaded-with-lambda-and-presigned-urls"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::my-files-uploaded-with-lambda-and-presigned-urls/*"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
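&lt;p&gt;Before attaching a policy like this, it can be handy to sanity-check it offline. The sketch below (plain Python, no AWS call; the helper name is mine) parses the JSON and verifies that an Allow statement grants &lt;code&gt;s3:PutObject&lt;/code&gt; on the bucket’s objects:&lt;/p&gt;

```python
import json

def allows_put_object(policy, bucket):
    # Return True if some Allow statement grants s3:PutObject
    # on the objects of the given bucket.
    objects_arn = "arn:aws:s3:::" + bucket + "/*"
    for stmt in policy["Statement"]:
        actions = stmt["Action"]
        if isinstance(actions, str):
            actions = [actions]
        resources = stmt.get("Resource", [])
        if isinstance(resources, str):
            resources = [resources]
        if stmt["Effect"] == "Allow" and "s3:PutObject" in actions and objects_arn in resources:
            return True
    return False

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::my-files-uploaded-with-lambda-and-presigned-urls",
        "arn:aws:s3:::my-files-uploaded-with-lambda-and-presigned-urls/*"
      ]
    }
  ]
}
""")
print(allows_put_object(policy, "my-files-uploaded-with-lambda-and-presigned-urls"))
```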



&lt;h1&gt;
  
  
  Make the Lambda create the presigned URL
&lt;/h1&gt;

&lt;p&gt;Here is the code I used to generate the presigned URL and redirect the client to it with an HTTP 307:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="c1"&gt;# Generate a presigned URL for the S3 object
&lt;/span&gt;    &lt;span class="n"&gt;s3_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s3&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;presigned_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;s3_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate_presigned_url&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;put_object&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Bucket&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;my-files-uploaded-with-lambda-and-presigned-urls&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Key&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;my_upload.txt&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Expires&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="n"&gt;HttpMethod&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;put&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Build the redirect response
&lt;/span&gt;    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;307&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# HTTP status code for PUT redirect
&lt;/span&gt;        &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;headers&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Location&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;presigned_url&lt;/span&gt;  &lt;span class="c1"&gt;# Set the Location header to the presigned URL
&lt;/span&gt;        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="c1"&gt;# Return the response to trigger the redirect
&lt;/span&gt;    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
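&lt;p&gt;For intuition about what the client is redirected to: a presigned URL is simply the object’s URL plus signed query parameters. The following sketch assembles a URL with SigV4-style parameter names (the values here are fake placeholders, not a real signature; only boto3 produces a valid one) to show where the 3600-second expiry ends up:&lt;/p&gt;

```python
from urllib.parse import urlencode

# Fake SigV4-shaped parameters; a real presigned URL is produced by boto3.
params = {
    "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
    "X-Amz-Date": "20240204T185719Z",
    "X-Amz-Expires": "3600",        # the 3600-second validity window
    "X-Amz-SignedHeaders": "host",
    "X-Amz-Signature": "deadbeef",  # placeholder, not a valid signature
}
base = "https://my-files-uploaded-with-lambda-and-presigned-urls.s3.amazonaws.com/my_upload.txt"
print(base + "?" + urlencode(params))
```

&lt;p&gt;Anyone holding this URL can PUT the object until the expiry, which is why the Lambda can hand it out to an unauthenticated client.&lt;/p&gt;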



&lt;h1&gt;
  
  
  Upload a file
&lt;/h1&gt;

&lt;p&gt;Let’s test uploading a file with a simple curl request: &lt;code&gt;curl -LX PUT -T "/path/to/file" "LAMBDA_FUNCTION_URL"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddcsoyoax6exoopy63sp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fddcsoyoax6exoopy63sp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see the file in the bucket:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbji850l5mcnbgb7f7vxd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbji850l5mcnbgb7f7vxd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well done 🥳&lt;/p&gt;

&lt;h1&gt;
  
  
  To go further
&lt;/h1&gt;

&lt;p&gt;This article gives a simple overview of what can be done with Lambda and S3 presigned URLs. Of course, we could also add an authentication layer with Cognito and/or API Gateway to prevent just anyone from uploading to the URL. You could also expose the endpoint on your own custom domain name with API Gateway. The naming of the uploaded file could also be more elaborate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;If you liked this post, you can find more on my blog&lt;/em&gt;&lt;/strong&gt; &lt;a href="https://adrien-mornet.tech/" rel="noopener noreferrer"&gt;&lt;strong&gt;&lt;em&gt;https://adrien-mornet.tech/&lt;/em&gt;&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;&lt;em&gt;🚀&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>lambda</category>
      <category>s3</category>
    </item>
    <item>
      <title>Say Goodbye to Ugly Error Pages with Cloudfront 👋</title>
      <dc:creator>Adrien Mornet</dc:creator>
      <pubDate>Sun, 05 Nov 2023 18:25:37 +0000</pubDate>
      <link>https://dev.to/aws-builders/say-goodbye-to-ugly-error-pages-with-cloudfront-1m0a</link>
      <guid>https://dev.to/aws-builders/say-goodbye-to-ugly-error-pages-with-cloudfront-1m0a</guid>
      <description>&lt;p&gt;Who has never found themselves in the middle of surfing the internet, only to be faced with an ugly error page? This kind of page is the default error page of web servers like Apache or Nginx.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faouvkh26653n7od4xfct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faouvkh26653n7od4xfct.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;More recently, you may also come across the Cloudfront default errors :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxzj8vsnm5cw9mh7c1qd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuxzj8vsnm5cw9mh7c1qd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Nothing could be worse in terms of user experience, could it? These generic error pages can be frustrating, uninformative, and can make users question the reliability of a website. In the world of DevOps, where continuous deployment and high availability are key objectives, the last thing you want is to present your users with a subpar experience during moments of downtime or unexpected errors.&lt;/p&gt;

&lt;p&gt;In a modern website architecture using a Content Delivery Network, the CDN is the last component capable of catching underlying errors and saving the day :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9xsp2h2kk5izc624z3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9xsp2h2kk5izc624z3k.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's configure AWS Cloudfront to catch errors from underlying components (and even an error in Cloudfront itself)!&lt;/p&gt;

&lt;p&gt;We're going to do this in just a few steps :&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Create a pretty HTML error page&lt;/li&gt;
&lt;li&gt; Deploy the error page with AWS S3 static website hosting&lt;/li&gt;
&lt;li&gt; Add S3 static website as Cloudfront origin&lt;/li&gt;
&lt;li&gt; Set up Cloudfront error pages&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Create a pretty HTML error page
&lt;/h2&gt;

&lt;p&gt;Let's ask ChatGPT to create a nice error page for us 😏&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zg7gh166joo63nnvbbo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7zg7gh166joo63nnvbbo.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deploy the error page with AWS S3 static website hosting
&lt;/h2&gt;

&lt;p&gt;Amazon S3, Amazon Web Services' object storage service, offers a really simple way to deploy static HTML pages, which is exactly what we need. To keep things simple, we're going to do it via click ops in the AWS console.&lt;/p&gt;

&lt;p&gt;Create an S3 bucket with public access :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rpiooill42gbsw58iwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3rpiooill42gbsw58iwg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsxmukycj3xxtpbe1tsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgsxmukycj3xxtpbe1tsg.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to your bucket, click &lt;em&gt;upload&lt;/em&gt; button and upload your index.html file :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jjz9oj1bokfwh33ssno.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8jjz9oj1bokfwh33ssno.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;em&gt;properties &amp;gt; Static website&lt;/em&gt; hosting and enable static website hosting for your bucket :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6b4qrh3khznt73y474s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6b4qrh3khznt73y474s.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will now have an HTTP endpoint to access your website :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zyuma2pzrcxws2uvj5s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zyuma2pzrcxws2uvj5s.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;em&gt;permissions &amp;gt; Bucket policy&lt;/em&gt; and allow everyone to get your files :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpdpk5nivsscuy6u7ls4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzpdpk5nivsscuy6u7ls4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
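The policy in the screenshot boils down to a single public-read statement like the following (a minimal example; replace the bucket name with yours):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::demo-error-page/*"
        }
    ]
}
```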

&lt;p&gt;Now you should be able to access your index.html with the static website URL of your S3 bucket (demo-error-page.s3-website-us-east-1.amazonaws.com in my case) :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xk1zkqa409yib38ouxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8xk1zkqa409yib38ouxv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Add S3 static website as Cloudfront origin
&lt;/h2&gt;

&lt;p&gt;I'm going to assume here that you already have a working Cloudfront distribution with at least one origin :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrzv2uuhtrnaxhicb82f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffrzv2uuhtrnaxhicb82f.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's add our S3 website endpoint (the url which contains &lt;em&gt;s3-website&lt;/em&gt;, not the bucket endpoint) as a new Cloudfront origin :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkexfm4qkogqcwj7d9uz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkkexfm4qkogqcwj7d9uz.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we can add a new behavior for our Cloudfront distribution on a custom path :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkz12i5rudugnn8n2q6h3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkz12i5rudugnn8n2q6h3.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this way, requests whose path matches &lt;em&gt;/cloudfront-catched-errors-demo/*&lt;/em&gt; will be sent to our bucket :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76li9ookbsi5qv0vjrl9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76li9ookbsi5qv0vjrl9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Set up Cloudfront error pages
&lt;/h2&gt;

&lt;p&gt;We are now going to configure our distribution so that it catches errors and redirects them invisibly for the user. To do this, Cloudfront offers an "Error Pages" section where we can configure custom error responses :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zsjfn8c3w563lt8k9mm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2zsjfn8c3w563lt8k9mm.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's create a custom error response :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flv26s68q7szad50uxhot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flv26s68q7szad50uxhot.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By doing that, we are going to catch all 502 HTTP errors and show our pretty HTML error page by setting "Response page path" to &lt;em&gt;/cloudfront-catched-errors-demo/index.html&lt;/em&gt;. Let me show you!&lt;/p&gt;
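If you manage your distribution as code, the same setting can be expressed as a custom error response in a CloudFormation DistributionConfig (a fragment only; the caching TTL value here is an arbitrary choice):

```yaml
CustomErrorResponses:
  - ErrorCode: 502
    ResponsePagePath: /cloudfront-catched-errors-demo/index.html
    ResponseCode: 502
    ErrorCachingMinTTL: 10
```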

&lt;p&gt;Before :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9tjpvbv615afrc30akoe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9tjpvbv615afrc30akoe.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75bkv47abgxy1g0pnp0b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F75bkv47abgxy1g0pnp0b.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  To go further
&lt;/h2&gt;

&lt;p&gt;I recommend that you catch at least HTTP 500, 501, 502, 503 and 504 errors via Cloudfront error pages: they are server-side errors, often thrown precisely when the application server is unable to respond. For 4xx errors, which are caused by the client, this is more debatable. If the error is handled correctly by the application, you might as well leave this role to it (this is very often the case for 404 errors, which web frameworks handle themselves).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;If you liked this post, you can find more on my blog &lt;a href="https://adrien-mornet.tech/" rel="noopener noreferrer"&gt;https://adrien-mornet.tech/&lt;/a&gt; 🚀&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudfront</category>
      <category>cloud</category>
      <category>cdn</category>
    </item>
    <item>
      <title>Running a Web Application with 100% AWS Fargate Spot Containers 🤘</title>
      <dc:creator>Adrien Mornet</dc:creator>
      <pubDate>Sun, 15 Oct 2023 13:54:20 +0000</pubDate>
      <link>https://dev.to/aws-builders/running-a-web-application-with-100-aws-fargate-spot-containers-5876</link>
      <guid>https://dev.to/aws-builders/running-a-web-application-with-100-aws-fargate-spot-containers-5876</guid>
      <description>&lt;p&gt;One of the major advantages of using the Cloud is its Pay-Per-Use model. To make the most of this model, the challenge is to find the computing capacity at the best price that matches your workload. In this article I’m going to explain how I managed to run a web application on 100% Fargate Spot containers.&lt;/p&gt;

&lt;h2&gt;
  
  
  EC2 vs Fargate vs Fargate Spot
&lt;/h2&gt;

&lt;p&gt;AWS offers several compute capacity options for running containers orchestrated by ECS :&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 : the good old on-demand compute service for servers on which you can install the ECS container agent&lt;/li&gt;
&lt;li&gt;Fargate : a serverless compute engine that lets you run ECS containers without having to manage EC2 servers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both EC2 and Fargate offer Spot pricing. I chose AWS Fargate over EC2 for running containers for cost savings and operational simplicity: Fargate eliminates the need to manage EC2 instances and offers significant pricing advantages with Fargate Spot.&lt;/p&gt;

&lt;h2&gt;
  
  
  Having a fault tolerant application
&lt;/h2&gt;

&lt;p&gt;The most important thing to understand when using Spot capacity is that your ECS task can be interrupted by AWS at any time. In concrete terms, a SIGTERM system signal is sent to the containers in your ECS task and they have 120 seconds to stop before being hard-killed. Your application must therefore be able to shut down properly within these 120 seconds. The overwhelming majority of web applications take a few seconds to respond, don’t they? So the 120 seconds are not a blocker. And if you configure your ECS services with a desired count of at least 2 tasks, it won’t be a problem to have only one task running for a few seconds while the orchestrator starts a replacement.&lt;/p&gt;

&lt;p&gt;I explain in a little more detail in another article &lt;a href="https://adrien-mornet.tech/handling-graceful-shutdown-of-your-docker-cron-containers/"&gt;“Handling graceful shutdown of your Docker Cron containers”&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manage unavailability of Fargate Spot capacity
&lt;/h2&gt;

&lt;p&gt;As written in the AWS documentation, during periods of extremely high demand, Fargate Spot capacity might be unavailable. In concrete terms, if your ECS service is set up to execute tasks on 100% Spot, there is a risk of running out of capacity. A &lt;a href="https://www.npmjs.com/package/@wheatstalk/fargate-spot-fallback"&gt;workaround&lt;/a&gt; has been created in the hope that one day this &lt;a href="https://github.com/aws/containers-roadmap/issues/852"&gt;issue&lt;/a&gt; will be addressed natively by the AWS team. This workaround lets you set up two ECS services :&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“a primary with only spot capacity and a fallback with only on demand capacity. When the primary emits a task placement error a lambda sets the desired count on the fallback service. When the primary emits a steady state event a lambda sets the fallback desired count to zero.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  My experience with Fargate Spot
&lt;/h2&gt;

&lt;p&gt;In my experience, I was able to run 4 microservices in production on 100% Fargate Spot capacity. The microservice responsible for the frontend served around 2000 distinct users per day. At no point during a year did we run out of Fargate Spot capacity. When a task received an interrupt signal, another started within a few seconds, with virtually no effect on response times. The savings we made thanks to Fargate Spot are considerable, and it was a really good choice.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;If you liked this post, you can find more on my blog &lt;a href="https://adrien-mornet.tech/"&gt;https://adrien-mornet.tech/&lt;/a&gt; 🚀&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>fargate</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Authenticating your GitLab CI runner to an AWS ECR registry using Amazon ECR Docker Credential Helper 🔑</title>
      <dc:creator>Adrien Mornet</dc:creator>
      <pubDate>Fri, 23 Jun 2023 12:17:07 +0000</pubDate>
      <link>https://dev.to/aws-builders/authenticating-your-gitlab-ci-runner-to-an-aws-ecr-registry-using-amazon-ecr-docker-credential-helper-3ba</link>
      <guid>https://dev.to/aws-builders/authenticating-your-gitlab-ci-runner-to-an-aws-ecr-registry-using-amazon-ecr-docker-credential-helper-3ba</guid>
      <description>&lt;p&gt;GitLab CI allows you to run your CI/CD jobs in separate and isolated Docker containers. For maximum flexibility, you may need to run your jobs from a self-created Docker image tailored to your project’s specific needs. You can store this self-created and private Docker image in an AWS ECR registry. In this tutorial I will explain how to set up automatic authentication from your GitLab runner to your registry with Amazon ECR Docker Credential Helper.&lt;/p&gt;

&lt;h2&gt;
  
  
  GitLab CI job
&lt;/h2&gt;

&lt;p&gt;Create a GitLab CI job which uses a Docker image stored in a private AWS ECR registry :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;phpunit:
  stage: testing
  image: 
    name: 123456789123.dkr.ecr.us-east-1.amazonaws.com/php-gitlabrunner:latest
    entrypoint: [""]
  script:
    - php ./vendor/bin/phpunit --coverage-text --colors=never
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Create and configure your runner to access AWS ECR registry
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Create an EC2 instance and &lt;a href="https://docs.gitlab.com/runner/install/" rel="noopener noreferrer"&gt;install GitLab Runner&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.gitlab.com/runner/register/" rel="noopener noreferrer"&gt;Register your runner with Docker executor&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Install the &lt;a href="https://github.com/awslabs/amazon-ecr-credential-helper" rel="noopener noreferrer"&gt;AWS ECR Docker Credential Helper&lt;/a&gt; in your runner&lt;/li&gt;
&lt;li&gt;Create a &lt;code&gt;/root/.docker/config.json&lt;/code&gt; file and add :
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"credsStore"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ecr-login"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create an IAM User with CLI access and attach &lt;code&gt;arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly&lt;/code&gt; policy&lt;/li&gt;
&lt;li&gt;Paste CLI credentials to &lt;code&gt;/home/gitlab-runner/.aws/credentials&lt;/code&gt; file on your GitLab runner :
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Configure your AWS region in &lt;code&gt;/root/.aws/config&lt;/code&gt; :
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[default]
region = YOUR_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Edit your &lt;code&gt;/etc/gitlab-runner/config.toml&lt;/code&gt; to add in &lt;a href="https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section" rel="noopener noreferrer"&gt;the [[runners]] section&lt;/a&gt; the following line &lt;code&gt;environment = ["DOCKER_AUTH_CONFIG={ \"credsStore\": \"ecr-login\" }"]&lt;/code&gt;:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[[runners]]
  name = "gitlab-runner"
  url = "https://gitlab.com/"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    image = "php:8-cli"
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock", "/builds:/builds"]
    shm_size = 0
    environment = ["DOCKER_AUTH_CONFIG={ \"credsStore\": \"ecr-login\" }"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now your GitLab runner can automatically authenticate to your ECR registry 🙂&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;If you liked this post, you can find more on my blog &lt;a href="https://adrien-mornet.tech/" rel="noopener noreferrer"&gt;https://adrien-mornet.tech/&lt;/a&gt; 🚀&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>gitlab</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Best Practices to protect an RDS MySQL Database ✅</title>
      <dc:creator>Adrien Mornet</dc:creator>
      <pubDate>Mon, 17 Apr 2023 11:32:10 +0000</pubDate>
      <link>https://dev.to/aws-builders/best-practices-to-protect-an-rds-mysql-database-2f95</link>
      <guid>https://dev.to/aws-builders/best-practices-to-protect-an-rds-mysql-database-2f95</guid>
      <description>&lt;p&gt;Amazon RDS is a very popular choice for creating MySQL databases in the cloud. Many modern companies use it to store their business data. However, as with any other database, securing these databases requires special attention to protect against potential threats and vulnerabilities.&lt;/p&gt;

&lt;p&gt;In this article, we will explore 10 best practices for securing your AWS RDS MySQL database instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Use strong passwords and rotate them
&lt;/h2&gt;

&lt;p&gt;The use of strong passwords is an essential measure to protect your database instance. A strong password consists of a random combination of upper and lower case letters, numbers and special characters. It should be at least 20 characters long.&lt;/p&gt;

&lt;p&gt;It is also important to rotate your passwords at regular intervals, for example every 60 to 90 days.&lt;/p&gt;

&lt;p&gt;Use the MySQL &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Concepts.PasswordValidationPlugin.html" rel="noopener noreferrer"&gt;validate_password&lt;/a&gt; plugin, which provides additional security features such as password expiration, password reuse restrictions and password failure tracking.&lt;/p&gt;
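On RDS the plugin is enabled through the DB parameter group; once active, you can check the policy in force from any client session (variable names vary slightly between MySQL 5.7 and 8.0):

```sql
-- List the active password-validation settings
SHOW VARIABLES LIKE 'validate_password%';
```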

&lt;h2&gt;
  
  
  2. Keep MySQL Updated
&lt;/h2&gt;

&lt;p&gt;One of the major advantages of the managed services offered by AWS is that you don’t have to manage them and can activate automatic updates! It would be silly to do without this. Granted, it doesn’t protect against 0-day flaws, but it does protect against known (and patched!) ones.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgj7sncu9hxakupuu70i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsgj7sncu9hxakupuu70i.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Change default port
&lt;/h2&gt;

&lt;p&gt;This good practice protects against bots that regularly scan the internet for instances that are not sufficiently protected. It is very basic but it would be a shame to do without it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkznfhr6wzn57hoduqckc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkznfhr6wzn57hoduqckc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Use different users with different permissions
&lt;/h2&gt;

&lt;p&gt;A user should be created for each use case. For example, a script that backs up our application’s data should not use the same user as the application: a read-only user is sufficient there. And the root user should only be used to administer the instance, never by the application!&lt;/p&gt;
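For example, a dedicated read-only user for a backup script could look like this (user name, network range and password are illustrative; a dump tool typically also needs LOCK TABLES):

```sql
-- Illustrative read-only user for a backup script
CREATE USER 'backup_user'@'192.168.0.0/255.255.255.0' IDENTIFIED BY 'a-strong-password';
GRANT SELECT, LOCK TABLES, SHOW VIEW ON my_db.* TO 'backup_user'@'192.168.0.0/255.255.255.0';
```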

&lt;h2&gt;
  
  
  5. Disconnect your RDS instance from the internet
&lt;/h2&gt;

&lt;p&gt;If you have the luxury of being able to disconnect your instance from the internet and make it accessible only from within its VPC network, do it!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn5quz5711pq987fewgt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn5quz5711pq987fewgt.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Use a Firewall
&lt;/h2&gt;

&lt;p&gt;AWS Security Group can be used to allow only certain IP addresses to connect to your instance’s port :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bawwe5v50jv7woe867a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0bawwe5v50jv7woe867a.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Use MySQL engine host check
&lt;/h2&gt;

&lt;p&gt;Instead of creating your MySQL users like this &lt;code&gt;CREATE USER 'new_user'@'%' IDENTIFIED BY 'password';&lt;/code&gt;, use the MySQL engine host check :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="o"&gt;#&lt;/span&gt; &lt;span class="nb"&gt;Fixed&lt;/span&gt; &lt;span class="n"&gt;IP&lt;/span&gt; &lt;span class="n"&gt;address&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;USER&lt;/span&gt; &lt;span class="s1"&gt;'new_user'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'123.123.123.123/255.255.255.255'&lt;/span&gt; &lt;span class="n"&gt;IDENTIFIED&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="s1"&gt;'password'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="o"&gt;#&lt;/span&gt; &lt;span class="n"&gt;Private&lt;/span&gt; &lt;span class="n"&gt;IP&lt;/span&gt; &lt;span class="k"&gt;range&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="n"&gt;your&lt;/span&gt; &lt;span class="n"&gt;subnet&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;USER&lt;/span&gt; &lt;span class="s1"&gt;'new_user'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'192.168.0.0/255.255.255.0'&lt;/span&gt; &lt;span class="n"&gt;IDENTIFIED&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="s1"&gt;'password'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  8. Use SSL
&lt;/h2&gt;

&lt;p&gt;Implement encryption in transit using SSL :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="k"&gt;USER&lt;/span&gt; &lt;span class="s1"&gt;'new_user'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'123.123.123.123/255.255.255.255'&lt;/span&gt; &lt;span class="n"&gt;REQUIRE&lt;/span&gt; &lt;span class="n"&gt;SSL&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
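&lt;p&gt;To check that the requirement was actually applied, you can inspect the account definition (a quick verification, reusing the user from the examples above):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;# The output should include the REQUIRE SSL clause
SHOW CREATE USER 'new_user'@'123.123.123.123/255.255.255.255';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;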



&lt;h2&gt;
  
  
  9. Grant desired privileges to desired databases
&lt;/h2&gt;

&lt;p&gt;Instead of granting your MySQL user all privileges like this: &lt;code&gt;GRANT ALL PRIVILEGES ON *.* TO 'new_user'@'123.123.123.123/255.255.255.255';&lt;/code&gt;, grant only the required privileges on the required databases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;GRANT&lt;/span&gt; &lt;span class="k"&gt;SELECT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;INSERT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;UPDATE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;DELETE&lt;/span&gt; &lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;my_db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;TO&lt;/span&gt; &lt;span class="s1"&gt;'some_user'&lt;/span&gt;&lt;span class="o"&gt;@&lt;/span&gt;&lt;span class="s1"&gt;'123.123.123.123/255.255.255.255'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
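&lt;p&gt;You can then double-check exactly what an account is allowed to do with &lt;code&gt;SHOW GRANTS&lt;/code&gt; (shown here for the new_user account created earlier):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;# Lists every privilege currently granted to the account
SHOW GRANTS FOR 'new_user'@'123.123.123.123/255.255.255.255';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;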



&lt;h2&gt;
  
  
  10. Use 2FA or 3FA
&lt;/h2&gt;

&lt;p&gt;MySQL 8.0.27 and higher support multifactor authentication (MFA), so accounts can have up to three authentication methods. On Amazon RDS, IAM database authentication can also be combined with multi-factor authentication.&lt;/p&gt;
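&lt;p&gt;As a sketch, a two-factor account is created by chaining authentication plugins with &lt;code&gt;AND&lt;/code&gt; (the user name and plugin names below are placeholders; use the factors actually enabled on your server):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;# Factor 1: a password, factor 2: an additional authentication plugin
CREATE USER 'mfa_user'@'192.168.0.0/255.255.255.0'
  IDENTIFIED WITH caching_sha2_password BY 'password'
  AND IDENTIFIED WITH authentication_ldap_simple;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;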

&lt;p&gt;&lt;em&gt;&lt;strong&gt;If you liked this post, you can find more on my blog &lt;a href="https://adrien-mornet.tech/" rel="noopener noreferrer"&gt;https://adrien-mornet.tech/&lt;/a&gt; 🚀&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>mysql</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>RDS MySQL Load Testing with Sysbench</title>
      <dc:creator>Adrien Mornet</dc:creator>
      <pubDate>Sun, 09 Apr 2023 22:35:23 +0000</pubDate>
      <link>https://dev.to/aws-builders/rds-mysql-load-testing-with-sysbench-3i26</link>
      <guid>https://dev.to/aws-builders/rds-mysql-load-testing-with-sysbench-3i26</guid>
      <description>&lt;p&gt;Sometimes you need to load test a MySQL database, for example to validate Auto Scaling. I found a very useful tool called &lt;a href="https://github.com/akopytov/sysbench"&gt;Sysbench&lt;/a&gt; that I will present in this article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install Sysbench
&lt;/h2&gt;

&lt;p&gt;Open your terminal and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; https://packagecloud.io/install/repositories/akopytov/sysbench/script.deb.sh | &lt;span class="nb"&gt;sudo &lt;/span&gt;bash

&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="nb"&gt;install &lt;/span&gt;sysbench
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Prepare for Load Testing
&lt;/h2&gt;

&lt;p&gt;Create a &lt;code&gt;test&lt;/code&gt; database and run the sysbench prepare command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;mysql &lt;span class="nt"&gt;-h&lt;/span&gt; YOUR_MYSQL_HOST &lt;span class="nt"&gt;-u&lt;/span&gt; YOUR_MYSQL_USER &lt;span class="nt"&gt;-pYOUR_MYSQL_PASSWORD&lt;/span&gt; &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s1"&gt;'CREATE DATABASE test;'&lt;/span&gt;

&lt;span class="nb"&gt;sudo &lt;/span&gt;sysbench /usr/share/sysbench/oltp_read_only.lua &lt;span class="nt"&gt;--db-driver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mysql &lt;span class="nt"&gt;--mysql-db&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;--mysql-user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;YOUR_MYSQL_USER &lt;span class="nt"&gt;--mysql-password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;YOUR_MYSQL_PASSWORD  &lt;span class="nt"&gt;--mysql-host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;YOUR_MYSQL_HOST &lt;span class="nt"&gt;--threads&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 prepare
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nth7YJx_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b4le2qr1qggpy3btbfq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nth7YJx_--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b4le2qr1qggpy3btbfq5.png" alt="Image description" width="800" height="615"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Run MySQL Load Testing
&lt;/h2&gt;

&lt;p&gt;Let’s load test a freshly created database with only one instance (Aurora Auto Scaling is configured on the test-load-database cluster):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rgxjy8P1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yf8iwgh4gpwpwdcg3irn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rgxjy8P1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yf8iwgh4gpwpwdcg3irn.png" alt="Image description" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the CPU usage of the instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--eZpITcVl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d3fan0u5ya2ij7ryglag.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--eZpITcVl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d3fan0u5ya2ij7ryglag.png" alt="Image description" width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a bash script &lt;code&gt;load_test_mysql.sh&lt;/code&gt; and paste the following inside:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;((&lt;/span&gt;&lt;span class="nv"&gt;n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;0&lt;span class="p"&gt;;&lt;/span&gt;n&amp;lt;100&lt;span class="p"&gt;;&lt;/span&gt;n++&lt;span class="o"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;do
 &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;date&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
 sysbench /usr/share/sysbench/oltp_read_only.lua &lt;span class="nt"&gt;--db-driver&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;mysql &lt;span class="nt"&gt;--mysql-db&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;test&lt;/span&gt; &lt;span class="nt"&gt;--mysql-user&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;YOUR_MYSQL_USER &lt;span class="nt"&gt;--mysql-password&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;YOUR_MYSQL_PASSWORD  &lt;span class="nt"&gt;--mysql-host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;YOUR_MYSQL_HOST &lt;span class="nt"&gt;--threads&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;80 run
 &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$n&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And run the test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;chmod&lt;/span&gt; +x load_test_mysql.sh
./load_test_mysql.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here comes the magic:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--hZfynlnn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m8x1mvebp76k1fl1uken.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--hZfynlnn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m8x1mvebp76k1fl1uken.png" alt="Image description" width="800" height="831"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And you can see the CPU usage increase:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--mKdUjdHj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hyt5w3hh20w6pt633952.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--mKdUjdHj--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hyt5w3hh20w6pt633952.png" alt="Image description" width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Finally, I can see that my auto scaling works well, as I now have two new instances:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vv94CZyl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/21fkdah61j6dt047tz9x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vv94CZyl--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/21fkdah61j6dt047tz9x.png" alt="Image description" width="800" height="227"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks to &lt;a href="https://github.com/akopytov/sysbench"&gt;https://github.com/akopytov/sysbench&lt;/a&gt; 😁&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you liked this post, you can find more on my blog &lt;a href="https://adrien-mornet.tech/"&gt;https://adrien-mornet.tech/&lt;/a&gt; 🚀&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>mysql</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>How to Run a Shell on ECS Fargate Containers 💻</title>
      <dc:creator>Adrien Mornet</dc:creator>
      <pubDate>Mon, 20 Mar 2023 16:05:20 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-run-a-shell-on-ecs-fargate-containers-eo1</link>
      <guid>https://dev.to/aws-builders/how-to-run-a-shell-on-ecs-fargate-containers-eo1</guid>
      <description>&lt;p&gt;If you need to troubleshoot or debug your ECS Fargate containers, you may want to open a terminal on them. There are two options to open a shell on an ECS container: with SSH, or with ECS Exec through the AWS CLI. The first option comes with drawbacks and security concerns: opening an SSH port and managing private and public SSH keys. The second option doesn’t require you to enable SSH access or open any additional ports, because it relies on IAM authentication and AWS Session Manager.&lt;/p&gt;

&lt;p&gt;In my opinion, using the AWS CLI to access a terminal on ECS Fargate is generally more secure than enabling SSH access, because it doesn’t require opening any additional ports or enabling direct access to your ECS containers, which reduces the potential for security vulnerabilities.&lt;/p&gt;

&lt;p&gt;In this article I will explain how to open a shell on an ECS container via the AWS CLI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install AWS CLI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;Install AWS CLI depending on the architecture of your computer&lt;/a&gt;. For Linux x86 :&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

curl &lt;span class="s2"&gt;"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"awscliv2.zip"&lt;/span&gt;
unzip awscliv2.zip
&lt;span class="nb"&gt;sudo&lt;/span&gt; ./aws/install


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Install Session Manager Plugin
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;Install the Session Manager plugin for the AWS CLI&lt;/a&gt;. For Linux x86 :&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

curl &lt;span class="s2"&gt;"https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm"&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; &lt;span class="s2"&gt;"session-manager-plugin.rpm"&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; session-manager-plugin.rpm


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Attach the necessary IAM policy
&lt;/h2&gt;

&lt;p&gt;Create an IAM policy &lt;code&gt;ECSFargateAllowExecuteCommand&lt;/code&gt; and attach it to your ECS task role:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ssmmessages:CreateControlChannel"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ssmmessages:CreateDataChannel"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ssmmessages:OpenControlChannel"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"ssmmessages:OpenDataChannel"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Open a Shell
&lt;/h2&gt;

&lt;p&gt;The AWS CLI command &lt;code&gt;ecs execute-command&lt;/code&gt; requires three arguments:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The ECS cluster name&lt;/li&gt;
&lt;li&gt;The ECS task id&lt;/li&gt;
&lt;li&gt;The container name&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Open your ECS task in the ECS Console and retrieve the following information:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxpb8is8msdon9mkhslj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxpb8is8msdon9mkhslj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Use the retrieved information in the command:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;

aws ecs execute-command &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--region&lt;/span&gt; us-east-1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster&lt;/span&gt; ECS_CLUSTER_NAME &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--task&lt;/span&gt; ECS_TASK_ID &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--container&lt;/span&gt; CONTAINER_NAME &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--command&lt;/span&gt; &lt;span class="s2"&gt;"/bin/bash"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--interactive&lt;/span&gt;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17j6m9f7shq0hzmhgxlk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F17j6m9f7shq0hzmhgxlk.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you liked this post, you can find more on my blog &lt;a href="https://adrien-mornet.tech/" rel="noopener noreferrer"&gt;https://adrien-mornet.tech/&lt;/a&gt; 🚀&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>devops</category>
      <category>cloud</category>
      <category>aws</category>
    </item>
    <item>
      <title>How to protect a website against DoS Attack using AWS WAF v2</title>
      <dc:creator>Adrien Mornet</dc:creator>
      <pubDate>Sun, 12 Mar 2023 12:11:57 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-protect-a-website-against-dos-attack-using-aws-waf-v2-fbi</link>
      <guid>https://dev.to/aws-builders/how-to-protect-a-website-against-dos-attack-using-aws-waf-v2-fbi</guid>
      <description>&lt;p&gt;Denial of Service attacks are common these days. In my company we receive such an attack from time to time. For example, on September 26th someone made almost one million requests to our servers in one hour, and it was blocked by our AWS Web Application Firewall (WAF v2):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i0kro8b66hcxhxit9gz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1i0kro8b66hcxhxit9gz.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article I will detail how we protect ourselves against DoS attacks using AWS WAF v2.&lt;/p&gt;

&lt;h2&gt;
  
  
What is a DoS attack?
&lt;/h2&gt;

&lt;p&gt;Here is the definition according to Wikipedia :&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A denial-of-service attack (DoS attack) is a cyber-attack in which the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host connected to a network. Denial of service is typically accomplished by flooding the targeted machine or resource with superfluous requests in an attempt to overload systems and prevent some or all legitimate requests from being fulfilled.&lt;br&gt;
&lt;a href="https://en.wikipedia.org/wiki/Denial-of-service_attack" rel="noopener noreferrer"&gt;https://en.wikipedia.org/wiki/Denial-of-service_attack&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;In my example we received 835k requests in one hour, about 14,000 per minute. We can consider that a DoS attack.&lt;/p&gt;

&lt;h2&gt;
  
  
Configure AWS WAF v2 to protect a website against DoS attacks
&lt;/h2&gt;

&lt;p&gt;AWS WAF v2 can be associated with various resources such as a CloudFront distribution, an Application Load Balancer, AWS AppSync, Amazon API Gateway or Amazon Cognito.&lt;/p&gt;

&lt;p&gt;Open your AWS Console, go to WAF v2 and create your first Web ACL. In our case we will create a Web ACL for a CloudFront distribution.&lt;/p&gt;

&lt;p&gt;1) Create the Web ACL and select the CloudFront distribution:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvyngh4zuuk7citm55nz8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvyngh4zuuk7citm55nz8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2) Click on “Add my own rules and rule groups” :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1z1rra29w1jgf7vq5it.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq1z1rra29w1jgf7vq5it.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3) Use the Rule builder to create a rate-based rule :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hle5jdxsvizl7xtcv1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7hle5jdxsvizl7xtcv1o.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4) Configure the rule :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4mh5k6kdiluc3srqg8xw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4mh5k6kdiluc3srqg8xw.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5) Choose the action to block :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7v7cknal8g39v6der7c4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7v7cknal8g39v6der7c4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6) And add the rule. You’re done!&lt;/p&gt;

&lt;p&gt;Now if someone makes more than 500 requests in a five-minute period, AWS WAF will block them by returning an HTTP 403 code!&lt;/p&gt;

&lt;h2&gt;
  
  
  Price
&lt;/h2&gt;

&lt;p&gt;Only $6 / month&lt;/p&gt;

&lt;p&gt;($5.00 per web ACL per month + $1.00 per rule per month)&lt;/p&gt;

&lt;h2&gt;
  
  
  Limits
&lt;/h2&gt;

&lt;p&gt;AWS WAF evaluates the request rate every 30 seconds, so your website can still be exposed to a DoS for almost 30 seconds before the rule triggers.&lt;/p&gt;

&lt;h2&gt;
  
  
Does AWS WAF impact performance?
&lt;/h2&gt;

&lt;p&gt;Definitely not. We didn’t see any change in our response times.&lt;/p&gt;

&lt;h2&gt;
  
  
  DoS vs DDoS
&lt;/h2&gt;

&lt;p&gt;There is a big difference between Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks. As “distributed” means many different IP addresses, AWS WAF will not detect a DDoS if each IP makes fewer than 500 requests per five-minute period. AWS offers another service, AWS Shield Advanced, to protect you against DDoS attacks, but it’s really expensive ($3,000 per month).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you liked this post, you can find more on my blog &lt;a href="https://adrien-mornet.tech/" rel="noopener noreferrer"&gt;https://adrien-mornet.tech/&lt;/a&gt; 🚀&lt;/em&gt;&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>devops</category>
      <category>aws</category>
      <category>security</category>
    </item>
    <item>
      <title>Monitoring Nuxt.js app with Datadog 🔍</title>
      <dc:creator>Adrien Mornet</dc:creator>
      <pubDate>Tue, 07 Mar 2023 07:36:17 +0000</pubDate>
      <link>https://dev.to/adrienmornet/monitoring-nuxtjs-app-with-datadog-1f39</link>
      <guid>https://dev.to/adrienmornet/monitoring-nuxtjs-app-with-datadog-1f39</guid>
      <description>&lt;p&gt;Over the last few weeks, I have been digging deeper into Datadog. If you’re using Nuxt.js to build your web application, you know how important it is to keep an eye on your app’s performance and behavior. That’s where monitoring tools like Datadog come in handy. With Datadog, you can easily monitor your Nuxt.js app and get insights into how it’s performing in real time.&lt;/p&gt;

&lt;p&gt;In this blog post, I’ll show you how to set up Datadog to monitor your Nuxt.js app and what metrics to monitor. By the end of this article, you’ll have a better understanding of how to optimize your Nuxt.js app for performance and make sure it’s running smoothly for your users. So, let’s dive in!&lt;/p&gt;

&lt;h2&gt;
  
  
  Nuxt.js
&lt;/h2&gt;

&lt;p&gt;Nuxt.js is a popular open-source framework for building server-side rendered (SSR) applications. It is built on top of Vue.js. In a Nuxt.js application, the frontend and backend are tightly coupled. This means that the performance of your application is dependent on both the frontend and the backend working together seamlessly.&lt;/p&gt;

&lt;p&gt;On the frontend side, you need to monitor things like page load times (Core Web Vitals), browser errors, resource usage (APIs), and user interactions to ensure that your application is providing a fast and responsive user experience. If the frontend is slow, users may experience lag, and this can result in a poor user experience.&lt;/p&gt;

&lt;p&gt;On the backend side, you need to monitor things like server response times, server errors, database queries, and network requests to ensure that your application is performing well and can handle high traffic loads. If the backend is slow, it can cause delays in loading content on the frontend and impact user experience.&lt;/p&gt;

&lt;p&gt;Monitoring both the frontend and backend of your Nuxt.js application is crucial to identify potential performance bottlenecks and prevent them from affecting your users. By collecting and analyzing performance metrics from both sides, you can optimize your application for speed and reliability, ensuring a smooth user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frontend : Real User Monitoring (RUM)
&lt;/h2&gt;

&lt;p&gt;Datadog Real User Monitoring (RUM) is a tool that allows you to monitor the performance of your web application from the perspective of real users. RUM tracks user interactions, page load times, and other frontend metrics to provide you with insights into how your application is performing for your users.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install &lt;code&gt;@datadog/browser-rum&lt;/code&gt;: &lt;code&gt;npm i @datadog/browser-rum&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a &lt;code&gt;plugins/datadog.client.js&lt;/code&gt; file and initialize the RUM SDK:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;datadogRum&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@datadog/browser-rum&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="na"&gt;$config&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ddApplicationId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ddApplicationToken&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ddVersion&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ddEnv&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;datadogRum&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;applicationId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ddApplicationId&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;clientjeToken&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ddApplicationToken&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;site&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;datadoghq.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;nuxt-front&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ddVersion&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ddEnv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;sessionSampleRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;sessionReplaySampleRate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;trackUserInteractions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;trackResources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;trackLongTasks&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;defaultPrivacyLevel&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mask-user-input&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;datadogRum&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startSessionReplayRecording&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
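The sample-rate options above are percentages. As a rough illustration (this is not the SDK's actual implementation, just the idea), a session is kept when a random draw falls below the configured rate:

```javascript
// Illustrative only: how a percentage sample rate maps to a keep/drop decision.
// rand defaults to Math.random(), which returns a value in [0, 1).
function isSampled(sampleRate, rand = Math.random()) {
  // sampleRate is a percentage in [0, 100]
  return rand * 100 < sampleRate;
}

console.log(isSampled(100)); // a rate of 100 keeps every session
console.log(isSampled(0));   // a rate of 0 keeps none
```

With `sessionSampleRate: 100` and `sessionReplaySampleRate: 100`, every session is tracked and recorded; lower either value to reduce the volume of data sent to Datadog.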



&lt;ol&gt;
&lt;li&gt;Import the plugin in &lt;code&gt;nuxt.config.js&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dotenv&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;config&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;buildModules&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@nuxtjs/dotenv&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;[...]&lt;/span&gt;
    &lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;~/plugins/datadog.client.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Pass environment variables in &lt;code&gt;nuxt.config.js&lt;/code&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dotenv&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;config&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;buildModules&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@nuxtjs/dotenv&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;[...]&lt;/span&gt;
    &lt;span class="na"&gt;plugins&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;~/plugins/datadog.client.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="p"&gt;[...]&lt;/span&gt;
    &lt;span class="na"&gt;publicRuntimeConfig&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;ddApplicationId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DD_APPLICATION_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;ddApplicationToken&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DD_CLIENT_TOKEN&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;ddVersion&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DD_VERSION&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;ddEnv&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DD_ENV&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In Datadog, navigate to the RUM Applications page and click the New Application button. Enter a name for your application and click Generate Client Token. This generates a clientToken and an applicationId for your application. Choose npm as the installation type for the RUM Browser SDK.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the Datadog configuration to your &lt;code&gt;.env&lt;/code&gt; file (create it if it doesn’t exist):&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;DD_APPLICATION_ID&lt;/span&gt;&lt;span class="o"&gt;=&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;DATADOG_APPLICATION_ID&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nx"&gt;DD_CLIENT_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;DATADOG_CLIENT_TOKEN&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
&lt;span class="nx"&gt;DD_VERSION&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;1.0.0&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="nx"&gt;DD_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;production&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Monitor your customers’ browsers
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Track errors
&lt;/h3&gt;

&lt;p&gt;Monitor ongoing bugs and issues, and track them over time and across versions:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g7xcdr0cnf60pohvk0w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g7xcdr0cnf60pohvk0w.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  User analytics
&lt;/h3&gt;

&lt;p&gt;Understand who is using your application (country, device, OS), monitor individual user journeys, and analyze how users interact with your application (most visited pages, clicks, interactions, and feature usage):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69u02fr81cl9z5pwegd0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69u02fr81cl9z5pwegd0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance
&lt;/h3&gt;

&lt;p&gt;Track the performance of your front-end Nuxt.js code with key metrics like Core Web Vitals:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmpnu6ljezo2kn0mugs7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnmpnu6ljezo2kn0mugs7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Resource usage
&lt;/h3&gt;

&lt;p&gt;Visualize which resources are being loaded and how:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2qx7dbnyyknamuc1ug4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2qx7dbnyyknamuc1ug4.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Session Replay
&lt;/h3&gt;

&lt;p&gt;Record and replay individual user sessions to gain deeper insight into user behavior and identify any issues or errors encountered during a session. With Session Replay, you can visualize user interactions and see exactly how users navigated through your application, which provides valuable information for troubleshooting and optimization:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm189c1mrerxd70bdbgqa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm189c1mrerxd70bdbgqa.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Backend: Application Performance Monitoring (APM)
&lt;/h2&gt;

&lt;p&gt;Datadog Application Performance Monitoring (APM) is a tool that lets you monitor the performance of your web application from the backend perspective. With Datadog APM, you can track metrics such as response time, error rate, and throughput, and identify the root cause of performance issues in your application. The tool also provides detailed tracing of individual requests, allowing you to see how each component of your application is performing and how requests flow through it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Install and configure the Datadog Agent depending on your environment: &lt;a href="https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/nodejs/?tab=containers#configure-the-datadog-agent-for-apm" rel="noopener noreferrer"&gt;https://docs.datadoghq.com/tracing/trace_collection/dd_libraries/nodejs/?tab=containers#configure-the-datadog-agent-for-apm&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install the Datadog Tracing library : &lt;code&gt;npm install dd-trace --save&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run your Nuxt.js app, loading and initializing the tracing library: &lt;code&gt;node --require dd-trace/init /app/node_modules/nuxt/bin/nuxt.js start&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
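Rather than typing the full command from step 3 each time, it can live in your package.json start script (a sketch: the relative path to nuxt.js assumes a standard install under node_modules, so adjust it to your project layout):

```json
{
  "scripts": {
    "start": "node --require dd-trace/init node_modules/nuxt/bin/nuxt.js start"
  }
}
```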

&lt;h2&gt;
  
  
  Monitor your Nuxt.js server
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Trace your application requests
&lt;/h3&gt;

&lt;p&gt;Traces are detailed records of requests that flow through your application:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1blo16rruw1drh4o1d0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff1blo16rruw1drh4o1d0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Track errors
&lt;/h3&gt;

&lt;p&gt;Keep a close eye on existing bugs and issues, and maintain a record of their progress across versions and over time:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgk7scu46wea23kcfoeg8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgk7scu46wea23kcfoeg8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Database
&lt;/h3&gt;

&lt;p&gt;Monitor databases by collecting and analyzing performance metrics such as reads, writes, and deletes, as well as memory usage, network latency, and other key performance indicators:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft93z6yz0g0k8gjiu2s0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft93z6yz0g0k8gjiu2s0n.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, monitoring your Nuxt.js application with Datadog is important to ensure optimal performance and a smooth user experience. By using Datadog’s Real User Monitoring (RUM) and Application Performance Monitoring (APM), you can gain insights into how your application is performing on both the frontend and backend, and quickly identify and resolve any issues.&lt;/p&gt;

&lt;p&gt;Together, RUM and APM provide a comprehensive view of your Nuxt.js application’s performance, making it easier to detect and resolve issues, optimize your code, and improve the overall user experience. Datadog’s centralized platform for monitoring RUM and APM data makes it easy to correlate frontend and backend performance metrics and gain deeper insights into how your application is behaving.&lt;/p&gt;

&lt;p&gt;So, if you’re looking to build web applications with Nuxt.js, monitoring with Datadog is an excellent choice. Try it out and see how it can help you improve your application’s performance!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you liked this post, you can find more on my blog &lt;a href="https://adrien-mornet.tech/" rel="noopener noreferrer"&gt;https://adrien-mornet.tech/&lt;/a&gt; 🚀&lt;/em&gt;&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>tutorial</category>
      <category>devops</category>
      <category>sre</category>
    </item>
    <item>
      <title>Replicate AWS RDS Aurora to AWS RDS MySQL</title>
      <dc:creator>Adrien Mornet</dc:creator>
      <pubDate>Sat, 04 Mar 2023 09:31:28 +0000</pubDate>
      <link>https://dev.to/aws-builders/replicate-aws-rds-aurora-to-aws-rds-mysql-2l9c</link>
      <guid>https://dev.to/aws-builders/replicate-aws-rds-aurora-to-aws-rds-mysql-2l9c</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtx5mj4b0q5fklxio4o0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmtx5mj4b0q5fklxio4o0.png" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS encourages you to migrate from RDS MySQL to RDS Aurora by offering convenient features such as the ability to create an Aurora Read Replica from an RDS MySQL cluster and then promote it to master: an almost downtime-free migration! But how do you do the same thing the other way around? How do you replicate an RDS Aurora cluster to RDS MySQL or to an external MySQL server? Unfortunately, AWS doesn’t offer the ability to create an RDS MySQL Read Replica from an RDS Aurora cluster. In this post I will detail how to achieve it.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Create an RDS MySQL instance to be a Read Replica of an RDS Aurora Cluster
&lt;/h2&gt;

&lt;p&gt;As mentioned in the AWS documentation, to configure a MySQL DB instance as a read replica of another MySQL instance you need to enable autocommit. Let’s create an RDS parameter group.&lt;/p&gt;

&lt;p&gt;Open the RDS console, go to Parameter groups, and click the Create parameter group button:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffccack663w74iy601pj7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffccack663w74iy601pj7.png" alt=" " width="800" height="236"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvx7bslsn1y2akmye19nu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvx7bslsn1y2akmye19nu.png" alt=" " width="800" height="576"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the parameter group is created, edit it and set &lt;code&gt;autocommit = 1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now you can create an RDS MySQL instance using this parameter group!&lt;/p&gt;
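Once the instance is up, you can confirm from any MySQL client that the parameter took effect (standard MySQL syntax; the expected value is 1):

```sql
-- On the new RDS MySQL instance: verify autocommit is enabled
SELECT @@autocommit;
```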

&lt;h2&gt;
  
  
  2. Modify your RDS Aurora cluster to enable Binary Logs
&lt;/h2&gt;

&lt;p&gt;MySQL uses binary logs to record database changes: “The binary log contains ‘events’ that describe database changes such as table creation operations or changes to table data.”&lt;/p&gt;

&lt;p&gt;By default, binary logs are not enabled on Aurora, so we need to activate them. Let’s create an RDS Aurora DB cluster parameter group and enable binary logs.&lt;/p&gt;

&lt;p&gt;Create an RDS Aurora DB cluster parameter group, set &lt;code&gt;binlog_format = MIXED&lt;/code&gt;, assign this new parameter group to your Aurora cluster, and restart the cluster:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1fho55ct1ezfg0sn78m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1fho55ct1ezfg0sn78m.png" alt=" " width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Configure RDS Aurora Cluster to be ready for replication
&lt;/h2&gt;

&lt;p&gt;Create a MySQL user on your RDS Aurora cluster that is allowed to replicate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE USER 'repl_user'@'mydomain.com' IDENTIFIED WITH mysql_native_password BY 'password';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'mydomain.com';
FLUSH PRIVILEGES;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
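You can double-check the new user's privileges before moving on:

```sql
-- Confirm the replication grants on the Aurora cluster
SHOW GRANTS FOR 'repl_user'@'mydomain.com';
```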



&lt;p&gt;Get the current binlog file and position:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SHOW MASTER STATUS;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkm6il86xm69ysf0xkulq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkm6il86xm69ysf0xkulq.png" alt=" " width="800" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will use the values of the “File” and “Position” columns in the next steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Configure RDS MySQL instance to replicate RDS Aurora Cluster
&lt;/h2&gt;

&lt;p&gt;Connect to your RDS MySQL instance and run the following commands, substituting your Aurora endpoint, port, replication user, password, and the File and Position values from the previous step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CALL mysql.rds_set_external_master (
    'aurora_host'
  , 3306
  , 'repl_user'
  , 'password'
  , 'mysql-bin-changelog.036743'
  , 9187
  , 0
);
CALL mysql.rds_start_replication;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
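Once replication is started, you can monitor its health from the RDS MySQL replica (standard MySQL command; on recent MySQL versions SHOW REPLICA STATUS is the preferred spelling):

```sql
-- On the RDS MySQL replica: check replication health
SHOW SLAVE STATUS\G
-- Slave_IO_Running and Slave_SQL_Running should both report "Yes",
-- and Seconds_Behind_Master should trend toward 0.
```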



&lt;p&gt;You’re done: your RDS MySQL instance is now replicating your RDS Aurora cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;If you liked this post, you can find more on my blog &lt;a href="https://adrien-mornet.tech/" rel="noopener noreferrer"&gt;https://adrien-mornet.tech/&lt;/a&gt;&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;References :&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/refman/8.0/en/binary-log.html" rel="noopener noreferrer"&gt;https://dev.mysql.com/doc/refman/8.0/en/binary-log.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Replica.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Replica.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql_rds_set_external_master.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql_rds_set_external_master.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>welcome</category>
      <category>mentorship</category>
      <category>career</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
