<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Airat Yusuff</title>
    <description>The latest articles on DEV Community by Airat Yusuff (@khairahscorner).</description>
    <link>https://dev.to/khairahscorner</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F910377%2Fefa02e6a-d553-4557-937b-d3d297b8a375.jpeg</url>
      <title>DEV Community: Airat Yusuff</title>
      <link>https://dev.to/khairahscorner</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/khairahscorner"/>
    <language>en</language>
    <item>
      <title>Monitoring Java APIs with Amazon CloudWatch and AWS X-Ray</title>
      <dc:creator>Airat Yusuff</dc:creator>
      <pubDate>Thu, 14 Aug 2025 20:05:26 +0000</pubDate>
      <link>https://dev.to/aws-builders/monitoring-java-apis-on-ec2-with-amazon-cloudwatch-and-aws-x-ray-110a</link>
      <guid>https://dev.to/aws-builders/monitoring-java-apis-on-ec2-with-amazon-cloudwatch-and-aws-x-ray-110a</guid>
      <description>&lt;p&gt;I recently started a &lt;a href="https://platform.qa.com/programs/a559128e-6114-4384-811f-1ff91b0e1c0f/" rel="noopener noreferrer"&gt;“Smart Skill” training on Cloud Academy&lt;/a&gt; to learn more about monitoring and observability, as these are some essentials for anyone working within DevOps/Platform engineering. Having used platforms like Splunk for incident investigations, I wanted to learn more about AWS service offerings for observability (and that led me to the &lt;a href="https://platform.qa.com/lab/tracing-java-applications-with-aws-x-ray/?program=a559128e-6114-4384-811f-1ff91b0e1c0f" rel="noopener noreferrer"&gt;lab on AWS X-Ray&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In this post,&lt;/strong&gt; I’ll walk through the end-to-end process of configuring a Java API deployed on an EC2 instance, monitoring its logs with Amazon CloudWatch, and collecting traces with AWS X-Ray.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Although the lab provided a decent starting baseline, I reconfigured it substantially, so the two can be considered different projects; you can try working on both.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;A Java API with &lt;code&gt;/post&lt;/code&gt; and &lt;code&gt;/get&lt;/code&gt; endpoints that saves four rows of data to a DynamoDB table and fetches each row by its ID, deployed on an EC2 instance configured with the CloudWatch agent and X-Ray daemon for logging and tracing.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93wyn9qi4cew3jb5adtr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93wyn9qi4cew3jb5adtr.png" alt="AWS Architecture diagram" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open JDK 11&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Maven&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS X-Ray Daemon&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon CloudWatch agent&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;EC2&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IAM&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Pre-Requisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Familiarity with Java and/or building Java APIs (useful for debugging)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;An existing DynamoDB table for the API to store/retrieve data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Familiarity with the app code: &lt;a href="https://github.com/khairahscorner/xray-app" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The project could be set up with automation using Terraform and Ansible, but this post documents the manual deployment/configuration, because I wanted to work through each step and explore along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  1) AWS Setup
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Minimal EC2 Settings&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Name: xray-api
AMI: Ubuntu latest version
Instance type: t3.medium
Key pair: create one (or use an existing one)
Security group: allow inbound traffic on SSH (port 22) and custom TCP for Tomcat (port 8080)
Defaults for everything else
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An IAM role is also needed so the EC2 instance can perform tasks against DynamoDB, X-Ray and CloudWatch. Be sure to attach the role to the instance so that it can assume it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Trusted&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Entity&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;-&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;allows&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;EC&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;perform&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;actions&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;defined&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;policy&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"Service"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ec2.amazonaws.com"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sts:AssumeRole"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Permissions&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Policy&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DynamoDBLimitedAccess"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"dynamodb:PutItem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"dynamodb:DeleteItem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"dynamodb:GetItem"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"dynamodb:Scan"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"dynamodb:Query"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"dynamodb:UpdateItem"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;&amp;lt;ARN for your DynamoDB table&amp;gt;&amp;gt;"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"PermissionForXRayDaemon"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"xray:PutTraceSegments"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"xray:PutTelemetryRecords"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"logs:CreateLogGroup"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"logs:CreateLogStream"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"logs:PutLogEvents"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
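&lt;p&gt;To wire this up from the CLI rather than the console, the steps look roughly like the sketch below. This is hedged: the role, policy and profile names are placeholders I made up, and the permissions-policy file is assumed to contain the policy document above.&lt;/p&gt;

```shell
# Hedged sketch - write the trust policy (from above) to a file and check it parses.
echo '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Principal": {"Service": "ec2.amazonaws.com"}, "Action": "sts:AssumeRole"}]}' | tee /tmp/trust-policy.json
python3 -m json.tool /tmp/trust-policy.json

# Then (role/profile names below are placeholders) create the role and
# attach it to the instance via an instance profile:
# aws iam create-role --role-name xray-api-role --assume-role-policy-document file:///tmp/trust-policy.json
# aws iam put-role-policy --role-name xray-api-role --policy-name xray-api-permissions --policy-document file:///tmp/permissions-policy.json
# aws iam create-instance-profile --instance-profile-name xray-api-profile
# aws iam add-role-to-instance-profile --instance-profile-name xray-api-profile --role-name xray-api-role
# aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=xray-api-profile
```

&lt;p&gt;Attaching via an instance profile is what actually makes the role visible to the instance.&lt;/p&gt;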



&lt;p&gt;Once your instance is all set up and ready, SSH into it with your key pair.&lt;/p&gt;

&lt;h2&gt;
  
  
  2) Configure Instance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Install Java and configure paths
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://clouda-labs-assets.s3-us-west-2.amazonaws.com/aws-xray/openjdk-11%2B28_linux-x64_bin.tar.gz &lt;span class="nt"&gt;-O&lt;/span&gt; openjdk-11+28_linux-x64_bin.tar.gz
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xvf&lt;/span&gt; openjdk-11+28_linux-x64_bin.tar.gz
&lt;span class="nb"&gt;sudo mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; /usr/lib/jvm
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;jdk-11&lt;span class="k"&gt;*&lt;/span&gt; /usr/lib/jvm/java-11-openjdk     


&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;JAVA_HOME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/usr/lib/jvm/java-11-openjdk
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PATH&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;$JAVA_HOME&lt;/span&gt;/bin:&lt;span class="nv"&gt;$PATH&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'export JAVA_HOME=/usr/lib/jvm/java-11-openjdk'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bash_profile
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'export PATH=$JAVA_HOME/bin:$PATH:$HOME/.local/bin:$HOME/bin'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bash_profile
&lt;span class="nb"&gt;source&lt;/span&gt; ~/.bash_profile

&lt;span class="c"&gt;# check jdk version is 11&lt;/span&gt;
java &lt;span class="nt"&gt;-version&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install Maven
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://archive.apache.org/dist/maven/maven-3/3.9.6/binaries/apache-maven-3.9.6-bin.tar.gz
&lt;span class="nb"&gt;tar&lt;/span&gt; &lt;span class="nt"&gt;-xzf&lt;/span&gt; apache-maven-3.9.6-bin.tar.gz
&lt;span class="nb"&gt;sudo mv &lt;/span&gt;apache-maven-3.9.6 /opt/maven
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'export MAVEN_HOME=/opt/maven'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bash_profile
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s1"&gt;'export PATH=$MAVEN_HOME/bin:$PATH'&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; ~/.bash_profile
&lt;span class="nb"&gt;source&lt;/span&gt; ~/.bash_profile

&lt;span class="c"&gt;# check Maven version&lt;/span&gt;
mvn &lt;span class="nt"&gt;-v&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install AWS X-Ray Daemon
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;wget https://clouda-labs-assets.s3-us-west-2.amazonaws.com/aws-xray/aws-xray-daemon-linux-3.2.0.zip
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;unzip
unzip aws-xray-daemon-linux-3.2.0.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Configure CloudWatch (CW) agent
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Install CW agent&lt;/span&gt;
wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
&lt;span class="nb"&gt;sudo &lt;/span&gt;dpkg &lt;span class="nt"&gt;-i&lt;/span&gt; amazon-cloudwatch-agent.deb

&lt;span class="c"&gt;# create CW config file: this will create log groups for the API and X-Ray logs &lt;/span&gt;
&lt;span class="c"&gt;# (which will be collected from logs stored in files configured for the X-Ray and API system services&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /home/ubuntu/cw-config.json &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
{
  "agent": {
    "metrics_collection_interval": 60,
    "logfile": "/var/log/cw-agent.log"
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/home/ubuntu/api.log",
            "log_group_name": "xray-api-logs",
            "log_stream_name": "{instance_id}-app",
            "timestamp_format": "%d-%m-%Y %H:%M:%S"
          },
          {
            "file_path": "/home/ubuntu/xray-daemon.log",
            "log_group_name": "aws-xray-logs",
            "log_stream_name": "{instance_id}-xray",
            "timestamp_format": "%d-%m-%Y %H:%M:%S"
          }
        ]
      }
    }
  }
}
&lt;/span&gt;&lt;span class="no"&gt;EOF

&lt;/span&gt;&lt;span class="c"&gt;# Start running CW agent&lt;/span&gt;
&lt;span class="nb"&gt;sudo&lt;/span&gt; /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl &lt;span class="nt"&gt;-a&lt;/span&gt; fetch-config &lt;span class="nt"&gt;-m&lt;/span&gt; ec2 &lt;span class="nt"&gt;-c&lt;/span&gt; file:/home/ubuntu/cw-config.json &lt;span class="nt"&gt;-s&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Clone app from GitHub and setup
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone&lt;/span&gt;
git clone https://github.com/khairahscorner/xray-app.git

&lt;span class="c"&gt;# Set properties&lt;/span&gt;
&lt;span class="nb"&gt;cp &lt;/span&gt;xray-app/src/main/resources/application-example.properties application.properties
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;a+w application.properties

&lt;span class="c"&gt;# Add needed variables to the properties file&lt;/span&gt;
aws.region&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;value&amp;gt;
dynamodb.table&lt;span class="o"&gt;=&lt;/span&gt;&amp;lt;value&amp;gt;

&lt;span class="c"&gt;# Package JAR&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;xray-app
mvn package
&lt;span class="nb"&gt;mv &lt;/span&gt;target/xray-app-1.jar target/app.jar &lt;span class="c"&gt;#rename jar file&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
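&lt;p&gt;To be clear, the &lt;code&gt;aws.region&lt;/code&gt; and &lt;code&gt;dynamodb.table&lt;/code&gt; lines above are contents of the properties file, not shell commands. The finished &lt;code&gt;application.properties&lt;/code&gt; looks something like this (the values shown are placeholders):&lt;/p&gt;

```properties
# /home/ubuntu/application.properties (placeholder values)
aws.region=eu-west-2
dynamodb.table=xray-app-table
```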



&lt;ul&gt;
&lt;li&gt;Setup System Service files
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# service file for the X-Ray daemon&lt;/span&gt;
&lt;span class="nb"&gt;sudo cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/systemd/system/xray_daemon.service &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
[Unit]
Description=AWS X-Ray Daemon
After=network.target

[Service]
ExecStart=/home/ubuntu/xray --region eu-west-2
Restart=always
User=ubuntu
StandardOutput=file:/home/ubuntu/xray-daemon.log
StandardError=file:/home/ubuntu/xray-daemon.log

[Install]
WantedBy=multi-user.target
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# service file for the Java API itself&lt;/span&gt;
&lt;span class="nb"&gt;sudo cat&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /etc/systemd/system/java_api.service &lt;span class="o"&gt;&amp;lt;&amp;lt;&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="no"&gt;EOF&lt;/span&gt;&lt;span class="sh"&gt;'
[Unit]
Description=XRay Java API
After=network.target

[Service]
User=ubuntu
ExecStart=/usr/lib/jvm/java-11-openjdk/bin/java -jar /home/ubuntu/xray-app/target/app.jar --spring.config.location=file:/home/ubuntu/application.properties
SuccessExitStatus=143
Restart=on-failure
RestartSec=10
StandardOutput=file:/home/ubuntu/api.log
StandardError=file:/home/ubuntu/api.log

[Install]
WantedBy=multi-user.target
&lt;/span&gt;&lt;span class="no"&gt;EOF
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
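&lt;p&gt;A quick note on writing these unit files: prefer &lt;code&gt;sudo tee&lt;/code&gt; over &lt;code&gt;sudo cat &amp;gt; file&lt;/code&gt;, because with the latter the redirection is performed by your non-root shell before &lt;code&gt;sudo&lt;/code&gt; runs, so writing under &lt;code&gt;/etc/systemd/system&lt;/code&gt; fails with a permission error. &lt;code&gt;tee&lt;/code&gt; opens the target file itself (as root when run under sudo). A minimal local illustration of the pattern, using &lt;code&gt;/tmp&lt;/code&gt; so no sudo is needed:&lt;/p&gt;

```shell
# tee opens the target file itself and writes its stdin to it, so under sudo
# it can write where your own shell's redirection cannot.
printf '[Unit]\nDescription=Demo service\n' | tee /tmp/demo.service
cat /tmp/demo.service
```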



&lt;ul&gt;
&lt;li&gt;Start Services
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Start X-Ray Daemon&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;xray_daemon
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start xray_daemon

&lt;span class="c"&gt;# Start Java API service&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl daemon-reload
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start java_app
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;java_app

&lt;span class="c"&gt;# check service statuses&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status xray_daemon
&lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl status java_app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3) Deployed App
&lt;/h2&gt;

&lt;p&gt;You should now be able to access the app at the instance’s public IP address/DNS:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://&amp;lt;public ip address/dns&amp;gt;:8080/post&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;http://&amp;lt;public ip address/dns&amp;gt;:8080/get?id=1&lt;/code&gt;&lt;/p&gt;
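&lt;p&gt;As a quick smoke test, you can hit the &lt;code&gt;/get&lt;/code&gt; endpoint for each of the four seeded rows. A small sketch (the host below is a placeholder for your instance’s public IP or DNS):&lt;/p&gt;

```shell
# Placeholder host; substitute your instance's public IP address or DNS name.
HOST="203.0.113.10"

# Print the curl command for each of the four seeded rows; drop the echo to run them.
for id in 1 2 3 4; do
  echo "curl -s http://${HOST}:8080/get?id=${id}"
done
```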

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11n8muk98vlc5yfo5exv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11n8muk98vlc5yfo5exv.png" alt="java api get endpoint" width="800" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftg33g0195tizs3vr6dv8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftg33g0195tizs3vr6dv8.png" alt="java api get error endpoint" width="800" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  X-Ray Traces
&lt;/h3&gt;

&lt;p&gt;Tracing makes it easy to spot failed endpoint calls; it also records minimal details for requests that complete successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjx073h3qp0c5spgs2zy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqjx073h3qp0c5spgs2zy.png" alt="AWS X-Ray traces" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvx9dnpg8pg2tudach972.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvx9dnpg8pg2tudach972.png" alt="AWS X-Ray traces" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4cykd6tjdpqz9wd8h1n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4cykd6tjdpqz9wd8h1n.png" alt="AWS X-Ray traces" width="800" height="728"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  CloudWatch Logs
&lt;/h3&gt;

&lt;p&gt;Configuring the CloudWatch agent was a last-minute add-on because I wanted to store the app logs somewhere more visible than a file within the instance. I configured the agent to collect logs from both the API and daemon log files, which gave me an overview of how log groups keep logs separate and organised.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ky3h8qcsukftsnxaoqj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ky3h8qcsukftsnxaoqj.png" alt="Amazon cloudwatch logs" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jhexxfdwbldkotvtb01.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jhexxfdwbldkotvtb01.png" alt="Amazon cloudwatch logs" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshooting
&lt;/h2&gt;

&lt;p&gt;I ran into quite a number of issues but resolved them by spending considerable time with the AWS SDK documentation and ChatGPT (mostly debugging the Java code I modified).&lt;/p&gt;

&lt;p&gt;Overall:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Ensure you have configured the IAM role with the right permissions for all the necessary services and attached it correctly to the EC2 instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you’re modifying the app code, ensure it still works correctly with AWS Java SDK v2. You can test the app locally to verify, but make sure you have AWS SSO or access keys configured locally.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Useful References:&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started-auth.html" rel="noopener noreferrer"&gt;AWS Authentication with SDK for Java&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/migration.html" rel="noopener noreferrer"&gt;Migrating AWS Java SDK v1 to v2&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html" rel="noopener noreferrer"&gt;SDK Credential Providers&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;With this project, I’m a bit more comfortable using X-Ray for traces and sending logs over to CloudWatch. The automated version will allow me to get a refresher on working with Ansible, so that’s up next!&lt;/p&gt;

&lt;p&gt;Thanks for reading!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Till Next Time&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>observability</category>
      <category>cloudwatch</category>
      <category>xray</category>
    </item>
    <item>
      <title>Leveraging Lambda Functions (with Amazon SNS, S3 &amp; EventBridge)</title>
      <dc:creator>Airat Yusuff</dc:creator>
      <pubDate>Mon, 28 Jul 2025 13:06:39 +0000</pubDate>
      <link>https://dev.to/aws-builders/leveraging-lambda-functions-with-amazon-sns-s3-eventbridge-4pob</link>
      <guid>https://dev.to/aws-builders/leveraging-lambda-functions-with-amazon-sns-s3-eventbridge-4pob</guid>
      <description>&lt;p&gt;For the longest time, I couldn’t quite grasp or fully understand how serverless services on AWS like Lambda functions ACTUALLY work in real-world scenarios and products. Partly because I wasn’t too familiar with event-driven architecture or built projects that fully implemented it.&lt;/p&gt;

&lt;p&gt;That has now changed because I finally took some time to explore it with a scenario use-case. &lt;strong&gt;In this post&lt;/strong&gt;, you’ll learn how to use AWS Lambda, S3, Amazon SNS, and EventBridge to build yourself a nice little weather alert system.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Summary
&lt;/h3&gt;

&lt;p&gt;I modified the &lt;a href="https://github.com/ifeanyiro9/game-day-notifications" rel="noopener noreferrer"&gt;second project of the #DevOpsAllStarsChallenge&lt;/a&gt; (mainly because I didn’t want to use the sports data API); instead, I built a notification system that sends email alerts with weather data for the city of Manchester, UK. It’s made up of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;a Lambda function that sends a summary of the current weather data for the city to a user email subscribed to an SNS topic; this is hooked up to an EventBridge rule with a morning, afternoon and evening schedule.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;another Lambda function that retrieves the most recently added weather data file to the S3 bucket from &lt;a href="https://khairahscorner.hashnode.dev/build-and-deploy-weather-app-using-streamlit-and-aws-ecs-with-fargate" rel="noopener noreferrer"&gt;this project&lt;/a&gt; and sends the summary to the user email subscribed to the same SNS topic; it is triggered whenever a new search is made via the dashboard and saved to storage.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
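&lt;p&gt;For a rough idea of the first function, here is a minimal sketch of a handler that publishes a weather summary to an SNS topic. It is an illustration only: the payload field names, the &lt;code&gt;SNS_TOPIC_ARN&lt;/code&gt; environment variable and the injectable client are my assumptions, not the project’s actual code.&lt;/p&gt;

```python
import json
import os

def build_summary(weather):
    # Format a short text summary from a weather payload.
    # The field names here are illustrative, not the project's exact schema.
    return (
        f"Weather in {weather['city']}: {weather['description']}, "
        f"{weather['temp']}°C (feels like {weather['feels_like']}°C)"
    )

def process(event, context, sns_client=None):
    # Lambda entry point, matching the `alerts_lambda.process` handler name.
    # `sns_client` can be injected for local testing; in Lambda it defaults to boto3.
    if sns_client is None:
        import boto3  # imported lazily so the module loads without boto3 installed
        sns_client = boto3.client("sns")
    message = build_summary(event["weather"])
    sns_client.publish(
        TopicArn=os.environ["SNS_TOPIC_ARN"],
        Subject="Weather update",
        Message=message,
    )
    return {"statusCode": 200, "body": json.dumps({"sent": message})}
```

&lt;p&gt;Injecting the client also makes the function easy to exercise locally with a fake SNS client before wiring it up to the real topic.&lt;/p&gt;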

&lt;blockquote&gt;
&lt;p&gt;You can read the full project and its feature summary &lt;a href="https://github.com/khairahscorner/notification_alerts" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;One thing I’ve kept in mind since I started the challenge is not to underestimate the power of putting together an architectural diagram. Mine may not be the best, but I quickly realised that drawing up the solution first helped me visualise and build out the projects better.&lt;/p&gt;

&lt;p&gt;This was the first diagram I drew, but I eventually switched to using native S3 Event Notifications for the &lt;code&gt;fetch from S3&lt;/code&gt; function, for reasons explained further below (the diagram still shows how to set up the original approach).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgplfvhkz0ubuf5g5m5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzgplfvhkz0ubuf5g5m5x.png" alt="Project architecture" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Build Process
&lt;/h3&gt;

&lt;p&gt;I first tried building it all out via the AWS console but, as with &lt;a href="https://khairahscorner.hashnode.dev/build-and-deploy-weather-app-using-streamlit-and-aws-ecs-with-fargate" rel="noopener noreferrer"&gt;the first project&lt;/a&gt;, I then wrote scripts to build it all using &lt;code&gt;aws-cli&lt;/code&gt; commands instead. The &lt;a href="https://awscli.amazonaws.com/v2/documentation/api/latest/index.html" rel="noopener noreferrer"&gt;aws-cli documentation&lt;/a&gt; was my main source of truth, and I learnt a lot navigating it for the different services I needed for the project.&lt;/p&gt;

&lt;p&gt;I wrote a brief note on how I worked through each section of the architecture &lt;a href="https://github.com/khairahscorner/notification_alerts/blob/main/docs.md" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  Repo structure:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqr73zs9dsmsr1sp2nsqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqr73zs9dsmsr1sp2nsqb.png" alt="GitHub repo folder structure" width="800" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Gotchas
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;I fell into the trap of not specifying the &lt;code&gt;--region&lt;/code&gt; option so many times. I was working in the &lt;code&gt;eu-west-2&lt;/code&gt; region but hadn’t updated my IDE’s credentials and config files from &lt;code&gt;us-east-1&lt;/code&gt;, so I spent a lot of extra time resolving errors that eventually turned out to be region mismatches.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pay extra attention when creating policies and roles; it’s easy to make mistakes when inlining JSON policy documents.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It got a bit tricky creating the Lambda functions via &lt;code&gt;aws-cli&lt;/code&gt; commands, especially when adding the function code. I had to ensure that the &lt;code&gt;.zip&lt;/code&gt; files contained the most up-to-date versions of the code files; the file names also need to match the handlers defined for the functions and the main execution methods in the code.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;e.g., for &lt;code&gt;alerts_lambda.py&lt;/code&gt;, which uses a &lt;code&gt;process()&lt;/code&gt; function for the main execution, the zip file was named &lt;code&gt;alerts_lambda.zip&lt;/code&gt; and the handler in my scripts was &lt;code&gt;alerts_lambda.process&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; aws lambda create-function --function-name $Function1Name --runtime python3.9 \
        --zip-file fileb://src/alerts_lambda.zip --handler alerts_lambda.process \
        --timeout 10 --role arn:aws:iam::${AWS_ACCOUNT_ID}:role/$Function1RoleName
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;The biggest problem I encountered definitely has to be setting up the EventBridge rules: &lt;strong&gt;I couldn’t get the rule using an S3 event pattern to work!&lt;/strong&gt; Despite trying many fixes, it just wasn’t invoking the Lambda function when an object was placed in the bucket.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Turns out I had completely ignored the alert clearly telling users to turn on the configuration that allows S3 buckets to send notifications to EventBridge for all events that occur within the bucket (a perk of using the management console, I guess).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F202fznj8gvzm3qmgyw29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F202fznj8gvzm3qmgyw29.png" alt="s3-event notifications alert" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In hindsight, the entire ordeal taught me about the better option to consider when you only need to invoke Lambda functions from simple S3 event triggers: &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventNotifications.html" rel="noopener noreferrer"&gt;S3 Event Notifications&lt;/a&gt;.&lt;/p&gt;

&lt;h4&gt;
  
  
  S3 Event Notifications
&lt;/h4&gt;

&lt;p&gt;This is a more native approach for configuring what should happen when specified events occur within an S3 bucket. The destinations can be services like Lambda, SNS, or even EventBridge. In this case, I wanted to invoke the Lambda function that handles the data retrieval, preprocessing and sending to SNS, so Lambda it was.&lt;/p&gt;

&lt;p&gt;Another benefit of this approach is its &lt;strong&gt;near-instant “activation”&lt;/strong&gt;: with an EventBridge rule, there might be some delay before it starts working. EventBridge is also better suited to complex event patterns, whereas this was a simple case of monitoring for new objects added to an S3 bucket.&lt;/p&gt;

&lt;p&gt;However, this does not mean it cannot be implemented with EventBridge; just make sure to turn on the configuration mentioned above.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Setup with S3 Event Notification&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3api put-bucket-notification-configuration \
    --bucket $BUCKET_NAME --region $AWS_REGION \
    --notification-configuration '{
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:'"$AWS_REGION"':'"$AWS_ACCOUNT_ID"':function:'"$FunctionName"'",
                "Events": ["s3:ObjectCreated:*"]
            }
        ]
    }'

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Setup with EventBridge using event pattern&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
EVENT_PATTERN=$(cat &amp;lt;&amp;lt;EOF
{
  "source": ["aws.s3"],
  "detail-type": ["Object Created"],
  "detail": {
    "bucket": {
      "name": ["$BUCKET_NAME"]
    }
  }
}
EOF
)

aws events put-rule --name $TRIGGER_NAME --event-pattern "$EVENT_PATTERN" --region $AWS_REGION

aws events put-targets --rule $TRIGGER_NAME \
        --targets "Id"="1","Arn"="arn:aws:lambda:$AWS_REGION:$AWS_ACCOUNT_ID:function:$FunctionName"

aws lambda add-permission \
        --function-name $FunctionName \
        --statement-id $EventBridgePermissionStatementId \
        --action "lambda:InvokeFunction" \
        --principal "events.amazonaws.com" \
        --source-arn "arn:aws:events:$AWS_REGION:$AWS_ACCOUNT_ID:rule/$TRIGGER_NAME" \
        --region $AWS_REGION

aws s3api put-bucket-notification-configuration --bucket $BUCKET_NAME --region $AWS_REGION \
    --notification-configuration='{ "EventBridgeConfiguration": {} }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;This was a pretty good project, and I’m glad I modified the idea and worked on it using both the management console and &lt;code&gt;aws-cli&lt;/code&gt; commands.&lt;/p&gt;

&lt;p&gt;I’ll be skipping &lt;a href="https://github.com/alahl1/NBADataLake" rel="noopener noreferrer"&gt;Project 3&lt;/a&gt; as I wasn’t that interested in the services it uses, but I’ve definitely saved it for when I might want to try it out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ifeanyiro9/containerized-sports-api" rel="noopener noreferrer"&gt;Project 4&lt;/a&gt; is also out now and it looks pretty good: it uses ECS (Fargate), which I already tried &lt;a href="https://khairahscorner.hashnode.dev/build-and-deploy-weather-app-using-streamlit-and-aws-ecs-with-fargate" rel="noopener noreferrer"&gt;here&lt;/a&gt;, services I want to refresh my knowledge of (API Gateway, Load Balancing), and some other interesting ones for the enhancements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Till Next time✨&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>sns</category>
      <category>s3</category>
      <category>eventbridge</category>
    </item>
    <item>
      <title>Deploying Scalable APIs With Terraform and GitHub Actions</title>
      <dc:creator>Airat Yusuff</dc:creator>
      <pubDate>Fri, 25 Jul 2025 16:02:57 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploying-scalable-apis-with-terraform-and-github-actions-hoj</link>
      <guid>https://dev.to/aws-builders/deploying-scalable-apis-with-terraform-and-github-actions-hoj</guid>
<description>&lt;p&gt;Scalability is a fundamental concept to consider when building efficient cloud solutions. Along with security, it is highlighted in &lt;a href="https://www.wellarchitectedlabs.com/" rel="noopener noreferrer"&gt;&lt;strong&gt;Well-Architected&lt;/strong&gt;&lt;/a&gt; framework pillars like &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/cost-optimization-pillar/manage-demand-and-supply-resources.html" rel="noopener noreferrer"&gt;&lt;strong&gt;Cost Optimisation&lt;/strong&gt;&lt;/a&gt; and &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/design-principles.html" rel="noopener noreferrer"&gt;&lt;strong&gt;Reliability&lt;/strong&gt;&lt;/a&gt;. I explored these two concepts in the third project I built as part of the &lt;strong&gt;#DevOpsAllStarsChallenge&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In this post&lt;/strong&gt;, you will learn how to deploy a scalable public-facing API using AWS services like API Gateway and Elastic Container Service (ECS). In a way, it enhances &lt;a href="https://khairahscorner.hashnode.dev/build-and-deploy-weather-app-using-streamlit-and-aws-ecs-with-fargate" rel="noopener noreferrer"&gt;the first project&lt;/a&gt; for the challenge by rearchitecting the infrastructure for security and using extra tooling like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; to provision/manage the infrastructure, and&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions&lt;/strong&gt; for automated CD workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Project Summary
&lt;/h3&gt;

&lt;p&gt;I created a containerised API management system for querying weather data: a web API built with Flask. It exposes a &lt;code&gt;/weather&lt;/code&gt; endpoint that takes a &lt;code&gt;city&lt;/code&gt; query parameter (and if the parameter is not provided, it falls back to a default &lt;code&gt;city&lt;/code&gt; value of Manchester).&lt;/p&gt;
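&lt;p&gt;The routing and default-parameter behaviour can be sketched in a few lines of Flask. This is a stub under my own assumptions (it echoes the resolved city rather than calling a weather API), not the project’s actual handler.&lt;/p&gt;

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/weather")
def weather():
    # Fall back to the default city when no ?city= parameter is supplied
    city = request.args.get("city", "Manchester")
    # The real endpoint queries weather data here; this stub only
    # demonstrates the query-parameter handling.
    return jsonify({"city": city})
```

&lt;p&gt;Requests to &lt;code&gt;/weather?city=Lagos&lt;/code&gt; resolve to Lagos, while a bare &lt;code&gt;/weather&lt;/code&gt; falls back to Manchester.&lt;/p&gt;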

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1738828557777%2Fc03e7d8d-0a97-4dc4-8dc7-7320dd95798d.gif%3Fauto%3Dformat%2Ccompress%26gif-q%3D60%26format%3Dwebm" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1738828557777%2Fc03e7d8d-0a97-4dc4-8dc7-7320dd95798d.gif%3Fauto%3Dformat%2Ccompress%26gif-q%3D60%26format%3Dwebm" alt="overview of project results" width="720" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Main Concepts Covered
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API management&lt;/strong&gt; with API Gateway and Application Load Balancers for security and scalability&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Public and private subnetting&lt;/strong&gt; architecture for enhanced resource protection&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Infrastructure-as-Code&lt;/strong&gt; with Terraform&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Terraform state management&lt;/strong&gt; with backend blocks (S3 type)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Continuous Deployment&lt;/strong&gt; with GitHub Actions&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Read more about the project on &lt;a href="https://github.com/khairahscorner/scalable-containerised-api" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwvv87xucarcdn9fgm68.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwvv87xucarcdn9fgm68.png" alt="project architecture" width="800" height="471"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This architecture would also work without the API gateway: if the load balancer is internet-facing (in a public subnet), it has an accessible DNS name. However, I wanted to try API Gateway, and it was a good opportunity to learn about restricting access to load balancers through security groups.&lt;/p&gt;

&lt;h3&gt;
  
  
  Repo Structure
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdm00ch3jshk2w6b9cgx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhdm00ch3jshk2w6b9cgx.png" alt="repo structure" width="800" height="643"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure Configuration
&lt;/h3&gt;

&lt;p&gt;I used Terraform to provision the entire architecture, including the ECS service creation (which needs an image already pushed to ECR for the task definition). This will be covered further below.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;All Terraform files can be found &lt;a href="https://github.com/khairahscorner/scalable-containerised-api/tree/main/terraform" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Step 1: AWS Environment Setup
&lt;/h4&gt;

&lt;p&gt;The following resources are set up within the &lt;code&gt;aws_environment&lt;/code&gt; module:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1) custom VPC with public and private subnets:&lt;/strong&gt; This was easy to set up using the Terraform &lt;strong&gt;&lt;a href="https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/latest" rel="noopener noreferrer"&gt;AWS VPC module&lt;/a&gt;&lt;/strong&gt;. You configure the module, and Terraform handles the creation of all the relevant resources without you needing to write the code for each one yourself.&lt;/p&gt;

&lt;p&gt;I chose &lt;code&gt;10.16.0.0/16&lt;/code&gt; for the VPC (i.e., 65,536 available IP addresses from &lt;code&gt;10.16.0.0&lt;/code&gt; to &lt;code&gt;10.16.255.255&lt;/code&gt;) and subnets (&lt;code&gt;10.16.12.0/24&lt;/code&gt;, &lt;code&gt;10.16.24.0/24&lt;/code&gt;, &lt;code&gt;10.16.36.0/24&lt;/code&gt; and &lt;code&gt;10.16.48.0/24&lt;/code&gt; with 256 IP addresses each).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.techtarget.com/whatis/definition/RFC-1918#:~:text=The%20RFC%20reserves%20the%20following%20ranges%20of%20IP%20addresses%20that%20cannot%20be%20routed%20on%20the%20Internet" rel="noopener noreferrer"&gt;RFC 1918 on reserved ranges for private IP addresses&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A NAT gateway is enabled so that resources in the private subnets can initiate outbound connections to the internet without being directly reachable from it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
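&lt;p&gt;The subnet maths above is easy to sanity-check with Python’s standard &lt;code&gt;ipaddress&lt;/code&gt; module:&lt;/p&gt;

```python
import ipaddress

vpc = ipaddress.ip_network("10.16.0.0/16")
print(vpc.num_addresses, vpc[0], vpc[-1])  # 65536 10.16.0.0 10.16.255.255

# Each /24 subnet sits inside the VPC range and holds 256 addresses
for cidr in ["10.16.12.0/24", "10.16.24.0/24", "10.16.36.0/24", "10.16.48.0/24"]:
    net = ipaddress.ip_network(cidr)
    assert net.subnet_of(vpc) and net.num_addresses == 256
```

&lt;p&gt;(Note that AWS reserves five addresses in every subnet, so the usable count per /24 is slightly lower than 256.)&lt;/p&gt;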

&lt;p&gt;&lt;strong&gt;2) Access management:&lt;/strong&gt; IAM roles, policies to attach to the roles, security groups (allow only traffic from API gateway into load balancer, allow only traffic from load balancer into ECS service).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) ECR repository&lt;/strong&gt; for the container images.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 2: ECS Setup
&lt;/h4&gt;

&lt;p&gt;Found in &lt;code&gt;modules/ecs_setup&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1) Application Load Balancer&lt;/strong&gt;, including target group and listener;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2) ECS:&lt;/strong&gt; cluster, task definition for the service, and the service itself.&lt;/p&gt;

&lt;h4&gt;
  
  
  Step 3: API Gateway
&lt;/h4&gt;

&lt;p&gt;Found in &lt;code&gt;modules/api_gateway_setup&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1)&lt;/strong&gt; All required configurations to create the gateway and integrate with the load balancer&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2)&lt;/strong&gt; An update to the load balancer’s security group to ensure it only allows incoming traffic from the API Gateway IP ranges.&lt;/p&gt;
&lt;h3&gt;
  
  
  Continuous Deployment
&lt;/h3&gt;

&lt;p&gt;I used GitHub Actions to automate the infrastructure deployment and management process. First, I outlined the jobs I wanted to run and the steps for each of them. This was continuously refined as I worked through the resources to be deployed.&lt;/p&gt;
&lt;h4&gt;
  
  
  Workflow Structure
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrhmdef257jswqa2v1oa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkrhmdef257jswqa2v1oa.png" alt="GHA workflow file" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also experimented with managing deployments using &lt;strong&gt;&lt;a href="https://docs.github.com/en/actions/managing-workflow-runs-and-deployments/managing-deployments/managing-environments-for-deployment" rel="noopener noreferrer"&gt;GHA’s environments&lt;/a&gt;&lt;/strong&gt;. This was a way of adding an extra level of authorisation with Terraform, by configuring the environment to have deployment protection rules.&lt;/p&gt;

&lt;p&gt;In this instance, I added myself as a required reviewer; so for every run of the workflow, the &lt;code&gt;aws_environment_setup&lt;/code&gt; job needs an approval because it uses the configured environment. Once approved, all jobs with &lt;code&gt;terraform apply&lt;/code&gt; commands run with the auto-approval option specified in the command.&lt;/p&gt;
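&lt;p&gt;As a rough sketch, the relevant part of such a workflow looks something like the following; the environment name and step details are illustrative, not the repo’s exact workflow file.&lt;/p&gt;

```yaml
jobs:
  aws_environment_setup:
    runs-on: ubuntu-latest
    # Referencing an environment that has required reviewers pauses the run
    # here until the deployment is approved
    environment: aws-deploy   # illustrative environment name
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform apply -auto-approve -target=module.aws_environment
```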
&lt;h3&gt;
  
  
  Debugging Issues
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1) Terraform Syntax errors&lt;/strong&gt;&lt;br&gt;
If you are new to provisioning these resources with Terraform, you will rely a lot on the documentation; it takes a lot of time and effort to ensure you properly configure what you need.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2) Resources With Dependencies&lt;/strong&gt;&lt;br&gt;
I realised that I needed to split how I provision the resources: e.g., the ECR repository needs to already exist before I can run the script that builds and pushes the API’s Docker image, which is then used for the ECS service task definition.&lt;/p&gt;

&lt;p&gt;As a result, one job runs a partial &lt;code&gt;terraform apply&lt;/code&gt; with the &lt;code&gt;-target&lt;/code&gt; option for the &lt;code&gt;aws_environment&lt;/code&gt; module, the next job runs the script, and a final job runs &lt;code&gt;terraform apply&lt;/code&gt; as a whole to provision the rest of the infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) Remote Terraform State Management&lt;/strong&gt;&lt;br&gt;
This was one of the issues I spent quite some time on. Terraform uses state files to keep track of the changes it has applied to provision infrastructure and of new additions to the config files. However, this works differently when running Terraform via GitHub Actions, because local state files on the runner are not persisted between workflow runs.&lt;/p&gt;

&lt;p&gt;I learnt about the usage of a &lt;strong&gt;&lt;a href="https://developer.hashicorp.com/terraform/language/backend" rel="noopener noreferrer"&gt;backend block&lt;/a&gt;&lt;/strong&gt; to manage where Terraform stores its state files, but this was quite tricky because of the split provisioning I needed to do. Eventually, I was able to initialise the backend correctly (I used the S3 type, so the state file is stored in an S3 bucket).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    bucket = "devops-challenge-tf-state-files"
    key    = "files/terraform.tfstate"
    region = "eu-west-2"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4) Gateway URL returning error messages&lt;/strong&gt;&lt;br&gt;
I encountered a network/endpoint error during one of the deployments; it happened after I changed the &lt;code&gt;cidr_block&lt;/code&gt; for my load balancer’s security group to only accept traffic from the public subnets (with the intention that it should only allow traffic from the gateway).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;However, API Gateway is a fully managed service&lt;/strong&gt; that uses managed IPs, i.e., ones you did not configure, so they are not within your custom VPC/subnets. It took looking through my code again to realise I had never specified my VPC/subnets in the gateway configuration, so I could not assume the gateway resource was inside my VPC.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the &lt;code&gt;aws_ip_ranges&lt;/code&gt; data source instead, which returns the IP ranges of AWS services. &lt;strong&gt;The downside&lt;/strong&gt;, however, is that it caused failures when running &lt;code&gt;terraform destroy&lt;/code&gt;; I had to retrieve the IP addresses manually (from the workflow logs) to destroy my setup.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also encountered &lt;em&gt;‘Missing Authentication Token’&lt;/em&gt; errors when trying to access the gateway’s base URL or &lt;code&gt;/health&lt;/code&gt; for load balancer health checks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2rhko0yrie45jdbbxhj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2rhko0yrie45jdbbxhj.png" alt="error message" width="800" height="204"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This was because you need to configure a resource, method, and integration for each endpoint.&lt;/p&gt;
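&lt;p&gt;In Terraform terms, each path the gateway should serve needs something along these lines. This is a hedged sketch assuming a REST API with an HTTP proxy integration to the load balancer; the resource names and references are hypothetical, not the project’s configuration.&lt;/p&gt;

```hcl
# Illustrative only: all names and references here are assumptions
resource "aws_api_gateway_resource" "health" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_rest_api.api.root_resource_id
  path_part   = "health"
}

resource "aws_api_gateway_method" "health_get" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.health.id
  http_method   = "GET"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "health" {
  rest_api_id             = aws_api_gateway_rest_api.api.id
  resource_id             = aws_api_gateway_resource.health.id
  http_method             = aws_api_gateway_method.health_get.http_method
  type                    = "HTTP_PROXY"
  integration_http_method = "GET"
  uri                     = "http://${aws_lb.alb.dns_name}/health"
}
```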




&lt;h3&gt;
  
  
  Conclusions
&lt;/h3&gt;

&lt;p&gt;This was a comprehensive project that took many hours and a lot of debugging; I was able to acquire new knowledge by building it incrementally and going back to improve it even more.&lt;/p&gt;

&lt;p&gt;I still need to read more on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;how to configure an API gateway to route requests from its root URL to the load balancer (and the service target group);&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how to retrieve the IP ranges for API Gateway to restrict access to the load balancer;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;how to configure health checks correctly.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I’d also want to explore these &lt;a href="https://github.com/khairahscorner/scalable-containerised-api/tree/main/enhancements-todo" rel="noopener noreferrer"&gt;enhancements&lt;/a&gt; that use Elasticache and DynamoDB.&lt;/p&gt;

&lt;p&gt;There have been more videos published for the challenge but thankfully, they are IaC versions of existing projects, so I’m not lagging behind much.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Till Next time✨&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>githubactions</category>
      <category>ecs</category>
    </item>
    <item>
      <title>Build and Deploy Streamlit (Python) App on AWS ECS with Fargate</title>
      <dc:creator>Airat Yusuff</dc:creator>
      <pubDate>Fri, 25 Jul 2025 15:44:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/build-and-deploy-streamlit-python-app-on-aws-ecs-with-fargate-9hk</link>
      <guid>https://dev.to/aws-builders/build-and-deploy-streamlit-python-app-on-aws-ecs-with-fargate-9hk</guid>
<description>&lt;p&gt;This was my first time coming across Streamlit, and it turned out to be a great way to convert a Python API into a browser-accessible web app. &lt;br&gt;
&lt;strong&gt;In this post,&lt;/strong&gt; I’ll be sharing how I worked on the first project: a weather data collection system that uses the OpenWeather API.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/ShaeInTheCloud/30days-weather-dashboard" rel="noopener noreferrer"&gt;This was the original project&lt;/a&gt;, but the whole purpose of the challenge is to learn, and participants are free to enhance the projects however they see fit, so I modified it into a UI-facing app instead. I also deployed the app into the AWS ecosystem, mostly using scripts to batch the entire process once I knew the steps I wanted to carry out.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Mine: &lt;a href="https://github.com/khairahscorner/weather-dashboard" rel="noopener noreferrer"&gt;https://github.com/khairahscorner/weather-dashboard&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3&gt;
  
  
  Concepts Covered
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Python app development (Streamlit)&lt;/li&gt;
&lt;li&gt;Infrastructure as Code (Python SDK, AWS CLI commands)&lt;/li&gt;
&lt;li&gt;Cloud Storage (AWS S3)&lt;/li&gt;
&lt;li&gt;Containerisation (Docker)&lt;/li&gt;
&lt;li&gt;Container App Deployment (AWS ECS with Fargate)&lt;/li&gt;
&lt;li&gt;CI/CD (GitHub Actions, AWS)&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  App Features
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Fetches real-time weather data for any city of choice (input via UI)&lt;/li&gt;
&lt;li&gt;Displays weather conditions like temperature (°F), humidity, etc&lt;/li&gt;
&lt;li&gt;Automatically saves weather data as JSON files in AWS S3 with timestamps for historical tracking&lt;/li&gt;
&lt;/ul&gt;
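&lt;p&gt;The timestamped-save feature can be sketched as follows. The key layout and function names are my own illustration, and the S3 client is injected so the logic can run without AWS credentials; in the real app it would be a &lt;code&gt;boto3&lt;/code&gt; client.&lt;/p&gt;

```python
import json
from datetime import datetime, timezone

def build_s3_key(city, prefix="weather-data"):
    # Timestamped object key, e.g. weather-data/manchester-20250725-154401.json
    # (the layout is illustrative, not necessarily the project's exact scheme)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    return f"{prefix}/{city.lower()}-{stamp}.json"

def save_weather_data(s3_client, bucket, city, data):
    # `s3_client` would be boto3.client("s3") in the real app; injecting it
    # keeps the function testable without AWS access
    key = build_s3_key(city)
    s3_client.put_object(
        Bucket=bucket,
        Key=key,
        Body=json.dumps(data),
        ContentType="application/json",
    )
    return key
```

&lt;p&gt;Because each upload gets a fresh timestamp in its key, old files are never overwritten, which is what enables the historical tracking.&lt;/p&gt;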
&lt;h3&gt;
  
  
  1) Building and Running the App Locally
&lt;/h3&gt;

&lt;p&gt;The full script was already provided, so I focused on modifying it to work as a Streamlit app; my full code can be found on &lt;a href="https://github.com/khairahscorner/weather-dashboard" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;I modified the &lt;code&gt;requirements.txt&lt;/code&gt; to include &lt;code&gt;streamlit&lt;/code&gt; and one other dependency (&lt;code&gt;pyarrow&lt;/code&gt;) that kept causing an issue.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Run Python apps/scripts in virtual environments to ensure installed dependencies are isolated:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create an environment e.g named .venv
virtualenv -p python3 .venv

# Activate the environment
source .venv/bin/activate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Run &lt;code&gt;pip install -r requirements.txt&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Start app with &lt;code&gt;streamlit run &amp;lt;file_name&amp;gt;&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  2) Running as a Container
&lt;/h3&gt;

&lt;p&gt;Since I also wanted to deploy the app, I tried running it as a container, so I created a &lt;code&gt;Dockerfile&lt;/code&gt;. You can also follow the &lt;a href="https://docs.streamlit.io/deploy/tutorials/docker" rel="noopener noreferrer"&gt;instruction guide here&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.9-slim
COPY . /app
WORKDIR /app
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 8501
HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health
ENTRYPOINT ["streamlit", "run", "dashboard.py", "--server.port=8501", "--server.address=0.0.0.0"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Build the image and run it (&lt;code&gt;--env-file&lt;/code&gt; needs to be specified so that your app knows where to load the environment variables from).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build image
docker build -t streamlit_app . 

# Run container app
docker run --env-file .env -p 8501:8501 streamlit_app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3) Deploying to AWS
&lt;/h3&gt;

&lt;p&gt;Since I wanted to deploy to AWS as a container app, some easy options were likely Elastic Beanstalk and App Runner. However, I wanted to practise working with Amazon ECS so I went with it instead.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/getting-started.html" rel="noopener noreferrer"&gt;AWS Guides on ECS&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To deploy the app to ECS, you need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;set up a repository in Amazon ECR and then push the image to the repository;&lt;/li&gt;
&lt;li&gt;set up a task definition in ECS and create a cluster, then create a service in the cluster using the task definition.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are some configurations that need to be done to use ECS, but I had done something similar before (albeit unsuccessfully), so these permissions and roles were already set up. I still included them in the scripts that I used.&lt;/p&gt;

&lt;p&gt;Run the scripts using &lt;code&gt;sh &amp;lt;script_name&amp;gt;&lt;/code&gt; in the correct order (all scripts used can be found &lt;a href="https://github.com/khairahscorner/weather-dashboard/tree/main/deployment" rel="noopener noreferrer"&gt;here&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Everything worked successfully and I could access my running app via &lt;code&gt;&amp;lt;ip-address&amp;gt;:8501&lt;/code&gt; (that was the port I mapped in the task definition).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1736707841784%2F8c140a68-43d8-4fb0-b3fe-75af69bf3c4b.gif%3Fauto%3Dformat%2Ccompress%26gif-q%3D60%26format%3Dwebm" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fcdn.hashnode.com%2Fres%2Fhashnode%2Fimage%2Fupload%2Fv1736707841784%2F8c140a68-43d8-4fb0-b3fe-75af69bf3c4b.gif%3Fauto%3Dformat%2Ccompress%26gif-q%3D60%26format%3Dwebm" alt="Project Results Here" width="600" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivc0y0j4zt38z0yfxp4o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivc0y0j4zt38z0yfxp4o.png" alt="Uploaded files to s3" width="800" height="389"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; using the IP address as-is causes an issue when you update the service (i.e. force a new deployment so that the task uses the latest image after your Docker image has been rebuilt).&lt;/p&gt;

&lt;p&gt;The task is replaced, hence the IP address changes, so the recommended approach is to configure and use a Load Balancer. It routes traffic to the right running task, and its DNS name remains static, so there is no need to worry about the changing IP addresses of the tasks.&lt;/p&gt;
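&lt;p&gt;For reference, forcing that redeployment is a single CLI call; a sketch with placeholder cluster and service names:&lt;/p&gt;

```shell
# Start a fresh task using the latest image referenced by the
# task definition (cluster/service names are placeholders)
aws ecs update-service \
  --cluster weather-dashboard-cluster \
  --service weather-dashboard-service \
  --force-new-deployment
```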

&lt;h3&gt;
  
  
  Enhancements
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Setup an Application Load Balancer to properly route traffic from the running task without needing to look for its IP address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a GHA workflow to run the scripts on push to the repo (i.e. rebuild the image, push it, and update the service).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I also wanted to add a feature to retrieve the five most recent records added to S3 for a specified city, but my focus was less on the app development and more on successfully working through the full end-to-end process, i.e. development to deployment.&lt;/p&gt;




&lt;p&gt;Overall, I’m quite happy with my additions and I got to learn more about deploying containerised apps to ECS.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next Up:&lt;/strong&gt; &lt;a href="https://www.youtube.com/watch?v=09WfkKc0x_Q&amp;amp;" rel="noopener noreferrer"&gt;Project 2 (Game Day Notification Solution)&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>python</category>
      <category>streamlit</category>
      <category>ecs</category>
    </item>
    <item>
      <title>State Management in React Using MobX State Tree (MST)</title>
      <dc:creator>Airat Yusuff</dc:creator>
      <pubDate>Sat, 28 Sep 2024 11:00:00 +0000</pubDate>
      <link>https://dev.to/khairahscorner/state-management-in-react-using-mobx-state-tree-mst-4mmc</link>
      <guid>https://dev.to/khairahscorner/state-management-in-react-using-mobx-state-tree-mst-4mmc</guid>
      <description>&lt;p&gt;If you are a React developer, you most likely work with different state management libraries like &lt;a href="https://react-redux.js.org/introduction/why-use-react-redux" rel="noopener noreferrer"&gt;React Redux&lt;/a&gt;, &lt;a href="https://redux-toolkit.js.org/introduction/why-rtk-is-redux-today" rel="noopener noreferrer"&gt;Redux Toolkit&lt;/a&gt;, &lt;a href="https://tanstack.com/query/latest/docs/framework/react/overview" rel="noopener noreferrer"&gt;React Query&lt;/a&gt;, or even &lt;a href="https://github.com/pmndrs/zustand" rel="noopener noreferrer"&gt;Zustand&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;But, have you ever tried using MobX or MobX-State-Tree (MST)?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this post, I'll share a brief overview of working with MST and why you should consider exploring the library for state management in your next project (+ why you probably may not!)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyd65kjywx1r5e1zpmmp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foyd65kjywx1r5e1zpmmp.png" alt="mobx state tree github repo summary" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Brief Overview: What even is &lt;strong&gt;MobX-State-Tree?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://mobx-state-tree.js.org/intro/welcome" rel="noopener noreferrer"&gt;&lt;strong&gt;MobX State Tree&lt;/strong&gt;&lt;/a&gt; &lt;strong&gt;(&lt;/strong&gt;MST for short) is a state management library that self-describes as offering &lt;strong&gt;&lt;em&gt;"fully-featured reactive state management without the boilerplate"&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Some of the features it offers out of the box include centralised stores for your data, the ability to mutate data safely while also being able to trace those updates, and runtime and static type checking.&lt;/p&gt;
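&lt;p&gt;As a quick illustration (a minimal sketch of my own, not taken from the docs), a typed model with a traceable, safe mutation can look like this:&lt;/p&gt;

```javascript
import { types, onPatch } from "mobx-state-tree"

// A model declares typed props; data can only be mutated via actions
const Todo = types
  .model("Todo", {
    title: types.string,
    done: false, // shorthand for types.optional(types.boolean, false)
  })
  .actions(function (self) {
    return {
      toggle() {
        self.done = !self.done
      },
    }
  })

const todo = Todo.create({ title: "Learn MST" })

// Every mutation is observable as a JSON patch
onPatch(todo, function (patch) {
  console.log(patch) // e.g. { op: "replace", path: "/done", value: true }
})

todo.toggle()
```

Assigning to &lt;code&gt;todo.done&lt;/code&gt; directly, outside an action, throws by default, which is the mutability/safety balance described below.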

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Useful Read:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://mobx-state-tree.js.org/intro/philosophy" rel="noopener noreferrer"&gt;MST Philosophy&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why Try It?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;It's a lot less boilerplate than working with Redux!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If there was anything I really liked about using MST, it's the compactness and manageability. Of course, you can have large stores (trees) that are also part of a root store, but I found managing stores a lot easier with MST.&lt;/p&gt;

&lt;p&gt;It's a lot less brainwork when you don't have to worry about defining constants as action types, or reducers that manage the state spread across different files, not to mention managing them across multiple folders.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It also gives you a balance between mutability and immutability.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Immutability is one of the ways Redux safely handles data management, but that means having to go through all of the above to update the 'immutable' data. MST, on the other hand, strikes a balance between the two: it's easier to update data within the protection measures the library offers.&lt;/p&gt;

&lt;p&gt;Its official documentation also includes a &lt;a href="https://mobx-state-tree.js.org/intro/getting-started" rel="noopener noreferrer"&gt;pretty decent tutorial&lt;/a&gt; if you are new to using the library and would like to gain a basic understanding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why You Probably May Not Like It
&lt;/h2&gt;

&lt;p&gt;Diving into MST can be properly confusing at first. I get it, I've been there before.&lt;/p&gt;

&lt;p&gt;When you advance from the basic usage described in the tutorial and need it for a complex project, or want to utilise its advanced concepts, &lt;a href="https://mobx-state-tree.js.org/overview/api" rel="noopener noreferrer"&gt;the official documentation&lt;/a&gt; is not the best for building understanding, in my opinion. I found myself struggling a lot, and had to keep inferring my understanding from existing stores that had been defined in the project I was working on.&lt;/p&gt;

&lt;p&gt;While it does seem to be a popular library, there's barely any &lt;a href="https://stackoverflow.com/questions/tagged/mobx-state-tree" rel="noopener noreferrer"&gt;engagement on Stack Overflow&lt;/a&gt;, so you might find yourself stuck and trying to figure out error messages yourself.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Not to be confused with MobX itself&lt;/strong&gt;, which has slightly more questions and engagement.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It is also a very opinionated library with strict rules on how data should be structured and how trees interact with one another. While you may be familiar with many of the basic concepts if you've worked with state management libraries before (e.g. views, actions), they have more complexities that make them work a bit differently (and can be frustrating at first).&lt;/p&gt;

&lt;p&gt;It also has several new concepts that you will need to learn to maximise its "fully-featured" benefits (&lt;strong&gt;&lt;em&gt;and&lt;/em&gt;&lt;/strong&gt; you'll have to be the better judge for what works or does not work for your project).&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;I cannot say I fully understand how the library works yet, but so far, I have learned more about its usage and how to utilise it in projects. Working with an already established store design can also make it easier for you to figure out how to use its regular features.&lt;/p&gt;

&lt;p&gt;In addition, despite being opinionated, the library still gives developers room to shape their store management in different ways based on preference. It also offers more features that can make state management easier, &lt;strong&gt;and&lt;/strong&gt; it reduces boilerplate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overall, one to consider incorporating into your next project.&lt;/strong&gt;&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;P.S&lt;/strong&gt; Some Bookmarks to help you navigate through the official documentation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://mobx-state-tree.js.org/tips/faq" rel="noopener noreferrer"&gt;FAQs&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://mobx-state-tree.js.org/concepts/using-react" rel="noopener noreferrer"&gt;Using MST with React&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://mobx-state-tree.js.org/overview/types" rel="noopener noreferrer"&gt;Learn about MST types&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

</description>
      <category>mobxstatetree</category>
      <category>statemanagement</category>
      <category>react</category>
      <category>redux</category>
    </item>
    <item>
      <title>Automate Deployment to AWS App Runner Using Terraform</title>
      <dc:creator>Airat Yusuff</dc:creator>
      <pubDate>Sat, 28 Sep 2024 03:12:36 +0000</pubDate>
      <link>https://dev.to/aws-builders/automate-deployment-to-aws-app-runner-using-terraform-4oo9</link>
      <guid>https://dev.to/aws-builders/automate-deployment-to-aws-app-runner-using-terraform-4oo9</guid>
      <description>&lt;p&gt;&lt;strong&gt;Ever run into a situation&lt;/strong&gt; where you manually deployed your API to AWS via the management console (one with lots of enviroment variables too!), only to realise you forgot to change the region of deployment?&lt;/p&gt;

&lt;p&gt;It may not be the exact scenario, but forgetting to switch regions for deployments can be a common occurrence; in my case, I had previously deployed to &lt;code&gt;us-east-1&lt;/code&gt; but now needed the deployment in &lt;code&gt;eu-west-2&lt;/code&gt;. I searched for any console feature that would let me “copy” the service into the new region (or update the current region), but that did not seem to be available, so I was left with repeating the entire process. &lt;strong&gt;Using Infrastructure-as-Code to automate provisioning of the AWS resources&lt;/strong&gt; would have saved me from this situation, hence this blogpost.&lt;/p&gt;

&lt;p&gt;In this post, I’ll be sharing the &lt;strong&gt;Terraform (and shell) scripts&lt;/strong&gt; I wrote to &lt;strong&gt;automate the deployment of a Node.js (Express) API&lt;/strong&gt; with environment variables, including a database deployed on &lt;strong&gt;&lt;a href="https://khairahscorner.hashnode.dev/migrate-your-local-mysql-database-to-aws-rds" rel="noopener noreferrer"&gt;AWS RDS with MySQL.&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A Node.js/Express API (e.g. &lt;strong&gt;&lt;a href="https://github.com/khairahscorner/SNEducate-api" rel="noopener noreferrer"&gt;one of my repos&lt;/a&gt;&lt;/strong&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Terraform installed on your local machine    &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Setup
&lt;/h3&gt;

&lt;p&gt;This was my directory structure:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1etic4urg4fv8ikstsxa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1etic4urg4fv8ikstsxa.png" alt="Nodejs API directory structure for IaC"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As you will see below, I created shell scripts to run the necessary Terraform commands due to the number of environment variables I needed to add to the API service.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;variables.tf&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;variable "api_name" {&lt;/span&gt;
  &lt;span class="s"&gt;type = string&lt;/span&gt;
  &lt;span class="s"&gt;default =&lt;/span&gt; &lt;span class="c1"&gt;# name you want to give to your api service&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;

&lt;span class="s"&gt;variable "github_repo" {&lt;/span&gt;
  &lt;span class="s"&gt;type = string&lt;/span&gt;
  &lt;span class="s"&gt;default =&lt;/span&gt; &lt;span class="c1"&gt;# your GitHub repo url&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;

&lt;span class="s"&gt;variable "apprunner_connection_arn" {&lt;/span&gt;
  &lt;span class="s"&gt;type = string&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;span class="s"&gt;variable "vpc_connector_arn" {&lt;/span&gt;
    &lt;span class="s"&gt;type = string&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;

&lt;span class="s"&gt;variable "env_prod_db_username" {&lt;/span&gt;
  &lt;span class="s"&gt;type = string&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;span class="s"&gt;variable "env_prod_db_password" {&lt;/span&gt;
  &lt;span class="s"&gt;type = string&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;span class="s"&gt;variable "env_prod_fe_url" {&lt;/span&gt;
  &lt;span class="s"&gt;type = string&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;span class="c1"&gt;# other variables&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;provider.tf&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;terraform {&lt;/span&gt;
  &lt;span class="s"&gt;required_providers {&lt;/span&gt;
    &lt;span class="s"&gt;aws = {&lt;/span&gt;
      &lt;span class="s"&gt;source = "hashicorp/aws"&lt;/span&gt;
      &lt;span class="s"&gt;version = "5.67.0"&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;

&lt;span class="s"&gt;provider "aws" {&lt;/span&gt;
  &lt;span class="s"&gt;region = "eu-west-2"&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;output.tf&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="s"&gt;output "api_service_url" {&lt;/span&gt;
  &lt;span class="s"&gt;value = aws_apprunner_service.express-api-service.service_url&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;

&lt;span class="s"&gt;output "api_service_arn" {&lt;/span&gt;
  &lt;span class="s"&gt;value = aws_apprunner_service.express-api-service.arn&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;

&lt;span class="s"&gt;output "api_service_status" {&lt;/span&gt;
  &lt;span class="s"&gt;value = aws_apprunner_service.express-api-service.status&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;code&gt;main.tf&lt;/code&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;1) You need to have a connection to your repository provider (i.e. GitHub or Bitbucket) to give App Runner access to the repo; this can be done manually.&lt;/p&gt;

&lt;p&gt;The ARN for this connection is what you need for &lt;code&gt;var.apprunner_connection_arn&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jogigsyxkq2qg22r8cq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3jogigsyxkq2qg22r8cq.png" alt="AWS management console showing the App service connection needed"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2) You also need to create a VPC connector&lt;/strong&gt;; this can also be provisioned with Terraform. However, I had an existing VPC connector for the region, so the provisioning is not included in my script.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;You can use the &lt;a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/apprunner_vpc_connector" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; to set it up.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;
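&lt;p&gt;For completeness, a minimal sketch of what provisioning the connector could look like (the name, subnet IDs and security group ID below are placeholders):&lt;/p&gt;

```hcl
# Hypothetical VPC connector - replace the name, subnets and
# security groups with values from your own VPC
resource "aws_apprunner_vpc_connector" "connector" {
  vpc_connector_name = "api-vpc-connector"
  subnets            = ["subnet-0123abcd", "subnet-4567efgh"]
  security_groups    = ["sg-0123abcd"]
}
```

Its ARN would then be referenced instead of &lt;code&gt;var.vpc_connector_arn&lt;/code&gt; in the script below.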

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;
&lt;span class="s"&gt;resource "aws_apprunner_service" "express-api-service" {&lt;/span&gt;
  &lt;span class="s"&gt;service_name = var.api_name&lt;/span&gt;

  &lt;span class="s"&gt;source_configuration {&lt;/span&gt;
    &lt;span class="s"&gt;auto_deployments_enabled = &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="s"&gt;authentication_configuration {&lt;/span&gt;
      &lt;span class="s"&gt;connection_arn = var.apprunner_connection_arn&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;

    &lt;span class="s"&gt;code_repository {&lt;/span&gt;
      &lt;span class="s"&gt;repository_url = var.github_repo&lt;/span&gt;
      &lt;span class="s"&gt;source_code_version {&lt;/span&gt;
        &lt;span class="s"&gt;type  = "BRANCH"&lt;/span&gt;
        &lt;span class="s"&gt;value = "master"&lt;/span&gt; &lt;span class="c1"&gt;# or whichever branch you are deploying&lt;/span&gt;
      &lt;span class="err"&gt;}&lt;/span&gt;
      &lt;span class="s"&gt;code_configuration {&lt;/span&gt;
        &lt;span class="s"&gt;configuration_source = "API"&lt;/span&gt;
        &lt;span class="s"&gt;code_configuration_values {&lt;/span&gt;
          &lt;span class="s"&gt;build_command = "npm install"&lt;/span&gt;
          &lt;span class="s"&gt;port          = "8000"&lt;/span&gt; &lt;span class="c1"&gt;# or the port your api runs on&lt;/span&gt;
          &lt;span class="s"&gt;runtime       = "NODEJS_16"&lt;/span&gt;
          &lt;span class="s"&gt;start_command = "npm run start"&lt;/span&gt;
          &lt;span class="s"&gt;runtime_environment_variables = {&lt;/span&gt;
            &lt;span class="s"&gt;PROD_DB_PORT          = "3306"&lt;/span&gt;
            &lt;span class="s"&gt;NODE_ENV              = "production"&lt;/span&gt;
            &lt;span class="s"&gt;PROD_DB_PASSWORD      = var.env_prod_db_password&lt;/span&gt;
            &lt;span class="s"&gt;PROD_DB_USERNAME      = var.env_prod_db_username&lt;/span&gt;
            &lt;span class="s"&gt;PROD_FRONTEND_URL     = var.env_prod_fe_url&lt;/span&gt;
            &lt;span class="s"&gt;# your other env variables&lt;/span&gt;
          &lt;span class="s"&gt;}&lt;/span&gt;
        &lt;span class="s"&gt;}&lt;/span&gt;
      &lt;span class="s"&gt;}&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;

  &lt;span class="s"&gt;network_configuration {&lt;/span&gt;
    &lt;span class="s"&gt;ingress_configuration {&lt;/span&gt;
      &lt;span class="s"&gt;is_publicly_accessible = &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;
    &lt;span class="s"&gt;egress_configuration {&lt;/span&gt;
      &lt;span class="s"&gt;egress_type       = "VPC"&lt;/span&gt;
      &lt;span class="s"&gt;vpc_connector_arn = var.vpc_connector_arn&lt;/span&gt;
    &lt;span class="s"&gt;}&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;

  &lt;span class="s"&gt;instance_configuration {&lt;/span&gt;
    &lt;span class="s"&gt;cpu    = "2048"&lt;/span&gt;  &lt;span class="c1"&gt;#2vCPU&lt;/span&gt;
    &lt;span class="s"&gt;memory = "4096"&lt;/span&gt;  &lt;span class="c1"&gt;#4GB&lt;/span&gt;
  &lt;span class="err"&gt;}&lt;/span&gt;

  &lt;span class="s"&gt;health_check_configuration {&lt;/span&gt;
    &lt;span class="s"&gt;interval = &lt;/span&gt;&lt;span class="m"&gt;10&lt;/span&gt;
    &lt;span class="s"&gt;timeout = &lt;/span&gt;&lt;span class="m"&gt;5&lt;/span&gt;
  &lt;span class="err"&gt;}&lt;/span&gt;

  &lt;span class="s"&gt;tags = {&lt;/span&gt;
    &lt;span class="s"&gt;DEPLOYED = "api-via-terraform"&lt;/span&gt;
  &lt;span class="s"&gt;}&lt;/span&gt;
&lt;span class="err"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;And that’s it!&lt;/strong&gt; Run the necessary &lt;code&gt;terraform&lt;/code&gt; commands with the scripts provided below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;terraform init&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;sh plan.sh&lt;/code&gt; for &lt;code&gt;terraform plan&lt;/code&gt; and review if necessary&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;sh deploy.sh&lt;/code&gt; for &lt;code&gt;terraform apply&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Before running these scripts, ensure to have the &lt;code&gt;.env&lt;/code&gt; file with all the necessary environment variables.&lt;/p&gt;
&lt;/blockquote&gt;
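&lt;p&gt;Both scripts below start with &lt;code&gt;export $(grep -v '^#' .env | xargs)&lt;/code&gt;, which drops comment lines from &lt;code&gt;.env&lt;/code&gt; and exports the rest as environment variables. A self-contained illustration (the sample file and values here are made up):&lt;/p&gt;

```shell
#!/bin/bash
# Create a throwaway .env to illustrate (values are made up)
printf 'PROD_DB_USERNAME=admin\nPROD_DB_PASSWORD=s3cret\n# comment lines are skipped\n' | tee /tmp/demo.env

# Same pattern as in plan.sh/deploy.sh below
export $(grep -v '^#' /tmp/demo.env | xargs)

echo "$PROD_DB_USERNAME"
```

Note this simple pattern assumes no values contain spaces, which held for my variables.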

&lt;p&gt;&lt;code&gt;plan.sh&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s1"&gt;'^#'&lt;/span&gt; .env | xargs&lt;span class="si"&gt;)&lt;/span&gt;

terraform plan &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"apprunner_connection_arn=&lt;/span&gt;&lt;span class="nv"&gt;$APPRUNNER_CONNECTION_ARN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"vpc_connector_arn=&lt;/span&gt;&lt;span class="nv"&gt;$VPC_CONNECTOR_ARN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"env_prod_db_username=&lt;/span&gt;&lt;span class="nv"&gt;$PROD_DB_USERNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"env_prod_db_password=&lt;/span&gt;&lt;span class="nv"&gt;$PROD_DB_PASSWORD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"env_prod_fe_url=&lt;/span&gt;&lt;span class="nv"&gt;$PROD_FRONTEND_URL&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="c"&gt;# any other environment variables your API needs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;deploy.sh&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s1"&gt;'^#'&lt;/span&gt; .env | xargs&lt;span class="si"&gt;)&lt;/span&gt;

terraform apply &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"apprunner_connection_arn=&lt;/span&gt;&lt;span class="nv"&gt;$APPRUNNER_CONNECTION_ARN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"vpc_connector_arn=&lt;/span&gt;&lt;span class="nv"&gt;$VPC_CONNECTOR_ARN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"env_prod_db_username=&lt;/span&gt;&lt;span class="nv"&gt;$PROD_DB_USERNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"env_prod_db_password=&lt;/span&gt;&lt;span class="nv"&gt;$PROD_DB_PASSWORD&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-var&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"env_prod_fe_url=&lt;/span&gt;&lt;span class="nv"&gt;$PROD_FRONTEND_URL&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="c"&gt;# any other environment variables your API needs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After successful deployment, you should see a success message in the terminal with the following outputs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;api_service_url = “«value»“
api_service_arn = “«value»”
api_service_status = “«value»“
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  [Optional] Service Update
&lt;/h4&gt;

&lt;p&gt;For updates to the service, you can use &lt;code&gt;sh update.sh&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="nb"&gt;export&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="s1"&gt;'^#'&lt;/span&gt; ../.env | xargs&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nv"&gt;API_SERVICE_ARN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;terraform output &lt;span class="nt"&gt;-raw&lt;/span&gt; api_service_arn&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Updates the App Runner service with the configuration in the input.json file&lt;/span&gt;
aws apprunner update-service &lt;span class="nt"&gt;--service-arn&lt;/span&gt; &lt;span class="nv"&gt;$API_SERVICE_ARN&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
    &lt;span class="nt"&gt;--cli-input-json&lt;/span&gt; file://input.json

&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"App Runner service has been updated"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, the &lt;code&gt;input.json&lt;/code&gt; file used by the script needs to contain the JSON version of the entire service configuration (which seems like a pain tbh).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Is there a way to use a script similar to the above to update the service with the configuration updates only?🤔&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Possible Issues You Might Encounter
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;If you have existing app runner connections and VPC connectors, &lt;strong&gt;ensure they are in the new region you are deploying to&lt;/strong&gt;; these resources need to be located in the same region (your terraform script will throw an error otherwise).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After successful deployment, I ran into a &lt;strong&gt;database connection timeout error&lt;/strong&gt;. However, this was related to the error shared in my &lt;a href="https://khairahscorner.hashnode.dev/migrate-your-local-mysql-database-to-aws-rds#:~:text=However,%20if%20you%20run%20into%20errors" rel="noopener noreferrer"&gt;previous post&lt;/a&gt; and I resolved it by allowing inbound access to the database via the security group of the app runner service.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;strong&gt;Thanks for reading,&lt;/strong&gt; and I hope you find it useful!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;P.S&lt;/strong&gt; If you have any suggestions on updating the service without rewriting the entire setup configurations, please let me know in the comments.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>apprunner</category>
      <category>terraform</category>
      <category>infrastructureascode</category>
      <category>aws</category>
    </item>
    <item>
      <title>Migrate Your Local MySQL Database to AWS RDS</title>
      <dc:creator>Airat Yusuff</dc:creator>
      <pubDate>Thu, 12 Sep 2024 11:00:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/migrate-your-local-mysql-database-to-aws-rds-3609</link>
      <guid>https://dev.to/aws-builders/migrate-your-local-mysql-database-to-aws-rds-3609</guid>
      <description>&lt;p&gt;In this post, I'll share detailed steps to migrate data in your local MySQL database to newly created instances on AWS RDS with MySQL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Background
&lt;/h3&gt;

&lt;p&gt;I worked on a &lt;a href="https://khairahscorner.hashnode.dev/deploying-nodejs-apps-on-aws-elastic-beanstalk-or-app-runner" rel="noopener noreferrer"&gt;full-stack project in 2023&lt;/a&gt; where I manually handled the production deployments of my database and backend to AWS. Earlier this year, I shut down all the infrastructure because I was accumulating too much in monthly costs from the database (no thanks to my overprovisioning).&lt;/p&gt;

&lt;p&gt;Now, fast forward to this month and I still had lots of AWS credits (&lt;a href="https://khairahscorner.hashnode.dev/5-months-in-the-cloud-my-aws-journey#heading-2-aws-community-builders-program" rel="noopener noreferrer"&gt;perks of being a Community Builder&lt;/a&gt;) due to expire by the end of the year. I tried to get my project back up only to realise it would not be as easy as I thought.&lt;/p&gt;

&lt;p&gt;First, I had not properly created a snapshot that I could use to restore the production data and kept getting access errors. I was also reminded of how excruciatingly manual the entire deployment had been, and how I did not document any of the steps I took (neither did I remember them).&lt;/p&gt;

&lt;p&gt;That was a lesson learnt, so this time I am documenting the entire process, in line with one of the sayings from an Udemy course I never finished:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;'You have to know how to perform a task manually before proceeding to automate it.'&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Hence, I'll first share the steps I took to carry out the migration manually; in a follow-up post, I'll write scripts to provision the infrastructure with Terraform and dump the local data into the remote database after a successful launch.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;This is also part of my self-assigned ongoing project to learn more about CI/CD by building a pipeline to automate the entire deployment of the project.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  A) Manual Steps
&lt;/h3&gt;

&lt;h4&gt;
  
  
  1. Create a new DB instance on Amazon RDS
&lt;/h4&gt;

&lt;p&gt;This time, I was more intentional and practical with the configuration. Although it was a 'production deployment', I still opted for what would likely be the cheapest running costs, since it's not an actual live product (this also influenced some security options I chose not to add).&lt;/p&gt;

&lt;p&gt;I have summarised the options I chose in the '&lt;em&gt;Create Database&lt;/em&gt;' wizard below:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

AWS Region: eu-west-2
Database creation method: Standard Create
Engine type/edition/version: MySQL/MySQL Community/MySQL 8.0.35
Use case template: Free Tier
Credential settings: define these as you'd prefer
DB instance class: db.t3.micro
Storage: gp2, 20GB, auto-scaling enabled up to 100GB
Connectivity: 
    - define these for your specific use-case; I chose not to use an EC2
    - Public access: Yes, because I wanted to connect to the database locally via MySQL Workbench
Database authentication: Password auth
Additional configuration:
    - I created one database from here; but you can also leave it blank and create one when you get access via Workbench
Others: 
    - use default options or modify for your use-case
Deletion protection:
    - enabled (to dissuade myself from deleting easily like the last time)


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Estimated monthly costs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwakoalw0n53848a2sls.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwakoalw0n53848a2sls.png" alt="AWS RDS cost estimates"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  2. Connect to the RDS instance locally
&lt;/h4&gt;

&lt;p&gt;After the instance has successfully launched, use the mysql client to connect to it. This assumes you have your MySQL server installed and running.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://dev.mysql.com/doc/mysql-getting-started/en/" rel="noopener noreferrer"&gt;Getting started with MySQL&lt;/a&gt;&lt;br&gt;
&lt;a href="https://dev.mysql.com/doc/refman/8.4/en/mysql.html" rel="noopener noreferrer"&gt;MySQL client&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You will also need the instance endpoint, and the username and password you defined in the credentials settings, to run this command:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

mysql -h sample_endpoint.rds.amazonaws.com -u username_sample -p



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Your instance should have been created with the correct inbound and outbound rules for the selected VPC security groups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;However,&lt;/strong&gt; if you run into errors connecting, confirm that your IP address is included in the allowed sources for the inbound rules. I ran into this myself: I created the instance while in one location (allowing that location's IP address), and when I later tried to connect from another location with a different IP address, the connection timed out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;[Optional] Create your database&lt;/strong&gt;&lt;br&gt;
If you skipped the &lt;code&gt;Additional configuration&lt;/code&gt; step during the instance creation, you can create one at this step:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

CREATE DATABASE sample_db;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
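&lt;p&gt;As a sketch of how this step could be scripted later (the endpoint, username, and database name below are the placeholder values used in this post, not real ones), the same statement can be run non-interactively with the client's &lt;code&gt;-e&lt;/code&gt; flag. The command is only printed here, since executing it needs a live RDS instance:&lt;/p&gt;

```shell
# Hedged sketch: compose the CREATE DATABASE step as a one-liner.
# HOST and USER are the placeholder values from this post.
HOST="sample_endpoint.rds.amazonaws.com"
USER="username_sample"
SQL="CREATE DATABASE IF NOT EXISTS sample_db;"
# -e runs a single statement non-interactively (useful when automating later);
# the command is printed instead of executed because it needs a live instance.
CMD="mysql -h $HOST -u $USER -p -e \"$SQL\""
echo "$CMD"
```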
&lt;h4&gt;
  
  
  3. Import data from your local database
&lt;/h4&gt;

&lt;p&gt;To do this, you first need to use the &lt;code&gt;mysqldump&lt;/code&gt; command to export the schema and data in your local db to a &lt;code&gt;.sql&lt;/code&gt; dump file:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

mysqldump -u root -p local_db_name &amp;gt; sample_dump.sql



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note: If you run into errors while trying to export, check out my &lt;a href="https://stackoverflow.com/questions/77065652/mysqldump-command-returns-error-unknown-variable-local-infile-1" rel="noopener noreferrer"&gt;question on Stack Overflow&lt;/a&gt; and also &lt;a href="https://stackoverflow.com/a/77071802/11233049" rel="noopener noreferrer"&gt;an answer&lt;/a&gt; that could resolve some possible issues.&lt;/p&gt;

&lt;p&gt;Afterwards, use the &lt;code&gt;mysql&lt;/code&gt; command to import the dump file to your RDS instance:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

mysql -h sample_endpoint.rds.amazonaws.com -u username_sample -p sample_db &amp;lt; sample_dump.sql

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
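&lt;p&gt;Since the eventual goal is to automate this, here is a hedged sketch of both steps composed as one script. All names (database, endpoint, username, dump file) are the placeholders from this post; the commands are printed rather than executed, because they need a live local MySQL server and a reachable RDS instance:&lt;/p&gt;

```shell
# Placeholder values from this post -- substitute your own.
LOCAL_DB="local_db_name"
DUMP_FILE="sample_dump.sql"
RDS_HOST="sample_endpoint.rds.amazonaws.com"
RDS_USER="username_sample"
TARGET_DB="sample_db"

# mysqldump's --result-file writes the dump without shell redirection.
DUMP_CMD="mysqldump -u root -p --result-file=$DUMP_FILE $LOCAL_DB"
# The client's 'source' command replays the dump file against the RDS database.
IMPORT_CMD="mysql -h $RDS_HOST -u $RDS_USER -p $TARGET_DB -e \"source $DUMP_FILE\""

# Printed, not run: both commands need live servers.
printf '%s\n' "$DUMP_CMD" "$IMPORT_CMD"
```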
&lt;h4&gt;
  
  
  4. [Optional] Connect to your RDS instance via MySQL Workbench
&lt;/h4&gt;


&lt;p&gt;Confirm that you are able to connect to the instance without issues, and check the schema and data inside the database to confirm that they were also imported correctly.&lt;/p&gt;

&lt;h4&gt;
  
  
  Unable to connect?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Ensure you have correctly configured the public access settings, or&lt;/li&gt;
&lt;li&gt;Use an EC2 instance to connect to the database securely and run your SQL commands through it. Ensure the EC2 instance is inside the same VPC as the RDS instance.&lt;/li&gt;
&lt;/ul&gt;
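&lt;p&gt;For the EC2 option, one common pattern is an SSH tunnel that forwards the MySQL port through the instance, so you can keep using your local mysql client or Workbench. This is a hedged sketch with hypothetical key, host, and endpoint names; the command is printed, not run, since it needs real hosts:&lt;/p&gt;

```shell
# Hypothetical names -- substitute your own key file, EC2 host, and RDS endpoint.
KEY="mykey.pem"
EC2_HOST="ec2-user@ec2-public-dns.compute.amazonaws.com"
RDS_HOST="sample_endpoint.rds.amazonaws.com"
# -L forwards local port 3306 to the RDS endpoint via the EC2 instance;
# once the tunnel is up, connect with: mysql -h 127.0.0.1 -u username_sample -p
TUNNEL_CMD="ssh -i $KEY -L 3306:$RDS_HOST:3306 $EC2_HOST"
echo "$TUNNEL_CMD"
```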




&lt;p&gt;Next up will be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;deploying to App Runner with the required database credentials from this post (and necessary infrastructure access),&lt;/li&gt;
&lt;li&gt;scripting with Terraform instead, and&lt;/li&gt;
&lt;li&gt;creating a CI/CD pipeline to automate future backend and frontend updates.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>mysql</category>
      <category>rds</category>
    </item>
    <item>
      <title>AWS CDK For Noobs: Deploying NextJS Apps</title>
      <dc:creator>Airat Yusuff</dc:creator>
      <pubDate>Wed, 24 Jan 2024 08:41:22 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-cdk-for-noobs-deploying-nextjs-apps-4oii</link>
      <guid>https://dev.to/aws-builders/aws-cdk-for-noobs-deploying-nextjs-apps-4oii</guid>
<description>&lt;p&gt;As a software developer, have you ever wanted to deploy your personal projects to AWS instead of PaaS providers like Netlify or Heroku, but not wanted to set everything up manually through the AWS console or learn to write infrastructure-as-code?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Then, you are the target audience for AWS Cloud Development Kit (CDK).&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This post is part of my commitment to &lt;em&gt;&lt;a href="https://hopin.com/events/wwcode-days-of-code/registration" rel="noopener noreferrer"&gt;Women Who Code Days of Code challenge&lt;/a&gt;&lt;/em&gt;✨&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What is AWS CDK?
&lt;/h2&gt;

&lt;p&gt;It is an AWS development framework for provisioning cloud infrastructure using programming languages &lt;strong&gt;&lt;em&gt;familiar&lt;/em&gt;&lt;/strong&gt; to software developers. &lt;/p&gt;

&lt;p&gt;Simply put, it's a tool for software developers to deploy their apps onto AWS without manually setting up resources through the console, or writing declarative/imperative infrastructure-as-code.&lt;/p&gt;

&lt;p&gt;In this post, I'll share all about my first experience with CDK and how I eventually deployed the most basic NextJS app.&lt;/p&gt;

&lt;h3&gt;
  
  
  First Attempt
&lt;/h3&gt;

&lt;p&gt;I first tried it last month by starting with the free &lt;a href="https://explore.skillbuilder.aws/learn/course/1475/aws-cloud-development-kit-primer" rel="noopener noreferrer"&gt;AWS CDK Primer course&lt;/a&gt; on Skill Builder; however, the course mostly covered the theory basics and a broad overview of how CDK may be used. There was not much insight into using it for practical projects, e.g. deploying a full-stack app (which is understandable, since it was a &lt;strong&gt;&lt;em&gt;free&lt;/em&gt;&lt;/strong&gt; primer course).&lt;/p&gt;

&lt;p&gt;Then, I tried the &lt;a href="https://cloudacademy.com/learning-paths/aws-cdk-v2-7518/" rel="noopener noreferrer"&gt;learning path on Cloud Academy&lt;/a&gt;, but it quickly became disengaging since the content was focused on Python (and hence a different version of the API docs) and I preferred TypeScript. Overall, I found myself quite lost, trying to figure out the "right" thing to do according to the docs, and whether there was only one way to do it. Meanwhile, every other tutorial (even the basic ones) seemed to already know its way around the necessary construct libraries.&lt;/p&gt;

&lt;p&gt;I eventually tried &lt;a href="https://www.youtube.com/watch?v=YL2feD9ws9k" rel="noopener noreferrer"&gt;this tutorial&lt;/a&gt; to get something done (at least) and was able to deploy a basic NextJS app.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to set up a CDK app for deploying a NextJS app to AWS Amplify
&lt;/h2&gt;

&lt;p&gt;This assumes you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;installed the AWS CDK toolkit,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;installed TypeScript (somehow, this was my first time doing this!),&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;configured your AWS credentials, and&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;generated a GitHub PAT and saved to AWS secrets manager.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;If necessary, watch the linked tutorial for all preliminary setup.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  1) Create an empty folder (&lt;em&gt;e.g cdk-next-app&lt;/em&gt;) and set up a CDK app
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;code&gt;cdk init app --language typescript&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The &lt;em&gt;'app'&lt;/em&gt; template sets up a basic CDK app in the specified language (TypeScript here).&lt;/p&gt;

&lt;h3&gt;
  
  
  2) Set up your preferred environment in &lt;code&gt;bin/cdk-next-app.ts&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;This is optional, but I personally think it's best to specify the environment explicitly and avoid the 'environment-agnostic' situation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY

COPY
import * as cdk from 'aws-cdk-lib';
import { CdkNextAppStack } from '../lib/cdk-next-app-stack';

const app = new cdk.App();
new CdkNextAppStack(app, 'CdkNextAppStack', {
  env: { account: process.env.CDK_DEFAULT_ACCOUNT, region: process.env.CDK_DEFAULT_REGION },
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This creates your CDK app and within it, a stack which will have all of the constructs needed to provision infrastructure for the Next app and deploy it to AWS Amplify.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Check out the &lt;a href="https://explore.skillbuilder.aws/learn/course/1475/aws-cloud-development-kit-primer" rel="noopener noreferrer"&gt;Primer course&lt;/a&gt; for AWS CDK basics.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  3) Set up the stack in &lt;code&gt;lib/cdk-next-app-stack.ts&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; This uses &lt;code&gt;@aws-cdk/aws-amplify-alpha&lt;/code&gt;, the construct library with experimental Amplify features for CDK. The current stable version does not yet have L2 constructs (higher-level constructs for provisioning AWS resources), so the options were either the experimental library or setting everything up with L1 constructs (which I would have to look into👀).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Comments have been added to briefly explain the code.&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;COPY

COPY
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import {
  App,
  GitHubSourceCodeProvider,
  Platform,
  RedirectStatus
} from "@aws-cdk/aws-amplify-alpha";
import * as codebuild from 'aws-cdk-lib/aws-codebuild';

export class CdkNextAppStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    //create a new Amplify app
    const amplifyApp = new App(this, 'NextApp', {
      //set a code source provider for the app to be deployed e.g GitHub
      sourceCodeProvider: new GitHubSourceCodeProvider({
        //requires all three properties
        owner: 'khairahscorner',
        repository: 'sample-next-app',
        oauthToken: cdk.SecretValue.secretsManager('github-token-cdk'),
      }),
      autoBranchDeletion: true,
      platform: Platform.WEB_COMPUTE, //required for SSR apps 
      customRules: [{ //used to avoid page errors by redirecting them
        source: '/&amp;lt;*&amp;gt;',
        target: '/index.html',
        status: RedirectStatus.NOT_FOUND_REWRITE
      }],
      buildSpec: codebuild.BuildSpec.fromObjectToYaml({
          // Alternatively add a `amplify.yml` to the repo
          version: '1.0',
          frontend: {
            phases: {
              preBuild: {
                commands: ['npm ci'],
              },
              build: {
                commands: ['npm run build'],
              },
            },
            artifacts: {
              baseDirectory: '.next',
              files: ['**/*'],
            },
            // save time on npm reinstalling??
            cache: {
              paths: ['node_modules/**/*'],
            }
          }
      }),
    })

    // branch to deploy
    let branchName = 'main';
    amplifyApp.addBranch(branchName)

    //outputs the url to both your console and Outputs in the CF stack
    new cdk.CfnOutput(this, 'AmplifyAppUrl', {
      value: `https://${branchName}.${amplifyApp.appId}.amplifyapp.com`,
      description: 'Amplify App URL',
    });
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4) (Optional) Run &lt;code&gt;cdk bootstrap&lt;/code&gt; and &lt;code&gt;cdk synth&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;cdk bootstrap&lt;/code&gt; sets up the resources required to store and manage the deployment artifacts for your CDK app. Run it if it's your first time using AWS CDK; otherwise, the existing setup on your AWS account is used.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cdk synth&lt;/code&gt; generates the AWS CloudFormation template version of your code, just to have a look through.&lt;/p&gt;

&lt;h3&gt;
  
  
  5) Deploy your app with &lt;code&gt;cdk deploy&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk09v9jzgpop60axq7jqk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk09v9jzgpop60axq7jqk.png" alt="CDK deploy terminal message" width="800" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;That's it!&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gl80cc7qu1xam4jsl6r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1gl80cc7qu1xam4jsl6r.png" alt="Next app deployed with AWS CDK" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Pitfalls to Avoid
&lt;/h2&gt;

&lt;h4&gt;
  
  
  1) Ensure you have installed and authorised the AWS Amplify GitHub App.
&lt;/h4&gt;

&lt;p&gt;I previously installed and authorised it for only one repository, so I encountered a permissions error, which I eventually fixed by authorising it for the current repository.&lt;/p&gt;

&lt;h4&gt;
  
  
  2) AWS Amplify seems to run with node v16 (with support up to v18.13).
&lt;/h4&gt;

&lt;p&gt;However, Next 14 requires a minimum of Node v18.17, hence I was getting the error below despite trying different fixes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx84tx32s2e3sqiqj7578.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx84tx32s2e3sqiqj7578.png" alt="Next14 error with Amplify" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the end, I downgraded to Next 13 and it worked, but do let me know if you know of a workaround for Next 14.&lt;/p&gt;

&lt;h2&gt;
  
  
  Up Next
&lt;/h2&gt;

&lt;p&gt;I'll be trying more sample app deployments with CDK and maybe even explore &lt;a href="https://github.com/hashicorp/terraform-cdk" rel="noopener noreferrer"&gt;CDK for Terraform&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;&lt;em&gt;Thanks for reading!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cdk</category>
      <category>nextjs</category>
      <category>deployment</category>
    </item>
    <item>
      <title>Generative AI For Noobs: Learn With AWS PartyRock!</title>
      <dc:creator>Airat Yusuff</dc:creator>
      <pubDate>Thu, 16 Nov 2023 20:44:15 +0000</pubDate>
      <link>https://dev.to/aws-builders/generative-ai-for-noobs-learn-with-aws-partyrock-33ea</link>
      <guid>https://dev.to/aws-builders/generative-ai-for-noobs-learn-with-aws-partyrock-33ea</guid>
<description>&lt;p&gt;Whether you work in tech or are relatively new to the ecosystem, Generative AI is a buzzword you have most likely come across, especially with tools like &lt;strong&gt;ChatGPT&lt;/strong&gt;. You may not yet understand what it is all about, but now you can!&lt;/p&gt;

&lt;p&gt;AWS PartyRock just launched and I personally think it's a great way for &lt;strong&gt;anyone &lt;em&gt;(literally)&lt;/em&gt;&lt;/strong&gt; to learn more about generative AI and AI-powered apps in general. In this post, I share how I used PartyRock to build an app for one of my hobbies: &lt;em&gt;curating skincare routines!&lt;/em&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://partyrock.aws/u/airahyusuff/t1zeE2t5j/Skincare-Routine-Builder" rel="noopener noreferrer"&gt;The app I built.&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But first,&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS PartyRock?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;PartyRock, an Amazon Bedrock Playground&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://partyrock.aws/" rel="noopener noreferrer"&gt;PartyRock&lt;/a&gt; is a playground for building generative AI-powered apps, &lt;strong&gt;ZERO coding involved&lt;/strong&gt;. With PartyRock, you can build your own AI app within minutes and share it with the world!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;This was me 2 days ago!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;All you have to do is describe what you want to build as your app, and PartyRock does the rest.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnh7cz1ikbldxy841l1t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcnh7cz1ikbldxy841l1t.png" alt="home page of PartyRock by AWS, an Amazon Bedrock Playground"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Being a playground, you get to learn the fundamentals of generative AI in a fun way by building silly (or not-so-silly) apps and experimenting hands-on with prompt engineering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Note:&lt;/em&gt;&lt;/strong&gt; This is not a guide to learn about Generative AI or prompt engineering, so I'll just include links to resources on them if you want to learn more.&lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-engineering-guidelines.html" rel="noopener noreferrer"&gt;Prompt Engineering on AWS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://research.ibm.com/blog/what-is-generative-AI" rel="noopener noreferrer"&gt;What is Generative AI?&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How to Get Started With PartyRock
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; Visit the &lt;a href="https://partyrock.aws/" rel="noopener noreferrer"&gt;website&lt;/a&gt; and sign up for an account. &lt;br&gt;
You can use your Google, Apple or Amazon account.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;No, this account is not related to your AWS account, if you have one.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz08fh634uu4whkcyizwu.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz08fh634uu4whkcyizwu.gif" alt="Get started with PartyRock by AWS, an Amazon Bedrock Playground"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;P.S.&lt;/strong&gt; I really like the UI! It gives off a playful feel (which is spot-on for its purpose).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt; Once your account is created, simply get started! You can either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;click on the &lt;em&gt;Build&lt;/em&gt; button on the home page,&lt;/li&gt;
&lt;li&gt;scroll to the bottom and describe your app under &lt;em&gt;'Let's Build'&lt;/em&gt;, or&lt;/li&gt;
&lt;li&gt;go to &lt;em&gt;'My Apps'&lt;/em&gt; and click to generate your new app.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fig1ujbcsq85tc276h0fj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fig1ujbcsq85tc276h0fj.png" alt="PartyRock by AWS, an Amazon Bedrock Playground"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also remix any of the listed apps if you do not want to start from scratch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; Describe your app in 2-3 sentences (or as many details as you can). For mine, I gave some context and what I wanted the app to do.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;This gets you started on practising prompt engineering.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hsy4an3tg7qhay64wys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3hsy4an3tg7qhay64wys.png" alt="Describe your app in app builder - Partyrock by AWS"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Voila! Your app is ready to go!&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft65cxqotdk2gtnrxqxws.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft65cxqotdk2gtnrxqxws.png" alt="Skincare routine builder - an AWS PartyRock app (Generative AI apps)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. (Optional) Tweak your app&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There are several ways to make your app better. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First, you can try resizing the widgets to get a better layout. This was my final layout after resizing and moving the widgets around.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5p6prsqsy82bposhoot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5p6prsqsy82bposhoot.png" alt="Edit widgets - building AI apps with PartyRock by AWS, an Amazon Bedrock Playground"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You can also add new widgets and edit existing widgets. There are different kinds of widgets as shown below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5g97f1lqypgynifultx5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5g97f1lqypgynifultx5.png" alt="Available widgets for building AI apps with PartyRock by AWS, an Amazon Bedrock Playground"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;And most importantly,&lt;/strong&gt; you can edit the AI-powered widgets (the ones that show the results of your app) to improve your prompt engineering skills.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I modified mine to include more details and be more specific with some parts of the responses. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6ryu3m0bhvxqucptr7r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe6ryu3m0bhvxqucptr7r.png" alt="Prompt engineering - using PartyRock by AWS, an Amazon Bedrock Playground"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8fs8puocwcve9iqde7z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp8fs8puocwcve9iqde7z.png" alt="Prompt engineering - using PartyRock by AWS, an Amazon Bedrock Playground"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rw2x7yg4gsz8xdsc31g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1rw2x7yg4gsz8xdsc31g.png" alt="Prompt engineering - using PartyRock by AWS, an Amazon Bedrock Playground"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Result&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvsp8pvh4d6lybis2w5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsvsp8pvh4d6lybis2w5u.png" alt="Skincare routine builder - an AWS PartyRock app (Generative AI apps)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also played around with the models and it was interesting to see how they all presented different responses for the part of the prompts that had to do with text formatting.&lt;/p&gt;

&lt;p&gt;E.g., &lt;strong&gt;&lt;em&gt;Claude Instant&lt;/em&gt;&lt;/strong&gt; sometimes presented the links the way you’d expect (as shown below) but other times it did not. &lt;strong&gt;&lt;em&gt;Claude&lt;/em&gt;&lt;/strong&gt; almost always presented links plainly and the Jurassic models ignored all text formatting prompts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rzgu3gp3o4gp5mw6wv6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rzgu3gp3o4gp5mw6wv6.png" alt="using Claude Instant model for PartyRock by AWS, an Amazon Bedrock Playground"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Claude&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93bpvbinrypoie12ln78.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93bpvbinrypoie12ln78.png" alt="using Claude model for PartyRock by AWS, an Amazon Bedrock Playground"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Jurassic-2 Mid&lt;/strong&gt;
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gwwyy337x2pb9rhijj5.png" alt="using Claude model for PartyRock by AWS, an Amazon Bedrock Playground"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And that gives a summary of what you can do with PartyRock.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Check out my app &lt;a href="https://partyrock.aws/u/airahyusuff/t1zeE2t5j/Skincare-Routine-Builder" rel="noopener noreferrer"&gt;here&lt;/a&gt;!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46r35g1skc3p9sbkpfh7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46r35g1skc3p9sbkpfh7.png" alt="Skincare routine builder - an AWS PartyRock app (Generative AI apps)"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Now, it's your turn!&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Try building yourself an app that does anything you can imagine - it could be a daily motivational quotes app or any other random idea you have!&lt;/p&gt;

&lt;h2&gt;
  
  
  A Question About PartyRock
&lt;/h2&gt;

&lt;p&gt;No, PartyRock is not an AWS "&lt;strong&gt;&lt;em&gt;product&lt;/em&gt;&lt;/strong&gt;". It is simply a playground for you to learn more about generative AI and work on your prompt engineering skills.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;What better way to do that than to learn hands-on by building apps?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;However, it is powered by the foundation models available on &lt;a href="https://aws.amazon.com/bedrock/" rel="noopener noreferrer"&gt;Amazon Bedrock&lt;/a&gt;, so you also get to use some of these models and observe how they work before moving on to utilise them in Bedrock.&lt;/p&gt;

&lt;p&gt;To learn more about PartyRock and how to use it, check out the following resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://partyrock.aws/guide/getStarted" rel="noopener noreferrer"&gt;Getting started with PartyRock&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://partyrock.aws/guide/building" rel="noopener noreferrer"&gt;How to build an app with PartyRock&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Got questions? &lt;a href="https://partyrock.aws/guide/faq" rel="noopener noreferrer"&gt;Read the FAQs&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Thank you for reading and I hope it was a fun one for you. &lt;/p&gt;

&lt;p&gt;Like PartyRock says, &lt;strong&gt;&lt;em&gt;anyone can build AI apps&lt;/em&gt;&lt;/strong&gt;, so happy building!🎉 &lt;/p&gt;

</description>
      <category>aws</category>
      <category>partyrock</category>
      <category>bedrock</category>
      <category>generativeai</category>
    </item>
    <item>
      <title>Deploying Node.js Apps on AWS: Elastic Beanstalk Or App Runner?</title>
      <dc:creator>Airat Yusuff</dc:creator>
      <pubDate>Wed, 25 Oct 2023 09:24:55 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploying-nodejs-apps-on-aws-elastic-beanstalk-or-app-runner-45am</link>
      <guid>https://dev.to/aws-builders/deploying-nodejs-apps-on-aws-elastic-beanstalk-or-app-runner-45am</guid>
      <description>&lt;p&gt;AWS offers over &lt;a href="https://www.aboutamazon.com/what-we-do/amazon-web-services#:~:text=AWS%20has%20over%20200%20fully,%2C%20industries%2C%20and%20use%20cases." rel="noopener noreferrer"&gt;200 services&lt;/a&gt;, with each of them catering to different use cases. There are at least 5 of them for deploying web applications and API services, so it can be difficult to decide which service is best suited for your use case.&lt;/p&gt;

&lt;p&gt;I recently tried to build an API using Node.js and I wanted to deploy using AWS &lt;strong&gt;&lt;a href="https://aws.amazon.com/elasticbeanstalk/" rel="noopener noreferrer"&gt;Elastic Beanstalk&lt;/a&gt;&lt;/strong&gt; - it is often the recommended choice for &lt;a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs.html" rel="noopener noreferrer"&gt;deploying Node.js applications&lt;/a&gt; and I was also familiar with it.&lt;br&gt;
However, I soon realised that it was not the best choice for my application and had to find an alternative, which led me to &lt;strong&gt;&lt;a href="https://aws.amazon.com/apprunner/" rel="noopener noreferrer"&gt;App Runner&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Although App Runner was more suited for my use case, time and effort were already wasted, so I decided to write this article to possibly prevent you (and future me!) from dealing with the same scenario.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This post&lt;/strong&gt; contains a brief overview of the API, how it was deployed to AWS Elastic Beanstalk, the issues encountered that led to using App Runner, and how it was successfully deployed. It wraps up with a summary of how to choose the better option for your use case.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;a href="https://aws.amazon.com/getting-started/decision-guides/containers-on-aws-how-to-choose/" rel="noopener noreferrer"&gt;This AWS article&lt;/a&gt;&lt;/strong&gt; on choosing the right container service is also a good place to start when making this decision. Both App Runner and Elastic Beanstalk are provisioning services, so they handle the orchestration layer complexities and are optimised for ease of use.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ful5499dm8h7ue8dmdbks.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ful5499dm8h7ue8dmdbks.jpeg" alt="AWS container services for provisioning, orchestration and capacity" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;The API was built using &lt;strong&gt;&lt;a href="https://expressjs.com/" rel="noopener noreferrer"&gt;Express.js&lt;/a&gt;&lt;/strong&gt; and integrated with a MySQL database using &lt;strong&gt;&lt;a href="https://sequelize.org/" rel="noopener noreferrer"&gt;Sequelize&lt;/a&gt;&lt;/strong&gt;. This meant I needed to deploy both the database and API, which was not a problem.&lt;/p&gt;

&lt;p&gt;First, I created a database on AWS RDS using the MySQL engine, added the credentials to connect to my API, switched to the production configuration, and synced the database. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fia7h8c3n128ev374rqn6.JPEG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fia7h8c3n128ev374rqn6.JPEG" alt="Nodejs API configuration with MySQL database" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next was deploying the API.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Elastic Beanstalk
&lt;/h3&gt;

&lt;p&gt;Elastic Beanstalk is a fully managed service for deploying and managing web applications, with support for containers. I deployed to Beanstalk using the following steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/GettingStarted.CreateApp.html" rel="noopener noreferrer"&gt;For Elastic Beanstalk&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
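
&lt;p&gt;For reference, the console steps linked above can also be done from the terminal with the EB CLI; the application and environment names below are placeholders, not from the original project:&lt;/p&gt;

```shell
# Hypothetical sketch using the EB CLI (pip install awsebcli).
eb init -p node.js my-api      # register the application and platform
eb create my-api-env           # provision the environment and deploy the code
eb logs                        # pull environment logs if something fails
```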

&lt;h3&gt;
  
  
  Issues Encountered
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1) Error connecting to the database&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Although the deployment was successful, I kept getting &lt;strong&gt;&lt;em&gt;"502 Bad Gateway"&lt;/em&gt;&lt;/strong&gt; errors when accessing any endpoint. I checked the logs and it showed some &lt;code&gt;ETIMEDOUT&lt;/code&gt; errors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Create a security group with inbound rules for your database (in my case, MySQL) to allow traffic from the Beanstalk environment (via its security group) into the database.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;The full solution can be found &lt;a href="https://stackoverflow.com/a/63782474/11233049" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhtwrq4zpym7q0edeqlv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhhtwrq4zpym7q0edeqlv.png" alt="Fixing 503 etimedout error for AWS elastic beanstalk when connecting to database" width="800" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This error also occurred with App Runner, but the resolution required an extra step, as described below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2) Mixed Content errors&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was the main limitation that caused the switch.&lt;/p&gt;

&lt;p&gt;As you might expect, the project was full-stack and the web app was deployed to Netlify, which deploys to HTTPS but &lt;strong&gt;Beanstalk primarily deploys applications to HTTP&lt;/strong&gt;. This resulted in &lt;strong&gt;&lt;a href="https://developer.mozilla.org/en-US/docs/Web/Security/Mixed_content" rel="noopener noreferrer"&gt;Mixed Content&lt;/a&gt;&lt;/strong&gt; errors whenever a request was sent to the API from the front-end.&lt;/p&gt;

&lt;p&gt;To resolve the issue, a custom domain is needed to &lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html" rel="noopener noreferrer"&gt;configure the Beanstalk environment with HTTPS&lt;/a&gt;&lt;/strong&gt;. However, since I did not want to purchase one, I tried a workaround with a load balancer: the balancer listens for requests on HTTPS using a self-signed certificate and terminates the HTTPS connection before the traffic reaches the instance.&lt;/p&gt;
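
&lt;p&gt;For reference, a self-signed certificate like the one used on the load balancer can be generated with openssl; the domain below is a placeholder, which is exactly why browsers flag it as untrusted:&lt;/p&gt;

```shell
# Generate a self-signed certificate and private key for the HTTPS listener.
# The CN is a hypothetical Beanstalk environment URL.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout private-key.pem -out certificate.pem \
  -days 365 -subj "/CN=my-eb-env.example.com"
```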

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Resources used for the solution can be found at the bottom of &lt;a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html#:~:text=Topics" rel="noopener noreferrer"&gt;this article&lt;/a&gt;.&lt;/em&gt;&lt;/strong&gt; &lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpmcvz21tkfmxtf3x5rq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbpmcvz21tkfmxtf3x5rq.png" alt="how to configure elastic beanstalk to use https" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The fix worked and I was able to get rid of the Mixed Content error. &lt;strong&gt;However&lt;/strong&gt;, this flags the frontend connection as non-secure because the self-signed security certificate on the API is not trusted by Chrome.&lt;/p&gt;

&lt;p&gt;It was at this point I decided to explore other services as I had already spent too much time creating a workaround for Beanstalk.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;P.S. Initially, I deployed the web app to &lt;a href="https://aws.amazon.com/amplify/" rel="noopener noreferrer"&gt;AWS Amplify&lt;/a&gt; but eventually switched to Netlify because, unlike Amplify, it allows editing of the URL of the deployed app without having to use a &lt;a href="https://docs.aws.amazon.com/amplify/latest/userguide/custom-domains.html" rel="noopener noreferrer"&gt;custom domain&lt;/a&gt;.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;After doing some research and learning that App Runner deploys applications to HTTPS, I decided to use it instead.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS App Runner
&lt;/h3&gt;

&lt;p&gt;App Runner is a fully managed service for easily deploying web applications and API services: you connect your source code (or container image), and it handles containerisation and deployment for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It offers the simplest approach&lt;/strong&gt; to deploying these applications without having to manage the infrastructure.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;&lt;a href="https://docs.aws.amazon.com/apprunner/latest/dg/getting-started.html" rel="noopener noreferrer"&gt;Deploying with App Runner&lt;/a&gt;&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
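
&lt;p&gt;As a sketch, a service can also be created from the CLI once a source (e.g. a GitHub connection or a container image) is set up; the service name and configuration file here are placeholders:&lt;/p&gt;

```shell
# Hypothetical: source-config.json would describe the repo or image,
# build settings, and the port the app listens on.
aws apprunner create-service \
  --service-name my-node-api \
  --source-configuration file://source-config.json
```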

&lt;h3&gt;
  
  
  Extra Fix for Issue Encountered with Database
&lt;/h3&gt;

&lt;p&gt;When creating the service, under &lt;strong&gt;"Networking"&lt;/strong&gt; configurations, configure the service to use &lt;strong&gt;"custom VPC"&lt;/strong&gt; for outgoing traffic and create a new VPC Connector.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5gshd6i1eiqytskdouj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5gshd6i1eiqytskdouj.png" alt="Networking configuration for new AWS App runner service" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Ensure that the security group previously created for database access is included in the security groups for the VPC Connector. It should be configured with the default security group for the custom VPC being used for the service or opened up for all IP addresses (&lt;em&gt;not recommended&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn862hfkp8rzys2fkgvg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn862hfkp8rzys2fkgvg.png" alt="Adding VPC connector for new AWS App Runner service" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Both services can be used to deploy Node.js APIs and each has its pros and cons.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;Linking &lt;a href="https://aws.amazon.com/getting-started/decision-guides/containers-on-aws-how-to-choose/" rel="noopener noreferrer"&gt;the article &lt;/a&gt;on choosing the right AWS container service again.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;1) If you want the simplest approach,&lt;/strong&gt; App Runner is the choice for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2) If you have a custom domain for your API&lt;/strong&gt;, you can try Elastic Beanstalk. If not, App Runner is a better choice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3) If your application does not need regular updates,&lt;/strong&gt; Elastic Beanstalk is still good to go. However, if you regularly make updates to your application, App Runner is a more suited choice.&lt;/p&gt;

&lt;p&gt;This is because new updates are deployed on Beanstalk by uploading the updated &lt;code&gt;.zip&lt;/code&gt; version of the application and switching the environment to the new version. As far as I know (&lt;em&gt;&lt;strong&gt;please correct me if I'm wrong&lt;/strong&gt;&lt;/em&gt;), this is how updates are typically made, and the process is too manual in my opinion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On the other hand,&lt;/strong&gt; App Runner connects to a GitHub repository (or image registry) and can be configured for automatic redeployment when new updates are pushed to the repository.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Best for AWS Elastic Beanstalk&lt;/strong&gt;&lt;br&gt;
1) Custom domain available&lt;br&gt;
2) App is not regularly updated.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for AWS App Runner&lt;/strong&gt;&lt;br&gt;
1) Simplest solution to use without any prior knowledge&lt;br&gt;
2) No custom domain for the API&lt;br&gt;
3) App is regularly synced with new changes.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;Thank you for reading and I hope you find it useful!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>node</category>
      <category>deployment</category>
      <category>api</category>
    </item>
    <item>
      <title>Convert AWS CloudFormation Template For a Highly Available Web Server To Terraform Script</title>
      <dc:creator>Airat Yusuff</dc:creator>
      <pubDate>Tue, 31 Jan 2023 10:17:26 +0000</pubDate>
      <link>https://dev.to/aws-builders/convert-aws-cloudformation-template-for-a-highly-available-web-server-to-terraform-script-lhf</link>
      <guid>https://dev.to/aws-builders/convert-aws-cloudformation-template-for-a-highly-available-web-server-to-terraform-script-lhf</guid>
      <description>&lt;p&gt;I tried out Terraform for the first time in one of my &lt;a href="https://dev.to/aws-builders/working-with-terraform-as-a-noob-a-brief-overview-5a8a"&gt;&lt;strong&gt;previous posts&lt;/strong&gt;&lt;/a&gt; and since nothing beats learning by continuously doing, I decided to convert a CloudFormation script I wrote to "deploy infrastructure for highly available applications" to its Terraform version.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Remember &lt;a href="https://khairahscorner.hashnode.dev/week-3-writing-cloudformation-scripts" rel="noopener noreferrer"&gt;this post&lt;/a&gt;? Yeah, the CloudFormation script from this project.&lt;br&gt;
The project was mainly to understand (and practice) what to consider when deciding between high availability of applications vs. optimising costs, and how that translates to the architecture.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;: Have Terraform Open Source installed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps to Convert CloudFormation templates to Terraform scripts
&lt;/h2&gt;

&lt;p&gt;There are three main steps after creating the folder structure and writing the configurations, but I'll be highlighting the other steps as well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Structure the folder
&lt;/h3&gt;

&lt;p&gt;The project uses the &lt;code&gt;modules/&lt;/code&gt; folder structure because real-world projects will typically be structured that way. I created sub-folders like &lt;code&gt;setup&lt;/code&gt;, &lt;code&gt;security&lt;/code&gt;, etc. to create separate files with the configurations for each section.&lt;/p&gt;

&lt;p&gt;As highlighted above, each folder had at least its &lt;code&gt;main.tf&lt;/code&gt;, &lt;code&gt;outputs.tf&lt;/code&gt; and &lt;code&gt;variables.tf&lt;/code&gt; files.&lt;/p&gt;

&lt;h3&gt;
  
  
  Convert each resource declaration to terraform syntax
&lt;/h3&gt;

&lt;p&gt;This is where I spent the most time; there are several resources provided by the Terraform team for learning about different aspects of the tool: language syntax, CLI commands, providers' documentation, etc. I also encountered several errors and got some clarifications from reading the documentation.&lt;/p&gt;

&lt;p&gt;The final configuration files can be found in &lt;a href="https://github.com/khairahscorner/from-cloudformation-to-terraform" rel="noopener noreferrer"&gt;this repo&lt;/a&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://github.com/khairahscorner/CloudFormation-Udacity-Project" rel="noopener noreferrer"&gt;Link to the CF template&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And now to the "main" steps:&lt;/p&gt;

&lt;h3&gt;
  
  
  Initialise the working directory
&lt;/h3&gt;

&lt;p&gt;The directory that contains the configuration files defined above must be "initialised" before Terraform commands can be used to perform any operation. This is done with &lt;code&gt;terraform init&lt;/code&gt; and this pretty much sets up the folder with all the necessary configurations Terraform needs.&lt;/p&gt;
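
&lt;p&gt;Run the command from the root of the working directory; &lt;code&gt;terraform validate&lt;/code&gt; is an optional extra step that catches syntax and reference errors early:&lt;/p&gt;

```shell
terraform init       # downloads the providers and installs referenced modules
terraform validate   # optional: checks the configuration for errors before planning
```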

&lt;h3&gt;
  
  
  Run &lt;code&gt;terraform plan&lt;/code&gt; and &lt;code&gt;terraform apply&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;The plan command generates a preview of the updates to be made (this can be saved in a &lt;code&gt;.tfplan&lt;/code&gt; file).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;P.S.&lt;/strong&gt; The plan can be saved in files named other than the default &lt;code&gt;tfplan&lt;/code&gt;, but those require extra configurations that may be unnecessary.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;One of the issues I encountered during this step was associating route tables with subnets. Unlike AWS CloudFormation where I had to create each subnet individually (hence having direct access to the individual &lt;code&gt;subnet_id&lt;/code&gt;), Terraform provides &lt;strong&gt;meta-arguments&lt;/strong&gt; like &lt;code&gt;for_each&lt;/code&gt; that can be used to create similar subnets with just one configuration.&lt;/p&gt;

&lt;p&gt;However, it becomes difficult to directly retrieve the &lt;code&gt;subnet_id&lt;/code&gt;s and use them in another &lt;code&gt;for_each&lt;/code&gt; for creating the matching route-table associations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6q4uycjanqbhn9576fe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6q4uycjanqbhn9576fe.png" alt="Error message" width="800" height="165"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As suggested, a workaround would be to use the &lt;code&gt;-target&lt;/code&gt; option to apply just the &lt;code&gt;setup&lt;/code&gt; module before proceeding. The docs advise against this, so I instead created the subnet resources individually, since the values I needed could not be defined statically beforehand. That still did not seem efficient, so there is most likely a better way.&lt;/p&gt;

&lt;h4&gt;
  
  
  Workaround #1:
&lt;/h4&gt;

&lt;p&gt;&lt;code&gt;terraform plan -out=tfplan -target=module.setup&lt;br&gt;
terraform apply tfplan&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Workaround #2:
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed6z1ul33sqctxra8wbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed6z1ul33sqctxra8wbf.png" alt="Workaround for using for_each" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please let me know if you have another way of doing it!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Afterwards, run the commands as normal, and test:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;terraform plan -out=tfplan&lt;br&gt;
terraform apply tfplan&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't forget to delete your resources after you are done!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The command &lt;code&gt;terraform destroy&lt;/code&gt; is the equivalent of deleting CloudFormation stacks.&lt;/p&gt;

&lt;p&gt;The deployment can also be automated using a CI/CD tool, so that's what I'll be trying out next!&lt;/p&gt;

</description>
      <category>web3</category>
      <category>crypto</category>
      <category>webdev</category>
      <category>offers</category>
    </item>
    <item>
      <title>Using ChatGPT to Deploy Backend APIs to AWS EKS</title>
      <dc:creator>Airat Yusuff</dc:creator>
      <pubDate>Tue, 17 Jan 2023 18:34:08 +0000</pubDate>
      <link>https://dev.to/aws-builders/using-chatgpt-to-deploy-backend-apis-to-aws-eks-2nkj</link>
      <guid>https://dev.to/aws-builders/using-chatgpt-to-deploy-backend-apis-to-aws-eks-2nkj</guid>
      <description>&lt;p&gt;If someone had told me to use ChatGPT to practice cloud projects at a faster pace or without feeling too much like a fraud, I would probably have dismissed it😅.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://openai.com/blog/chatgpt/" rel="noopener noreferrer"&gt;ChatGPT&lt;/a&gt; has been a hot topic for some time now so I decided to try it out as well. I had not fully grasped the concept of Kubernetes, and how AWS EKS was supposed to work, so I figured I could try the bot instead of the gazillion tutorials that just seem to confuse me more!&lt;/p&gt;

&lt;p&gt;I asked ChatGPT (twice) for steps on how to deploy a full-stack app to AWS EKS and I got broadly similar answers on both occasions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbitxsb7qfd8kw7ov927e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbitxsb7qfd8kw7ov927e.png" alt="Image description" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygnkcy3b14sjzvm8448w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygnkcy3b14sjzvm8448w.png" alt="Image description" width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The responses provided a base for me to start from, and I was able to learn more about Kubernetes: what namespaces and services are, how pods are deployed to nodes, etc.&lt;/p&gt;

&lt;p&gt;In the end, I was able to work on a project that deploys three containerised apps into one EKS cluster: two in the same pod and one in a separate pod, with both pods running in a single node group of worker nodes.&lt;/p&gt;

&lt;p&gt;I deployed to Minikube first and although it took me hours of reading and understanding different aspects (and debugging!), both versions eventually worked.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps to Deploy Three Apps (Flask, Node and Go) to Minikube and AWS EKS clusters
&lt;/h2&gt;

&lt;p&gt;I had already worked on the Dockerfiles for all three apps (using tutorials😅) and tested that they worked as expected using &lt;code&gt;docker run&lt;/code&gt; and &lt;code&gt;docker-compose up&lt;/code&gt; (for the two apps going into one pod).&lt;/p&gt;

&lt;p&gt;I also built and pushed their latest Docker images to my repository, so these steps focus on how to deploy them to Minikube and AWS EKS.&lt;/p&gt;

&lt;p&gt;These steps assume you have:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;installed minikube, kubectl,&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;images pushed to Docker Hub and Docker desktop running, and&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;installed and configured aws-cli.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
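
&lt;p&gt;A quick way to confirm these assumptions before starting (exact version output will vary):&lt;/p&gt;

```shell
minikube version
kubectl version --client
docker --version          # Docker Desktop must also be running
aws --version
```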

&lt;h3&gt;
  
  
  1) Deploy to Minikube
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Start the Minikube cluster
&lt;code&gt;minikube start
&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Create a deployment, either using &lt;code&gt;kubectl create deployment&lt;/code&gt; or a manifest file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;apiVersion: apps/v1&lt;br&gt;
kind: Deployment&lt;br&gt;
metadata:&lt;br&gt;
  name: sample-name-deploy&lt;br&gt;
spec:&lt;br&gt;
  replicas: 1&lt;br&gt;
  selector:&lt;br&gt;
    matchLabels:&lt;br&gt;
      app: sample-name&lt;br&gt;
  template:&lt;br&gt;
    metadata:&lt;br&gt;
      labels:&lt;br&gt;
        app: sample-name&lt;br&gt;
    spec:&lt;br&gt;
      containers:&lt;br&gt;
      - name: go-app&lt;br&gt;
        image: [image from repo]&lt;br&gt;
      - name: node-app&lt;br&gt;
        image: [image from repo]&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
Running this file with kubectl creates a deployment that launches one sample-name pod (the &lt;code&gt;replicas&lt;/code&gt; field is optional if you want only one pod and no replicas).&lt;/p&gt;

&lt;p&gt;&lt;code&gt;kubectl apply -f deployment.yml&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a service, also either using &lt;code&gt;kubectl create&lt;/code&gt; or a manifest file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;&lt;br&gt;
apiVersion: v1&lt;br&gt;
kind: Service&lt;br&gt;
metadata:&lt;br&gt;
  name: sample-service&lt;br&gt;
spec:&lt;br&gt;
  type: NodePort&lt;br&gt;
  selector:&lt;br&gt;
    app: sample-name&lt;br&gt;
  ports:&lt;br&gt;
    - name: node-api&lt;br&gt;
      protocol: TCP&lt;br&gt;
      port: 80  # service port; the nodePort is auto-assigned since it's undefined&lt;br&gt;
      targetPort: 3000&lt;br&gt;
    - name: go-api&lt;br&gt;
      protocol: TCP&lt;br&gt;
      port: 9090  # service port; the nodePort is auto-assigned since it's undefined&lt;br&gt;
      targetPort: 8080&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Here, the service listens for requests on ports 80 and 9090 and forwards them to ports 3000 and 8080 on the appropriate container in the pod(s) created by the sample-name-deploy deployment.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&lt;br&gt;
kubectl apply -f service.yml&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check whether the pod(s) started successfully, get the name(s), and port-forward to either of the containers (apps).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;&lt;br&gt;
kubectl get pods&lt;br&gt;
kubectl port-forward pod/[pod_name_here] --address 0.0.0.0 3000:3000&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You can also run the script below to run the apps in the browser using port forwarding (if you launched one pod):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&lt;br&gt;
export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')&lt;br&gt;
echo Name of Pod(s): $POD_NAME&lt;br&gt;
kubectl port-forward pod/$POD_NAME --address 0.0.0.0 3000:3000&lt;br&gt;
kubectl port-forward pod/$POD_NAME --address 0.0.0.0 8080:8080&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The app(s) should be running on their respective ports.&lt;br&gt;
The service can also be created as a LoadBalancer type to access the apps via an external IP.&lt;br&gt;
&lt;code&gt;&lt;br&gt;
apiVersion: v1&lt;br&gt;
kind: Service&lt;br&gt;
metadata:&lt;br&gt;
  name: sample-service&lt;br&gt;
spec:&lt;br&gt;
  type: LoadBalancer&lt;br&gt;
  ...&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
However, to use this type with Minikube, a tunnel must be started to get an external IP for the service (otherwise, it just shows as pending when you run &lt;code&gt;kubectl get svc&lt;/code&gt;).&lt;br&gt;
&lt;code&gt;&lt;br&gt;
minikube tunnel&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy to AWS EKS
&lt;/h3&gt;

&lt;p&gt;Steps 2 and 3 remain the same, but an EKS cluster (with at least one node group for worker nodes) must be created first.&lt;/p&gt;
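&lt;p&gt;If you go the .yaml route with eksctl, a minimal cluster config might look like this sketch (the cluster name, node group name, region and instance settings are all placeholders):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&lt;br&gt;
# cluster.yml&lt;br&gt;
apiVersion: eksctl.io/v1alpha5&lt;br&gt;
kind: ClusterConfig&lt;br&gt;
metadata:&lt;br&gt;
  name: sample-cluster&lt;br&gt;
  region: eu-west-2&lt;br&gt;
managedNodeGroups:&lt;br&gt;
  - name: sample-workers&lt;br&gt;
    instanceType: t3.medium&lt;br&gt;
    desiredCapacity: 2&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Running eksctl create cluster -f cluster.yml also creates the cluster and node group IAM roles for you, which saves the manual role setup.&lt;/p&gt;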

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Create an EKS cluster and node groups on the console (or use a .yaml file)&lt;br&gt;
You need an IAM role that allows the Kubernetes control plane (i.e. EKS) to manage AWS resources on your behalf, and a separate node IAM role for the node groups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure kubectl to communicate with the cluster&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;&lt;br&gt;
aws eks update-kubeconfig --name [cluster-name]&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;
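&lt;p&gt;It's worth confirming that kubectl is now pointed at the right cluster and that the worker nodes have joined and are Ready:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&lt;br&gt;
kubectl config current-context&lt;br&gt;
kubectl get nodes&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;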

&lt;ul&gt;
&lt;li&gt;Run steps 2 - 4; using port forwarding for the cluster also worked.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As mentioned earlier, you can create the service as a LoadBalancer to expose the apps via an external IP. However, on AWS this provisions a legacy Classic Load Balancer, so I looked into using one of the current load balancer types instead.&lt;/p&gt;

&lt;p&gt;The steps to use an Application Load Balancer are more involved:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Follow the steps here to create the AWS Load Balancer Controller. It was a very long process for me because I used kubectl, so you might want to try eksctl instead.&lt;br&gt;
I also had to tag my subnets as specified in the guide so that the load balancer could "automatically pick up" the resources it should (along with meeting all the other requirements listed there). After setting up the controller, you can create the ingress resource itself with this guide.&lt;br&gt;
I ran into quite a few issues and could not get it working as expected, so I plan to come back to this later.&lt;/p&gt;
&lt;/blockquote&gt;
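&lt;p&gt;For reference, once the controller is installed, the ingress resource itself is fairly short. This is only a sketch; it assumes the sample-service from earlier and an internet-facing ALB:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;&lt;br&gt;
apiVersion: networking.k8s.io/v1&lt;br&gt;
kind: Ingress&lt;br&gt;
metadata:&lt;br&gt;
  name: sample-ingress&lt;br&gt;
  annotations:&lt;br&gt;
    alb.ingress.kubernetes.io/scheme: internet-facing&lt;br&gt;
    alb.ingress.kubernetes.io/target-type: ip&lt;br&gt;
spec:&lt;br&gt;
  ingressClassName: alb&lt;br&gt;
  rules:&lt;br&gt;
    - http:&lt;br&gt;
        paths:&lt;br&gt;
          - path: /&lt;br&gt;
            pathType: Prefix&lt;br&gt;
            backend:&lt;br&gt;
              service:&lt;br&gt;
                name: sample-service&lt;br&gt;
                port:&lt;br&gt;
                  number: 3000&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;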

&lt;p&gt;&lt;strong&gt;There you have it!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I was able to successfully deploy all three apps. I still have pending issues to resolve with using an ALB instead of the default LB created by Kubernetes, but I'm positive I will find the solution soon!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>cloud</category>
      <category>chatgpt</category>
    </item>
  </channel>
</rss>
