<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: itsmenilik</title>
    <description>The latest articles on DEV Community by itsmenilik (@itsmenilik).</description>
    <link>https://dev.to/itsmenilik</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F610261%2F67e6ae75-6249-4cdd-b45e-bcbdf19150f5.jpg</url>
      <title>DEV Community: itsmenilik</title>
      <link>https://dev.to/itsmenilik</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/itsmenilik"/>
    <language>en</language>
    <item>
      <title>Build a Serverless Event-Driven Application - Pixelator3000</title>
      <dc:creator>itsmenilik</dc:creator>
      <pubDate>Fri, 29 Sep 2023 00:07:20 +0000</pubDate>
      <link>https://dev.to/itsmenilik/build-a-serverless-event-driven-application-pixelator3000-4b2c</link>
      <guid>https://dev.to/itsmenilik/build-a-serverless-event-driven-application-pixelator3000-4b2c</guid>
      <description>&lt;h1&gt;
  
  
  Introduction:
&lt;/h1&gt;

&lt;p&gt;Starting a new AWS project is always invigorating. Lately, I've been diving into another cloud venture, driven by my keen interest in exploring event-driven architecture intricacies. I'm eager to share my experiences, particularly in configuring Lambda Roles for specific event handling.&lt;/p&gt;

&lt;h2&gt;
  
  
  WHAT DOES THE ARCHITECTURE LOOK LIKE 🤔
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu89ivz216gb5nullhe82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu89ivz216gb5nullhe82.png" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was experimenting with two AWS features: S3 event notifications and Lambda. You're going to see how I used both of these technologies to implement a simple event-driven serverless image-processing workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  HOW DOES EVERYTHING FLOW TOGETHER?!
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We're going to configure S3 event notifications on an S3 bucket, which we'll call &lt;strong&gt;&lt;u&gt;&lt;em&gt;the source bucket&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now, &lt;strong&gt;&lt;u&gt;&lt;em&gt;when any images get uploaded&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt; to this S3 bucket, &lt;strong&gt;&lt;u&gt;&lt;em&gt;an event is going to be generated&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;That event will be passed through to &lt;strong&gt;&lt;u&gt;&lt;em&gt;a Lambda function&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt;, which &lt;strong&gt;&lt;u&gt;&lt;em&gt;will be invoked.&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;That &lt;strong&gt;&lt;u&gt;&lt;em&gt;Lambda&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt; function &lt;strong&gt;&lt;u&gt;&lt;em&gt;will gather all of the data contained within that event object&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It will &lt;strong&gt;load the original image&lt;/strong&gt; and it will process that image, in this case, &lt;strong&gt;&lt;u&gt;&lt;em&gt;generating five different pixelated versions of that image&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt; and &lt;strong&gt;putting them into a processed bucket.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
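&lt;p&gt;The event object that S3 hands to Lambda carries everything step 4 needs. Here's a sketch of pulling the bucket and key out of an illustrative, heavily truncated s3:ObjectCreated payload (the real notification carries many more fields, and the names below are placeholders):&lt;/p&gt;

```python
# Illustrative, truncated S3 event notification -- bucket and object names
# are placeholders, not the real ones from this project.
sample_event = {
    "Records": [
        {
            "eventSource": "aws:s3",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "REPLACEME-source"},
                "object": {"key": "cat.jpg", "size": 102400},
            },
        }
    ]
}

def extract_source(event):
    """Pull the bucket name and object key out of the first S3 event record."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]

bucket, key = extract_source(sample_event)
```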

&lt;h2&gt;
  
  
  Where Did You Learn This?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkipny0wqfe53s9dwd2pw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkipny0wqfe53s9dwd2pw.gif" alt="Image description" width="498" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am a big proponent of continuous learning and continuous development. So I did what anyone with internet access would do: I searched YouTube for videos that could teach me the ins and outs of event-driven serverless workflows. Luckily, I came across LearnCantrill's videos and went through one of his Mini Projects.&lt;/p&gt;

&lt;p&gt;Here is a link to his channel &lt;a href="https://www.youtube.com/@LearnCantrill" rel="noopener noreferrer"&gt;https://www.youtube.com/@LearnCantrill&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I was able to find him through one of my favorite AWS gurus, Be A Better Dev.&lt;/p&gt;

&lt;p&gt;Here is a link to his channel &lt;a href="https://www.youtube.com/@BeABetterDev" rel="noopener noreferrer"&gt;https://www.youtube.com/@BeABetterDev&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  LET'S GET STARTED 🤩
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Stage 1: Create the S3 Buckets
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;source bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyiromksqxhnxw4v79udl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyiromksqxhnxw4v79udl.png" alt="Image description" width="800" height="784"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;processed bucket
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkad29j3k9rd467ps464t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkad29j3k9rd467ps464t.png" alt="Image description" width="800" height="783"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My journey began with the creation of the S3 buckets. This was familiar territory, as I had worked with S3 before. &lt;/p&gt;

&lt;p&gt;I set up the source bucket to store the images that would trigger events, forming the basis of my event-driven architecture.&lt;/p&gt;

&lt;p&gt;I then set up the processed bucket, which holds the images once the Lambda function has finished pixelating them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 2: Create the Lambda Role
&lt;/h2&gt;

&lt;p&gt;Creating the Lambda Role was the first step that presented a unique challenge. &lt;/p&gt;

&lt;p&gt;To build a serverless architecture, it's essential to grant the Lambda functions the necessary permissions to interact with AWS services. &lt;/p&gt;

&lt;p&gt;In my case, I needed to configure the Lambda Role to allow access to S3 buckets and, more importantly, define the specific events that would trigger the Lambda function.&lt;/p&gt;

&lt;p&gt;Here, I had to dive deep into AWS Identity and Access Management (IAM) policies and roles. I learned that IAM roles are crucial for security and least privilege access.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:*"],
        "Resource": [
            "arn:aws:s3:::REPLACEME-processed",
            "arn:aws:s3:::REPLACEME-processed/*",
            "arn:aws:s3:::REPLACEME-source/*",
            "arn:aws:s3:::REPLACEME-source"
        ]
      },
      {
        "Effect": "Allow",
        "Action": "logs:CreateLogGroup",
        "Resource": "arn:aws:logs:us-east-1:YOURACCOUNTID:*"
      },
      {
        "Effect": "Allow",
        "Action": [
            "logs:CreateLogStream",
            "logs:PutLogEvents"
        ],
        "Resource": [
            "arn:aws:logs:us-east-1:YOURACCOUNTID:log-group:/aws/lambda/pixelator:*"
        ]
      }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I crafted a policy that grants the Lambda function the permissions it needs: access to the source bucket to load the original image, access to CloudWatch Logs to record status events, and access to the processed bucket to store the output. &lt;/p&gt;
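&lt;p&gt;One thing worth noting: the policy above uses s3:* for convenience, which is broader than the least-privilege principle mentioned earlier. A sketch of how the S3 statements could be tightened, assuming the function only ever reads from the source bucket and writes to the processed bucket (bucket names are placeholders, as in the original policy):&lt;/p&gt;

```python
import json

def least_privilege_s3_statements(source, processed):
    """Build narrower S3 statements: read-only on source, write-only on processed."""
    return [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::{}/*".format(source)],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": ["arn:aws:s3:::{}/*".format(processed)],
        },
    ]

policy = {
    "Version": "2012-10-17",
    "Statement": least_privilege_s3_statements("REPLACEME-source", "REPLACEME-processed"),
}
print(json.dumps(policy, indent=2))
```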

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5q71okn7nr5endip884t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5q71okn7nr5endip884t.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 3: Create the Lambda Function
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import uuid
import boto3

from PIL import Image

# bucket name for the pixelated images
processed_bucket = os.environ['processed_bucket']

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    print(event)

    # get bucket and object key from the event object
    source_bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    # generate a temp name, and set the location for our original image
    object_key = str(uuid.uuid4()) + '-' + key
    img_download_path = '/tmp/{}'.format(object_key)

    # download the source image from S3 to a temp location within the execution environment
    with open(img_download_path, 'wb') as img_file:
        s3_client.download_fileobj(source_bucket, key, img_file)

    # biggify the pixels and store temp pixelated versions
    sizes = (8, 16, 32, 48, 64)
    for s in sizes:
        pixelate((s, s), img_download_path, '/tmp/pixelated-{0}x{0}-{1}'.format(s, object_key))

    # upload the pixelated versions to the destination bucket
    for s in sizes:
        s3_client.upload_file('/tmp/pixelated-{0}x{0}-{1}'.format(s, object_key),
                              processed_bucket, 'pixelated-{0}x{0}-{1}'.format(s, key))

def pixelate(pixelsize, image_path, pixelated_img_path):
    # shrink the image, then scale it back up with nearest-neighbour sampling
    img = Image.open(image_path)
    temp_img = img.resize(pixelsize, Image.BILINEAR)
    new_img = temp_img.resize(img.size, Image.NEAREST)
    new_img.save(pixelated_img_path)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Above is the Lambda function code; keep in mind that it processes events triggered by changes in the S3 bucket.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;First, it reads an environment variable called processed_bucket from the Lambda runtime environment; this tells the function which bucket to store the processed images in.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then it uses the boto3 library to connect to S3. It receives an event because, remember, this is an event-driven workflow: when an object is uploaded to the source bucket, an event is generated and passed into the Lambda function when it is invoked.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, Lambda creates a temporary name for the object (a random string), sets the download path inside the temp folder of the Lambda runtime environment, and uses boto3 to download the original object.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After that, it runs the pixelate function five times, generating five versions of the image with different pixel sizes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;&lt;em&gt;Essentially, all this function does is decrease the size of the image and then increase it again, which causes it to become pixelated.&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt; &lt;/p&gt;
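&lt;p&gt;You can see the same shrink-then-enlarge trick without Pillow at all. A toy sketch on a grid of brightness values, roughly mimicking the resize calls: averaging blocks to shrink (like BILINEAR, approximately) and repeating values to enlarge (like NEAREST):&lt;/p&gt;

```python
def shrink(grid, factor):
    """Downscale a square grid by averaging each factor x factor block."""
    n = len(grid)
    small = []
    for r in range(0, n, factor):
        row = []
        for c in range(0, n, factor):
            block = [grid[r + i][c + j] for i in range(factor) for j in range(factor)]
            row.append(sum(block) // len(block))
        small.append(row)
    return small

def enlarge(grid, factor):
    """Upscale by repeating each value -- this is what creates the blocky pixels."""
    out = []
    for row in grid:
        wide = [v for v in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(wide))
    return out

# 4x4 "image" -> 2x2 -> back to 4x4: the fine detail is gone, blocks remain
img = [
    [0, 10, 200, 210],
    [10, 0, 210, 200],
    [100, 110, 50, 60],
    [110, 100, 60, 50],
]
pixelated = enlarge(shrink(img, 2), 2)
```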

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10upxgtavdms82n3h75o.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F10upxgtavdms82n3h75o.gif" alt="Image description" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We run this five times, and then once again use the boto3 library, this time to upload each of the five new pixelated objects to the processed bucket.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;&lt;em&gt;Note:&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt; This Python code uses libraries that aren't included in the default Lambda runtime, so we have to zip the .py file together with the Pillow library as a deployment package.&lt;/p&gt;
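&lt;p&gt;A sketch of building that deployment zip with nothing but the standard library. The folder layout is an assumption: you'd typically run pip install Pillow -t package/ on a Linux machine first so the compiled wheels match the Lambda runtime, then bundle that directory alongside the handler file:&lt;/p&gt;

```python
import os
import zipfile

def build_deployment_zip(function_file, package_dir, zip_path):
    """Bundle the handler .py plus its installed dependencies into a Lambda zip."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        # dependency files (e.g. the Pillow install dir) go at the archive root
        for root, _dirs, files in os.walk(package_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, package_dir))
        # the handler itself also sits at the archive root
        zf.write(function_file, os.path.basename(function_file))
```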

&lt;h2&gt;
  
  
  Stage 4: Configure the Lambda Function &amp;amp; Trigger
&lt;/h2&gt;

&lt;p&gt;When you're creating a Lambda function from the console UI, you can often enter the code directly, but because our function requires additional libraries, we need to upload the deployment zip. &lt;/p&gt;

&lt;p&gt;Next, there are a few bits of configuration we need to change to make sure this function invokes without any issues.&lt;/p&gt;

&lt;p&gt;To do that, we set an environment variable so that the function knows where to place the processed images, i.e. the output images of this function.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1jgo188469kohtgdanb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1jgo188469kohtgdanb.png" alt="Image description" width="800" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that's the function configured. The next step is to configure the trigger: what causes this function to be invoked? We want it invoked by S3 event notifications, so any time an object is uploaded to the source bucket, the Lambda function runs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3mkqd3atnb4p6lsar3k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3mkqd3atnb4p6lsar3k.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We now have an event notification configured on our S3 bucket.&lt;/p&gt;
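&lt;p&gt;I set this trigger up through the console, but the same thing can be expressed as the NotificationConfiguration document that S3 stores on the bucket. A sketch of building it (the Lambda ARN and the optional suffix filter are illustrative placeholders; with boto3 you'd pass this dict to put_bucket_notification_configuration on the source bucket):&lt;/p&gt;

```python
def build_notification_config(function_arn, suffix=None):
    """S3 event notification config: invoke the Lambda for every object created."""
    config = {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": function_arn,
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    }
    if suffix:
        # optional: only fire for keys ending in the given suffix
        config["LambdaFunctionConfigurations"][0]["Filter"] = {
            "Key": {"FilterRules": [{"Name": "suffix", "Value": suffix}]}
        }
    return config

config = build_notification_config(
    "arn:aws:lambda:us-east-1:YOURACCOUNTID:function:pixelator", suffix=".jpg"
)
# then: s3_client.put_bucket_notification_configuration(
#           Bucket="REPLACEME-source", NotificationConfiguration=config)
```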

&lt;h2&gt;
  
  
  Stage 5: Test and Monitor
&lt;/h2&gt;

&lt;p&gt;After setting up the entire system, rigorous testing was essential. I uploaded files to the S3 buckets and closely monitored how the Lambda function responded to the events. It was fascinating to see how my architecture seamlessly processed events in real-time, thanks to the power of serverless computing.&lt;/p&gt;

&lt;p&gt;For monitoring, I relied on AWS CloudWatch to keep an eye on Lambda function executions, error rates, and the performance of my event-driven system. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqlrqyibrmpopitzm53s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiqlrqyibrmpopitzm53s.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Stage 6: Cleanup
&lt;/h2&gt;

&lt;p&gt;As responsible cloud users, it's crucial to clean up resources when they're no longer needed. After thoroughly testing and monitoring my architecture, I followed AWS best practices by deleting the S3 buckets, Lambda function, and associated resources.&lt;/p&gt;

&lt;h1&gt;
  
  
  Final Thoughts:
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl06ig5hvoni3e056a4oz.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl06ig5hvoni3e056a4oz.gif" alt="Image description" width="400" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building a serverless event-driven architecture was a rewarding experience, and I gained valuable insights along the way. While I was familiar with creating S3 buckets, configuring the Lambda Role to handle specific events was a significant learning curve. AWS's robust IAM system and user-friendly interfaces made it possible for me to achieve this.&lt;/p&gt;

&lt;p&gt;I hope my journey serves as a helpful guide for anyone looking to embark on a similar adventure in building serverless event-driven architectures. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Coding the Cloud ☁️: A Deep Dive into AWS Database Migration Magic 🪄</title>
      <dc:creator>itsmenilik</dc:creator>
      <pubDate>Tue, 26 Sep 2023 02:02:47 +0000</pubDate>
      <link>https://dev.to/itsmenilik/my-epic-journey-with-aws-database-migration-service-a-step-by-step-odyssey-i14</link>
      <guid>https://dev.to/itsmenilik/my-epic-journey-with-aws-database-migration-service-a-step-by-step-odyssey-i14</guid>
      <description>&lt;p&gt;Have you ever faced the daunting task of migrating an on-premise database to the cloud? &lt;/p&gt;

&lt;p&gt;Well, I recently embarked on a journey to learn more about a technical enabler that is meant to do just that. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;HINT: If you haven't figured it out yet, it's AWS Database Migration Service.&lt;/strong&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  HOW IT ALL STARTED
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z43g420dwi3twbqg4ha.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z43g420dwi3twbqg4ha.gif" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the projects I am on is tasked with creating the central authoritative data domain hub for a client I am working with. Doing this requires massive adaptability, strong problem-solving skills, and a little perseverance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpoa53hg63urh22xzsg06.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpoa53hg63urh22xzsg06.gif" alt="Image description" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was tasked with capturing requirements for the client. The client has faced a series of problems because they retrieve their data from multiple source locations (mainframes, on-premises data centers, different AWS cloud infrastructures, etc.). They would prefer to have all of their data in one central location.&lt;/p&gt;

&lt;p&gt;However, doing this requires us to move a database from their platform onto ours. One of the solutions my teammate suggested was AWS DMS. It would allow us to translate the client's current database schema onto an RDS database instance on our platform. That way, when the migration is complete, they no longer have to worry about managing data on their platform, and my project team can focus on organizing the treasury's data as we create the central authoritative data hub.&lt;/p&gt;

&lt;p&gt;In this blog post, I'll take you through my experience with using AWS Database Migration Service (DMS) in five epic stages.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;STAGE 1 : Provision the environment and review tasks&lt;/li&gt;
&lt;li&gt;STAGE 2 : Establish Private Connectivity Between the environments (VPC Peer)&lt;/li&gt;
&lt;li&gt;STAGE 3 : Create &amp;amp; Configure the AWS Side infrastructure (App and DB)&lt;/li&gt;
&lt;li&gt;STAGE 4 : Migrate Database &amp;amp; Cut over&lt;/li&gt;
&lt;li&gt;STAGE 5 : Cleanup the account&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Where Did You Learn This?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4i9pjaauna4q29umo87.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo4i9pjaauna4q29umo87.gif" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am a big proponent of continuous learning and continuous development. So I did what anyone with internet access would do: I searched YouTube for videos that could teach me the processes related to database migration. Luckily, I came across &lt;strong&gt;&lt;u&gt;&lt;em&gt;LearnCantrill&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt; videos and went through one of his Mini Projects. &lt;/p&gt;

&lt;p&gt;Here is a link to his channel &lt;a href="https://www.youtube.com/@LearnCantrill" rel="noopener noreferrer"&gt;https://www.youtube.com/@LearnCantrill&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;I was able to find him through one of my favorite AWS gurus, &lt;strong&gt;&lt;u&gt;&lt;em&gt;Be A Better Dev&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Here is a link to his channel &lt;a href="https://www.youtube.com/@BeABetterDev" rel="noopener noreferrer"&gt;https://www.youtube.com/@BeABetterDev&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is The End Goal?!? SHOW ME SOME ARCHITECTURE
&lt;/h2&gt;

&lt;p&gt;Okay okay, settle down hahaha. You're going to read about how I migrated a simple web application from an on-premises environment into AWS. The on-premises environment is a virtual web server simulated using EC2 and a self-managed MariaDB database server, also simulated via EC2. &lt;/p&gt;

&lt;p&gt;After the migration, the architecture will run in AWS on an EC2 web server together with an RDS managed SQL database. The migration itself is possible because we are using the Database Migration Service, or DMS, from AWS. Here is the architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgv93a553r5dxkzgaydhi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgv93a553r5dxkzgaydhi.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  STAGE 1: Provision the Environment and Review Tasks
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xxch8c2rl3e9ytmb6v2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7xxch8c2rl3e9ytmb6v2.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Stage one is about implementing the base infrastructure. We will be creating the simulated on-premises environment on the left and the base AWS infrastructure on the right.&lt;/p&gt;

&lt;p&gt;The adventure began with provisioning the necessary AWS resources and reviewing the migration tasks. &lt;/p&gt;

&lt;p&gt;These resources came from a CloudFormation stack that created the following:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;&lt;em&gt;AWS Cloud Resources&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC&lt;/li&gt;
&lt;li&gt;Internet Gateway&lt;/li&gt;
&lt;li&gt;Internet Gateway Attachment&lt;/li&gt;
&lt;li&gt;Default Route Table&lt;/li&gt;
&lt;li&gt;Private Route Table&lt;/li&gt;
&lt;li&gt;Public Route Table&lt;/li&gt;
&lt;li&gt;Database Security Group&lt;/li&gt;
&lt;li&gt;Security Group Web Application&lt;/li&gt;
&lt;li&gt;Private Subnet A&lt;/li&gt;
&lt;li&gt;Private Subnet B&lt;/li&gt;
&lt;li&gt;Public Subnet A&lt;/li&gt;
&lt;li&gt;Public Subnet B&lt;/li&gt;
&lt;li&gt;Private A Route Table Association&lt;/li&gt;
&lt;li&gt;Private B Route Table Association&lt;/li&gt;
&lt;li&gt;Public A Route Table Association &lt;/li&gt;
&lt;li&gt;Public B Route Table Association&lt;/li&gt;
&lt;li&gt;DMS Instance Profile&lt;/li&gt;
&lt;li&gt;IAM Role&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;&lt;em&gt;On-Premises Resources&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;VPC&lt;/li&gt;
&lt;li&gt;Internet Gateway&lt;/li&gt;
&lt;li&gt;Internet Gateway Attachment&lt;/li&gt;
&lt;li&gt;Default Route Table&lt;/li&gt;
&lt;li&gt;Public Route Table&lt;/li&gt;
&lt;li&gt;Database Security Group&lt;/li&gt;
&lt;li&gt;Security Group Web Application&lt;/li&gt;
&lt;li&gt;Public Subnet&lt;/li&gt;
&lt;li&gt;Public Route Table Association&lt;/li&gt;
&lt;li&gt;DMS Instance Profile&lt;/li&gt;
&lt;li&gt;IAM Role&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You'll see that I've got two instances, CatWeb and CatDB. CatWeb is the simulated virtual machine web server and CatDB is the simulated virtual machine self-managed database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgoxv020l9ffybkm9pxm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgoxv020l9ffybkm9pxm.png" alt="Image description" width="800" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that every resource on the on-premises side is provisioned, we can take a look at the front-facing website. We do this by copying the Public IPv4 DNS into our web browser and navigating to the URL. Here is what it looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6nanvit3iqt43ifa8qp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6nanvit3iqt43ifa8qp.png" alt="Image description" width="800" height="466"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;&lt;em&gt;My Internal Thoughts&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
AWS CloudFormation makes this stage surprisingly straightforward, allowing you to set up all of this infrastructure with ease. I had used CloudFormation in a previous project, so I was feeling exceptionally good about having taken the time to learn the service beforehand. It had me feeling like I'd become a Solutions Architect in no time.&lt;/p&gt;

&lt;h2&gt;
  
  
  STAGE 2: Establish Private Connectivity Between the Environments (VPC Peer)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzjnal08hzveazgskrjk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzjnal08hzveazgskrjk.png" alt="Image description" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This stage involves provisioning private connectivity between the simulated on-premises environment on the left and the AWS environment on the right. In production you'd use a VPN or Direct Connect, but to simulate that in this project I configured a VPC Peering Connection. This sets up the link between the on-premises and AWS environments, allowing us to communicate over a secure connection between the two VPCs.&lt;/p&gt;

&lt;p&gt;You'll see that I've created the connection by selecting the on-premises VPC and the AWS VPC. You can even see that we could select VPCs from another region or another AWS account. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaz9mlf6nkbuhp65tdhl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgaz9mlf6nkbuhp65tdhl.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, if I were creating this VPC Peer across two separate AWS accounts, one account would need to create the request and the other would need to accept it. Because we're creating both VPCs in the same account, we can perform both steps ourselves.&lt;/p&gt;

&lt;p&gt;The next step in this stage is to configure routing. We need to configure the VPC routers in each VPC to know how to send traffic to the other side of the VPC Peer. To do this, I went to the route table associated with the on-premises VPC and edited its routes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwec2rt04s6ojlzpg4brd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwec2rt04s6ojlzpg4brd.png" alt="Image description" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The specific edit involves adding the AWS Cloud VPC CIDR range as the Destination and the recently created Peering Connection as the Target.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhr4xobbjypkj9ltbcyu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhr4xobbjypkj9ltbcyu.png" alt="Image description" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's one side of this peering relationship configured. Next, we need to edit both of the AWS Route Tables. Now the AWS cloud side has two Route Tables; the private Route Table and the public Route Table. We'll edit the public Route Table first.&lt;/p&gt;

&lt;p&gt;This time we'll need the on-premises VPC CIDR range&lt;br&gt;
as the Destination and the Peering Connection as the Target.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvvpza6nkhuoph8jqouk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpvvpza6nkhuoph8jqouk.png" alt="Image description" width="800" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we'll edit the private Route Table, again adding the on-premises VPC CIDR range&lt;br&gt;
as the Destination and the Peering Connection as the Target.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstgmxbt33kn43eq3pru3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstgmxbt33kn43eq3pru3.png" alt="Image description" width="800" height="135"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's the routing configured for both sides of this VPC Peer.&lt;/p&gt;
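&lt;p&gt;The console steps above roughly correspond to the following AWS CLI sequence. This is only a sketch: every resource ID and CIDR below is a placeholder I've made up for illustration, so substitute your own values.&lt;/p&gt;

```shell
# Create the peering request from the on-premises VPC to the AWS VPC
# (both are in the same account, so we can accept the request ourselves).
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-0onprem0000000000 \
    --peer-vpc-id vpc-0awscloud00000000

aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-0example000000000

# On-premises route table: send traffic for the AWS VPC CIDR over the peer.
aws ec2 create-route \
    --route-table-id rtb-0onprem0000000000 \
    --destination-cidr-block 10.16.0.0/16 \
    --vpc-peering-connection-id pcx-0example000000000

# AWS public and private route tables: send traffic for the
# on-premises CIDR over the peer.
aws ec2 create-route \
    --route-table-id rtb-0awspublic0000000 \
    --destination-cidr-block 192.168.10.0/24 \
    --vpc-peering-connection-id pcx-0example000000000

aws ec2 create-route \
    --route-table-id rtb-0awsprivate000000 \
    --destination-cidr-block 192.168.10.0/24 \
    --vpc-peering-connection-id pcx-0example000000000
```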

&lt;p&gt;&lt;strong&gt;&lt;u&gt;&lt;em&gt;My Internal Thoughts&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
By establishing a private connection, I could guarantee the confidentiality and integrity of the data in transit. It was like forging a secret passage between two worlds.&lt;/p&gt;

&lt;h2&gt;
  
  
  STAGE 3: Create &amp;amp; Configure the AWS Side Infrastructure (App and DB)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqlmezt0vq37fm4prlbf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffqlmezt0vq37fm4prlbf.png" alt="Image description" width="800" height="420"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now in this stage of the project, I'm provisioning all of the infrastructure at the AWS Cloud side.&lt;/p&gt;

&lt;p&gt;I started by provisioning the database within AWS Cloud. This includes an RDS subnet group and an RDS managed database implementation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllt4rq0nqt0xngeluudb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fllt4rq0nqt0xngeluudb.png" alt="Image description" width="800" height="83"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm going to configure a Single-AZ implementation of RDS. That's going to be the end state database for this application migration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rekup7o67pk932l4mzy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6rekup7o67pk932l4mzy.png" alt="Image description" width="800" height="85"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'm also going to be provisioning an EC2 instance which will function as the web application server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fco7nuzl2xxx3zqkmsmkr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fco7nuzl2xxx3zqkmsmkr.png" alt="Image description" width="800" height="1013"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We then have to update the instance's installed packages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zlv5euri99s7rzk55nd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8zlv5euri99s7rzk55nd.png" alt="Image description" width="800" height="412"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we install the web server and the MariaDB command-line tools; this pulls in Apache (httpd) along with the MariaDB client utilities.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchz3okue6c3w3alwvu31.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchz3okue6c3w3alwvu31.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to make sure that the web server is both started and set to start every time the instance reboots. Then we need to be able to transfer the content from the on-premises web server across to this server; we'll be using secure copy (scp) to perform that transfer. To make that easier, we allow logins to this EC2 instance using password authentication, and then restart the SSH daemon so the config change takes effect.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ncvy57b7li6oay1q4ma.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0ncvy57b7li6oay1q4ma.png" alt="Image description" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;
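&lt;p&gt;As a rough illustration of that SSH change (assuming a systemd-based Amazon Linux instance), here's the kind of edit involved. To keep this sketch safe to run, it operates on a local sample file named sshd_config.sample rather than the real /etc/ssh/sshd_config; on the actual instance you'd edit the real file with sudo and then restart sshd.&lt;/p&gt;

```shell
# On the real instance, make sure the web server starts now and on every
# reboot (shown as a comment since this sketch doesn't run there):
#   sudo systemctl enable --now httpd

# Flip PasswordAuthentication from "no" to "yes" so scp can use password
# logins. Demonstrated here on a local sample copy of the config file.
printf 'PasswordAuthentication no\n' > sshd_config.sample
sed -i 's/^PasswordAuthentication no$/PasswordAuthentication yes/' sshd_config.sample
cat sshd_config.sample

# On the real instance, apply the change with:
#   sudo systemctl restart sshd
```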

&lt;p&gt;Next, we SSH into the on-premises CatWeb server. We're going to copy the entire web root from this instance across to the AWS web server.&lt;/p&gt;

&lt;p&gt;To do this, we copy the HTML folder&lt;br&gt;
to the destination /var/www. Then we copy all of the WordPress assets from this server to the AWS instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdsj4r5z2i352bisfs7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdsj4r5z2i352bisfs7l.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;
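&lt;p&gt;The copy itself looks something like the following hypothetical sketch; the path and the IP address of the AWS instance are placeholders, not values from this project.&lt;/p&gt;

```shell
# From the on-premises CatWeb server: copy the web root across the peering
# connection into the home directory of ec2-user on the AWS instance.
scp -r /var/www/html ec2-user@10.16.48.10:/home/ec2-user/
```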

&lt;p&gt;Now we SSH into the awsCatWeb instance. We already copied those web assets into the home folder of the ec2-user account, so next we correct any permissions issues on the files we've just copied.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1shqkklavmgnptbwvmeu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1shqkklavmgnptbwvmeu.png" alt="Image description" width="800" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, this instance should be a functional WordPress application server, still pointing at the on-premises database server.&lt;/p&gt;

&lt;p&gt;At this point, users can connect to this EC2 instance and see the same application.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;&lt;em&gt;My Internal Thoughts&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
There was a moment where I received the Apache test page instead of the site. At first this was not a good sign, because it meant I hadn't managed to copy the WordPress HTML documents from the on-premises VPC onto the instance in the AWS cloud VPC's public subnet. Luckily, I was able to troubleshoot the issue 😅 &lt;/p&gt;

&lt;h2&gt;
  
  
  STAGE 4: Migrate Database &amp;amp; Cut over
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F079ibq5rb64rg7wqjlr0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F079ibq5rb64rg7wqjlr0.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We're going to complete a database migration&lt;br&gt;
from CatDB through to the previously created RDS instance using AWS DMS.&lt;/p&gt;

&lt;p&gt;We'll create a DMS replication instance and use it to replicate all the data from the CatDB on-premises database instance across to RDS. The replication instance acts as an intermediary, streaming the data and any changes through to the RDS instance. &lt;/p&gt;

&lt;p&gt;We start by creating a DMS subnet group covering the AWS cloud private subnets.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawunty92npwdvezz0me6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawunty92npwdvezz0me6.png" alt="Image description" width="800" height="874"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we create the replication instance. Details like selecting the correct DMS subnet group, VPC security groups, and instance class are important here.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ki5wipudj7jf84mnwqg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ki5wipudj7jf84mnwqg.png" alt="Image description" width="800" height="905"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At that point, we can go ahead and configure the endpoints. You can think of these as containers for the connection configuration of the source and destination databases.&lt;/p&gt;

&lt;p&gt;Here are the details of the Source Endpoint:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs141696p77ywrikl8v4f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs141696p77ywrikl8v4f.png" alt="Image description" width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here are the details of the Destination Endpoint:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmb1hqy9r9jxp7by4hk3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flmb1hqy9r9jxp7by4hk3.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point we want to start testing the ability of DMS to connect to both the source and the destination. After a few minutes, the status should change from testing to successful.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1pnw3a8it0qthm3mc0r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd1pnw3a8it0qthm3mc0r.png" alt="Image description" width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3klijui4th59129r71t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3klijui4th59129r71t.png" alt="Image description" width="800" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then moved on to creating a Database Migration Task. This is the DMS component that uses the replication instance together with the two endpoints we've just configured. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ffixqseltebc7uwuqem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ffixqseltebc7uwuqem.png" alt="Image description" width="800" height="636"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then went over to the Table Mappings section. The schema, which is just another way of referring to the database name (the part of the architecture that contains the tables), needs some configuration. &lt;/p&gt;

&lt;p&gt;Inside the simulated on-premises environment, on the self-managed database server, all of the data is stored within a database called a4lwordpress, so that's what we enter as the schema name. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryia985ym2oeo031nejz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fryia985ym2oeo031nejz.png" alt="Image description" width="800" height="855"&gt;&lt;/a&gt;&lt;/p&gt;
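&lt;p&gt;Under the hood, DMS stores that table-mapping configuration as JSON. A minimal selection rule for the a4lwordpress schema looks something like this (the rule ID and name are arbitrary values I've chosen):&lt;/p&gt;

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-a4lwordpress",
      "object-locator": {
        "schema-name": "a4lwordpress",
        "table-name": "%"
      },
      "rule-action": "include"
    }
  ]
}
```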

&lt;p&gt;Now, starting the replication task kicks off a full load (a full migration) from the source endpoint, catdbonpremises, which references the CatDB simulated on-premises database server. It transfers all of that data to the a4lwordpress database on the RDS instance, and the task will move through a number of different states.&lt;/p&gt;

&lt;p&gt;It starts off in the creating state, moves into the running state, and finally reaches load complete when the migration is finished. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63sumtinx7cjntcswtcd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63sumtinx7cjntcswtcd.png" alt="Image description" width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once that was complete, I reconfigured the AWS web application server (awsCatWeb) so that instead of pointing at the on-premises database instance, it points at the RDS instance. We SSH back into the instance and edit the wp-config.php file to use the RDS instance endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hr5sd1tvgxp6dnvnbx5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hr5sd1tvgxp6dnvnbx5.png" alt="Image description" width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;
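&lt;p&gt;The edit itself is a one-line change to the DB_HOST value. Here's a runnable sketch against a throwaway wp-config.php; the on-premises IP and the RDS endpoint shown are made-up examples, not values from this project.&lt;/p&gt;

```shell
# Create a minimal stand-in wp-config.php with the old on-premises DB host.
printf "define( 'DB_NAME', 'a4lwordpress' );\n" >  wp-config.php
printf "define( 'DB_HOST', '192.168.10.20' );\n" >> wp-config.php

# Point DB_HOST at the RDS endpoint instead (hypothetical endpoint name).
sed -i "s|define( 'DB_HOST', '[^']*' );|define( 'DB_HOST', 'a4lwordpress.cabcdefghijk.us-east-1.rds.amazonaws.com' );|" wp-config.php

grep 'DB_HOST' wp-config.php
```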

&lt;p&gt;Now, there's one final thing that I needed to do. WordPress has a quirk: the IP address of the server where the software was first installed and the database first provisioned is hard-coded into the database, so we need to update the database to point at the new instance's IP address.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fav4j34rw4eifeyxt5b7h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fav4j34rw4eifeyxt5b7h.png" alt="Image description" width="800" height="107"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This reads the wp-config.php file we've just edited, pulls out the database username, password, database name, and host, and uses that information to replace the old IP address stored in the database with the new instance's details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy1zkerdsaudv5adb47y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffy1zkerdsaudv5adb47y.png" alt="Image description" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;
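&lt;p&gt;Conceptually, the script boils down to generating and running an UPDATE against the wp_options table, where WordPress keeps the site URL. Here's a sketch (all IPs are hypothetical placeholders; on the instance, the generated SQL would be fed into the mysql client using the credentials read from wp-config.php):&lt;/p&gt;

```shell
# Old hard-coded URL and the new instance's URL (placeholder values).
OLD_URL='http://192.168.10.20'
NEW_URL='http://10.16.48.10'

# WordPress stores the site URL in wp_options under 'siteurl' and 'home'.
printf "UPDATE wp_options SET option_value = REPLACE(option_value, '%s', '%s') WHERE option_name IN ('siteurl', 'home');\n" \
    "$OLD_URL" "$NEW_URL" > update_urls.sql

cat update_urls.sql
# On the instance you'd run something like:
#   mysql -h RDS_ENDPOINT -u DB_USER -p a4lwordpress, feeding it update_urls.sql
```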

&lt;p&gt;All that's running at this point is the AWS-based WordPress web server, which should now be pointing at the RDS instance, which in turn should contain the migrated data copied across by the Database Migration Service.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;u&gt;&lt;em&gt;My Internal Thoughts&lt;/em&gt;&lt;/u&gt;&lt;/strong&gt;&lt;br&gt;
The heart-pounding climax of my journey was the actual database migration. AWS DMS's replication capabilities kicked into high gear, seamlessly moving data from my on-premises database to the cloud. The ability to monitor and track progress in real time provided peace of mind. And then came the epic moment of cutover, where the final switch was flipped and my application seamlessly transitioned to the cloud database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgituo3cci2otn8wcleg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgituo3cci2otn8wcleg.png" alt="Image description" width="800" height="2881"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  STAGE 5: Cleanup the Account
&lt;/h2&gt;

&lt;p&gt;As my migration story neared its conclusion, it was time to tidy up. AWS DMS allows for easy resource cleanup, ensuring that I only paid for what I used. The journey's end was met with cost-effectiveness and a sense of accomplishment.&lt;/p&gt;

&lt;p&gt;My adventure with AWS DMS was a thrilling ride through five stages of database migration. It demonstrated the power of cloud technology, making the once-daunting task feel like a heroic saga. If you're contemplating a database migration, fear not—AWS DMS is your trusty guide on this epic odyssey.&lt;/p&gt;

</description>
      <category>database</category>
      <category>programming</category>
      <category>cloud</category>
      <category>mariadb</category>
    </item>
    <item>
      <title>Learning by Doing: My Journey of Exploring a Project Similar to My Employer/Government Project</title>
      <dc:creator>itsmenilik</dc:creator>
      <pubDate>Fri, 02 Jun 2023 16:35:13 +0000</pubDate>
      <link>https://dev.to/itsmenilik/learning-by-doing-my-journey-of-exploring-a-project-similar-to-my-employergovernment-project-3bng</link>
      <guid>https://dev.to/itsmenilik/learning-by-doing-my-journey-of-exploring-a-project-similar-to-my-employergovernment-project-3bng</guid>
      <description>&lt;p&gt;This blog post shares my personal experience and work on a project that closely aligns with my current employer or government project. Feeling unfamiliar with the technical jargon and concepts being discussed, I decided to take a proactive approach by engaging in hands-on learning. I logged into my AWS (Amazon Web Services) account, provisioned the necessary resources, developed the required code, and embarked on a quest to gain a deeper understanding of the project. This article chronicles my journey, highlighting the challenges faced, the lessons learned, and the ultimate satisfaction of acquiring practical knowledge and expertise in the field. &lt;/p&gt;

&lt;p&gt;By sharing my story, I hope to inspire others to adopt a similar approach and embrace the power of experiential learning in their professional pursuits.&lt;/p&gt;

&lt;h1&gt;
  
  
  Building a Simple Data Lake on AWS: Harnessing the Power of Glue, Athena, RDS, and S3
&lt;/h1&gt;

&lt;p&gt;I highlight the process of constructing a basic data lake on Amazon Web Services (AWS) by leveraging a combination of powerful services. Below is the list of services:&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS SERVICES
&lt;/h2&gt;

&lt;p&gt;• &lt;strong&gt;&lt;em&gt;Amazon Simple Storage Service (Amazon S3)&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
• &lt;strong&gt;&lt;em&gt;AWS Glue Studio&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
• &lt;strong&gt;&lt;em&gt;AWS Glue Data Catalog&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
• &lt;strong&gt;&lt;em&gt;AWS Glue Connections&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
• &lt;strong&gt;&lt;em&gt;AWS Glue Crawlers&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
• &lt;strong&gt;&lt;em&gt;AWS Glue Jobs&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
• &lt;strong&gt;&lt;em&gt;Amazon Athena&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
• &lt;strong&gt;&lt;em&gt;AWS CLI&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcy0lvmusue0lm85vx8m7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcy0lvmusue0lm85vx8m7.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To better understand the scope of this project, it's best that I explain what is out of scope.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture (out of scope)
&lt;/h2&gt;

&lt;p&gt;• &lt;strong&gt;Change Data Capture (CDC)&lt;/strong&gt;: Handling changes to systems of record&lt;br&gt;
• &lt;strong&gt;Transactional Data Lake&lt;/strong&gt;: Table formats like Apache Hudi, Apache Iceberg, Delta Table&lt;br&gt;
• &lt;strong&gt;Fine-grained Authorization&lt;/strong&gt;: database-, table-, column-, and row-level permissions&lt;br&gt;
• &lt;strong&gt;Data Lineage&lt;/strong&gt;: Tracking data as it flows from data sources to consumption&lt;br&gt;
• &lt;strong&gt;Data Governance&lt;/strong&gt;: Managing the availability, usability, integrity and security of the data&lt;br&gt;
• &lt;strong&gt;Streaming Data&lt;/strong&gt;: Data that is generated continuously&lt;br&gt;
• &lt;strong&gt;Data Inspection&lt;/strong&gt;: Scanning data for sensitive or unexpected content (PII)&lt;br&gt;
• &lt;strong&gt;DataOps&lt;/strong&gt;: Automating testing, deployment, execution of data pipelines&lt;br&gt;
• &lt;strong&gt;Infrastructure as Code (IaC)&lt;/strong&gt;: Infrastructure provisioning automation&lt;br&gt;
• &lt;strong&gt;Data Lake Tiered Storage&lt;/strong&gt;&lt;br&gt;
• &lt;strong&gt;Backup, HA, and DR&lt;/strong&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Understanding Data Lakes: Empowering Data-Driven Insights
&lt;/h1&gt;

&lt;p&gt;A data lake is a centralized storage repository that allows organizations to store and manage vast amounts of raw and unstructured data. &lt;/p&gt;

&lt;p&gt;Unlike traditional data storage systems, data lakes accommodate data in its original format, without the need for upfront structuring or transformation. &lt;/p&gt;

&lt;p&gt;Databricks offers a framework to follow, with three layers for managing and processing data efficiently: Bronze, Silver, and Gold. &lt;/p&gt;

&lt;p&gt;The Bronze layer serves as the foundation, storing raw data directly from various sources. It provides a low-cost and reliable storage solution. &lt;/p&gt;

&lt;p&gt;The Silver layer focuses on data transformation and data quality checks. Here, data is cleaned, organized, and prepared for analysis. &lt;/p&gt;

&lt;p&gt;The Gold layer represents the final stage, where data is enriched, aggregated, and made available for business intelligence and advanced analytics. It provides a curated and optimized dataset for decision-making and extracting valuable insights. &lt;/p&gt;

&lt;p&gt;These layers work together to streamline the data pipeline and enable effective data analysis and decision-making processes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj05mw5abc9j4w5aako3y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj05mw5abc9j4w5aako3y.png" alt="Image description" width="800" height="309"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Lake Naming Conventions
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+-------------+---------------------------------------------------------------------+
| Prefix      | Description                                                         |
+-------------+---------------------------------------------------------------------+
| source_     | Data source metadata (Amazon RDS)                                   |
| bronze_     | Bronze/Raw data from data sources                                   |
| silver_     | Silver/Augmented data - raw data with initial ELT/cleansing applied |
| gold_       | Gold/Curated data - aggregated/joined refined data                  |
+-------------+---------------------------------------------------------------------+
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Most of the sample data stored in the databases comes from AWS's documentation: a small database called TICKIT, which consists of seven tables, two fact tables and five dimension tables. You can load the TICKIT dataset from CSV files into your RDS instances to start extracting, transforming, and loading the data into your S3 buckets. You can find the link to the dataset below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/redshift/latest/dg/c_sampledb.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/redshift/latest/dg/c_sampledb.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A while back, I had little knowledge about data models until I stumbled upon a fascinating illustration. It was during my exploration of the physical model design that I started to grasp the concept better. This stage proved crucial in understanding the intricate relationships, variable formatting, and schema that greatly influenced the work I was involved in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19jeext8502oy3qxasoz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F19jeext8502oy3qxasoz.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is a conceptual model of the database tables that AWS uses.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71464scd1di2fc8itbs6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F71464scd1di2fc8itbs6.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The table below provides an illustration of how the RDS databases, including PostgreSQL, MySQL, and MS SQL, can represent systems commonly used by businesses. Specifically, we highlight an event management system, an e-commerce platform, and a customer relationship management (CRM) platform. This analysis helps us gain insights into the data tracking requirements of a company and the reasons behind capturing specific information about its business operations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v2ag00y3sjh3gujitly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v2ag00y3sjh3gujitly.png" alt="Image description" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Data Preparation &amp;amp; Instructions
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Step 1:
&lt;/h2&gt;

&lt;p&gt;We're going to use three AWS Glue crawlers and AWS Glue connections to talk to our three data sources: the PostgreSQL, MySQL, and SQL Server databases. We will catalog the seven tables in the three Amazon RDS databases in our AWS Glue Data Catalog.&lt;/p&gt;
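&lt;p&gt;As a rough sketch of what this step can look like in code (the crawler, connection, and catalog database names below are hypothetical placeholders, not the project's actual names), the three crawlers can be defined with boto3 like this:&lt;/p&gt;

```python
# Hypothetical names throughout; one crawler per RDS engine.
def crawler_config(name, connection, jdbc_path, role='GlueServiceRole'):
    """Build the create_crawler request for one JDBC data source."""
    return {
        'Name': name,
        'Role': role,
        'DatabaseName': 'tickit_catalog',
        'Targets': {
            'JdbcTargets': [
                {'ConnectionName': connection, 'Path': jdbc_path}
            ]
        },
    }

# The Path pattern database/schema/% includes every table in that schema
crawlers = [
    crawler_config('postgres-crawler', 'postgres-connection', 'tickit/public/%'),
    crawler_config('mysql-crawler', 'mysql-connection', 'tickit/%'),
    crawler_config('sqlserver-crawler', 'sqlserver-connection', 'tickit/dbo/%'),
]

# To actually create and run the crawlers against AWS:
# import boto3
# glue = boto3.client('glue')
# for cfg in crawlers:
#     glue.create_crawler(**cfg)
#     glue.start_crawler(Name=cfg['Name'])
```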

&lt;h2&gt;
  
  
  Step 2:
&lt;/h2&gt;

&lt;p&gt;We're going to copy the data from our three data sources (our three databases and seven tables) into the bronze, or raw, area of our data lake using a series of AWS Glue jobs based on Apache Spark and written in Python.&lt;/p&gt;
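&lt;p&gt;A minimal sketch of one such bronze-layer job is below; the bucket, database, and table names are placeholders, and the awsglue calls (shown as comments) only run on Glue's Spark runtime, not locally:&lt;/p&gt;

```python
def bronze_path(bucket, source, table):
    """S3 target path for the raw copy of one cataloged table."""
    return 's3://{}/bronze_{}/{}/'.format(bucket, source, table)

# Inside the Glue job script itself:
# from awsglue.context import GlueContext
# from pyspark.context import SparkContext
#
# glue_context = GlueContext(SparkContext.getOrCreate())
# frame = glue_context.create_dynamic_frame.from_catalog(
#     database='tickit_catalog', table_name='tickit_public_users')
# glue_context.write_dynamic_frame.from_options(
#     frame=frame,
#     connection_type='s3',
#     connection_options={'path': bronze_path('my-datalake', 'postgres', 'users')},
#     format='parquet')

print(bronze_path('my-datalake', 'postgres', 'users'))
```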

&lt;h2&gt;
  
  
  Step 3:
&lt;/h2&gt;

&lt;p&gt;We will cleanse, augment, and prepare the data for analytics using a series of AWS Glue jobs. The data will be written into the silver area of our data lake, also in Apache Parquet format. Once again, the refined, or silver, data will be cataloged in our AWS Glue Data Catalog.&lt;/p&gt;
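&lt;p&gt;To give a feel for what "cleanse and augment" means here, below is a toy, pure-Python illustration of the kind of record-level rules a silver-layer job applies. The column names follow the TICKIT listing table, but the rules themselves are hypothetical:&lt;/p&gt;

```python
from datetime import datetime

def cleanse_listing(record):
    """Apply simple cleansing rules to one raw listing record."""
    cleaned = dict(record)
    # Trim stray whitespace from string columns
    for key, value in cleaned.items():
        if isinstance(value, str):
            cleaned[key] = value.strip()
    # Cast numeric strings to proper types
    cleaned['numtickets'] = int(cleaned['numtickets'])
    cleaned['priceperticket'] = float(cleaned['priceperticket'])
    # Normalize the timestamp to ISO 8601
    cleaned['listtime'] = datetime.strptime(
        cleaned['listtime'], '%m/%d/%Y %H:%M:%S').isoformat()
    return cleaned

raw = {'listid': 1, 'numtickets': ' 4 ', 'priceperticket': '25.00',
       'listtime': '01/01/2008 09:00:00'}
print(cleanse_listing(raw))
```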

&lt;h2&gt;
  
  
  Step 4:
&lt;/h2&gt;

&lt;p&gt;Lastly, we will use Amazon Athena to produce curated data sets by joining several tables in the silver area of our data lake. We will produce multiple views of the data and partition the data based on the most common query and filtering patterns of our end users. These curated data sets will be written back to the gold, or curated, section of our data lake as partitioned Apache Parquet files.&lt;/p&gt;
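&lt;p&gt;A sketch of how such a curated data set might be produced (the table, bucket, and column names here are illustrative, not the project's actual ones) is to build an Athena CTAS statement that writes partitioned Parquet into the gold area, then submit it with boto3:&lt;/p&gt;

```python
def gold_ctas(table, select_sql, bucket, partition_cols):
    """Build an Athena CTAS statement writing partitioned Parquet to the gold area."""
    partitions = ', '.join("'{}'".format(col) for col in partition_cols)
    return (
        "CREATE TABLE {table} WITH (\n"
        "  external_location = 's3://{bucket}/gold_{table}/',\n"
        "  format = 'PARQUET',\n"
        "  partitioned_by = ARRAY[{partitions}]\n"
        ") AS\n"
        "{select_sql}"
    ).format(table=table, bucket=bucket, partitions=partitions,
             select_sql=select_sql)

# Partition columns must come last in the SELECT list for Athena CTAS
query = gold_ctas(
    'sales_summary',
    'SELECT eventid, SUM(pricepaid) AS revenue, saledate\n'
    'FROM silver_sales GROUP BY eventid, saledate',
    'my-datalake',
    ['saledate'])

# To submit the statement:
# import boto3
# athena = boto3.client('athena')
# athena.start_query_execution(
#     QueryString=query,
#     ResultConfiguration={'OutputLocation': 's3://my-datalake/athena-results/'})
print(query)
```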

&lt;h1&gt;
  
  
  FINAL SUMMARY
&lt;/h1&gt;

&lt;p&gt;In this demonstration we built a Data Lake on AWS:&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Using AWS glue and Amazon Athena we extracted data from our databases from Amazon RDS databases which represented our Enterprise systems (MySQL, Postgress 

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;We combined all those into our Data Lake into a bronze bucket of our Data Lake (raw data)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We then refined, augmented, and cleansed that data and wrote that into the silver bucket of our Data Lake (structured data)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;We then use that silver data to create curated data sets doing complex joins and aggregations and SQL functions on that data and wrote that data into the augmented or gold area of our data Lake (aggregated data)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then to finish up our demonstration we looked at some ways in which we can rate more efficient queries against our data Lake using Amazon Athena&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h1&gt;
  
  
  Lessons Learned
&lt;/h1&gt;


&lt;p&gt;Embarking on this project was not without its fair share of obstacles and hurdles. This section aims to shed light on the various issues and struggles encountered throughout the journey towards project completion. To be quite frank, this is where I list all of the complaints and frustrations I had along the way. Enjoy hahaha:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Difficulty in understanding and configuring AWS Glue's various components and services.&lt;/li&gt;
&lt;li&gt;Issues in correctly configuring AWS Glue crawlers to establish connections with data sources.&lt;/li&gt;
&lt;li&gt;Challenges in setting up the necessary IAM roles and permissions for AWS Glue jobs.&lt;/li&gt;
&lt;li&gt;Lack of clear documentation or examples for specific use cases, leading to trial and error.&lt;/li&gt;
&lt;li&gt;Compatibility issues between different versions of AWS Glue and related dependencies.&lt;/li&gt;
&lt;li&gt;Troublesome configuration of database connection parameters such as credentials and endpoint URLs.&lt;/li&gt;
&lt;li&gt;Incompatibility between the IDE and database drivers, resulting in connection failures.&lt;/li&gt;
&lt;li&gt;Insufficient knowledge of database-specific connection options and configurations.&lt;/li&gt;
&lt;li&gt;Difficulty in troubleshooting connection issues due to limited error messages or log details.&lt;/li&gt;
&lt;li&gt;Delays caused by the need to navigate complex networking setups involving security groups, subnets, and internet gateways.&lt;/li&gt;
&lt;li&gt;Confusion in properly configuring security group rules to allow database connections from specific IP addresses or ranges.&lt;/li&gt;
&lt;li&gt;Lack of familiarity with network access control lists (ACLs) and their impact on connectivity.&lt;/li&gt;
&lt;li&gt;Misconfigurations of route tables and subnets, leading to failed network communications.&lt;/li&gt;
&lt;li&gt;Troublesome configuration of NAT gateways or instances for outbound internet access from private subnets.&lt;/li&gt;
&lt;li&gt;Firewall restrictions preventing successful connection establishment.&lt;/li&gt;
&lt;li&gt;Network latency issues affecting the responsiveness and performance of the database connection.&lt;/li&gt;
&lt;li&gt;Difficulties in maintaining consistent and reliable connectivity across different availability zones.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Despite the numerous challenges faced throughout this project, the journey has been incredibly rewarding. By diving headfirst into hands-on learning and persevering through the struggles, I have gained valuable insights, expertise, and a sense of accomplishment. This experience serves as a testament to the power of determination, adaptability, and the willingness to learn by doing.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Unleash Your Cloud Potential: AWS Re:Invent 2022's Latest Services</title>
      <dc:creator>itsmenilik</dc:creator>
      <pubDate>Wed, 08 Mar 2023 21:33:34 +0000</pubDate>
      <link>https://dev.to/itsmenilik/unleash-your-cloud-potential-aws-reinvent-2022s-latest-services-11lb</link>
      <guid>https://dev.to/itsmenilik/unleash-your-cloud-potential-aws-reinvent-2022s-latest-services-11lb</guid>
      <description>&lt;p&gt;AWS Re:Invent 2022 introduced several new services that can help businesses reduce costs, improve performance, and enhance their data analysis capabilities. Here's a look at some of the latest services and how they can benefit businesses.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Amazon S3 One Zone Infrequent Access&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Amazon S3 One Zone-Infrequent Access is a storage class that provides a lower-cost option for storing data that is not frequently accessed and doesn't need the resilience of multiple Availability Zones. This storage class is ideal for businesses that need to store large amounts of data but don't require it to be highly available.&lt;/p&gt;

&lt;p&gt;To use this storage class, upload objects to an S3 bucket with "ONEZONE_IA" as the storage class (the storage class is set per object, not per bucket). Here's an example code snippet in Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')

response = bucket.create(
    ACL='private',
    CreateBucketConfiguration={
        'LocationConstraint': 'us-west-2'
    },
    ObjectLockEnabledForBucket=True,
    StorageClass='ONEZONE_IA'
)

print(response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Amazon Elastic Inference for Amazon ECS&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Amazon Elastic Inference for Amazon ECS is a new service that provides GPU acceleration for containerized applications. This service can help businesses improve their application's performance without the need for additional infrastructure.&lt;/p&gt;

&lt;p&gt;To use this service, declare an Elastic Inference accelerator in your existing ECS task definition and reference it from your container definition. Here's an example JSON snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "ipcMode": null,
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-container",
      "image": "my-image",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "cpu": 256,
      "memory": 512,
      "elasticInferenceAccelerators": [
        {
          "deviceName": "elastic-inference-1",
          "deviceType": "eia1.medium"
        }
      ]
    }
  ],
  "memory": "512",
  "taskRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "family": "my-task-family"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Amazon QuickSight Q&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Amazon QuickSight Q is a new service that uses natural language querying to help users find insights and answers from their data. This service can help businesses improve their data analysis and decision-making capabilities.&lt;/p&gt;

&lt;p&gt;To use this service, log in to the QuickSight console and select "New analysis." Then, select "QuickSight Q" as the data source and enter your natural language query. Here's an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Show me sales by product category
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Amazon RDS Proxy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Amazon RDS Proxy is a new service that allows businesses to improve the scalability and performance of their database workloads. This service enables applications to use connection pooling to manage connections to their databases, reducing the overhead associated with establishing new connections.&lt;/p&gt;

&lt;p&gt;To use this service, create a new proxy endpoint for your database and configure your application to use the proxy endpoint. Here's an example code snippet in Node.js:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const AWS = require('aws-sdk');
const rdsDataService = new AWS.RDSDataService();

const endpoint = 'my-database-proxy.endpoint.us-west-2.rds.amazonaws.com';
const database = 'my-database';
const user = 'my-user';
const password = 'my-password';

const params = {
  resourceArn: `arn:aws:rds:${process.env.AWS_REGION}:${process.env.AWS_ACCOUNT_ID}:cluster:${process.env.DB_CLUSTER_IDENTIFIER}`,
  secretArn: `arn:aws:secretsmanager:${process.env.AWS_REGION}:${process.env.AWS_ACCOUNT_ID}:secret:${process.env.DB_SECRET_NAME}`,
  sql: `SELECT * FROM my_table`,
  database,
};

const executeStatement = async () =&amp;gt; {
  const results = await rdsDataService.executeStatement(params).promise();
  console.log(results);
};

executeStatement();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;AWS Security Hub&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AWS Security Hub is a new service that provides a centralized view of security alerts and compliance status across AWS accounts. This service can help businesses improve their security posture and simplify compliance reporting.&lt;/p&gt;

&lt;p&gt;To use this service, enable Security Hub in your AWS account and configure your AWS resources to send security and compliance data to Security Hub. Here's an example code snippet in Python:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

securityhub = boto3.client('securityhub')

response = securityhub.batch_import_findings(
    Findings=[
        {
            'SchemaVersion': '2018-10-08',
            'Id': 'example-finding-1',
            'ProductArn': 'arn:aws:securityhub:us-west-2:123456789012:product/123456789012/default',
            'GeneratorId': 'example-generator-1',
            'AwsAccountId': '123456789012',
            'Types': [
                'Software and Configuration Checks/Vulnerabilities/CVE'
            ],
            'CreatedAt': '2022-01-01T00:00:00Z',
            'UpdatedAt': '2022-01-01T00:00:00Z',
            'Severity': {
                'Product': 4
            },
            'Title': 'Example Finding',
            'Description': 'This is an example finding',
            'Resources': [
                {
                    'Type': 'AwsEc2Instance',
                    'Id': 'i-0123456789abcdef0'
                }
            ],
            'Compliance': {
                'Status': 'FAILED',
                'StatusReasons': [
                    {
                        'ReasonCode': 'CERTIFICATE_AUTHORITY_ACCESS',
                        'Description': 'Certificate authorities (CA) used by the resource are not accessible.'
                    }
                ],
                'RelatedRequirements': [
                    'PCI DSS 3.2.1'
                ]
            }
        }
    ]
)

print(response)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AWS Re:Invent 2022 introduced several new services that can help businesses improve their data analysis capabilities, reduce costs, and enhance security. By leveraging these services, businesses can improve their operational efficiency and stay ahead of the competition.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Unlocking the Secrets of AWS EKS: How I Built a Scalable and Resilient User and Email Service Application</title>
      <dc:creator>itsmenilik</dc:creator>
      <pubDate>Thu, 19 Jan 2023 19:52:41 +0000</pubDate>
      <link>https://dev.to/itsmenilik/unlocking-the-secrets-of-aws-eks-how-i-built-a-scalable-and-resilient-user-and-email-service-application-2mo9</link>
      <guid>https://dev.to/itsmenilik/unlocking-the-secrets-of-aws-eks-how-i-built-a-scalable-and-resilient-user-and-email-service-application-2mo9</guid>
      <description>&lt;p&gt;Introduction:&lt;/p&gt;

&lt;p&gt;In today's fast-paced digital world, building a scalable and resilient application is crucial for any business. As more and more companies are moving their workloads to the cloud, it's essential to have a solid understanding of cloud technologies and best practices. In this blog post, I will show you how I built a scalable and resilient User and Email Service application on AWS EKS.&lt;/p&gt;

&lt;p&gt;Background:&lt;/p&gt;

&lt;p&gt;Recently, I worked on a project to create a User and Email Service application that allows users to sign up for a new account and receive a welcome email. The application was built using Python, Kafka, and Kubernetes. The User Service publishes a message to a "Provision User" topic, and the Email Service consumes that message about the new user and sends them a welcome email. The two services never message each other directly; their respective jobs are executed asynchronously.&lt;/p&gt;

&lt;p&gt;Building on AWS EKS:&lt;/p&gt;

&lt;p&gt;AWS EKS (Elastic Kubernetes Service) is a managed Kubernetes service that makes it easy to deploy, scale, and manage containerized applications using Kubernetes on AWS. To build our User and Email Service application on AWS EKS, we first had to create an EKS cluster. We used the AWS management console to create a new EKS cluster named "my-eks-cluster" and selected the desired VPC and subnets for the cluster. We also used an existing security group named "eks-cluster-sg" and an existing IAM role named "eks-cluster-role" for the cluster.&lt;/p&gt;

&lt;p&gt;Next, we had to deploy a Kafka cluster on the EKS cluster. We used a Helm chart to deploy the Kafka cluster and created a namespace named "kafka-ns" for it. We also created a values.yaml file for the chart, specifying the number of replicas, resources, and other configuration options. We then used Helm to install the chart, passing in the values file and the "kafka-ns" namespace.&lt;/p&gt;
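&lt;p&gt;As a rough idea of what such a values.yaml can contain (the keys below are illustrative; the exact names depend on the Kafka chart you use, so check its documented values first):&lt;/p&gt;

```yaml
# Hypothetical values.yaml for a Kafka Helm chart
replicaCount: 3
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 2Gi
persistence:
  enabled: true
  size: 10Gi
```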

&lt;p&gt;After deploying the Kafka cluster, we created and configured the necessary Kubernetes resources, such as Services and Deployments, to deploy the User service and Email service to the EKS cluster. We created a new namespace named "user-email-ns" for the User service and Email service. We created a Deployment resource for each service, specifying the number of replicas, resources, and other configuration options. We also created a Service resource for each service, specifying the desired type (ClusterIP, NodePort, LoadBalancer, etc.) and other configuration options. We then applied the resources to the cluster using kubectl or Helm.&lt;/p&gt;

&lt;p&gt;We also created and configured an AWS Elastic Load Balancer to route traffic to the User service and Email service. We used an existing Elastic Load Balancer named "my-elb" and updated the Service resource for each service to use the load balancer "my-elb".&lt;/p&gt;

&lt;p&gt;For persistence and other dependencies, we used an existing RDS instance named "my-rds" and an existing S3 bucket named "my-s3-bucket". We also created and configured the necessary IAM roles and policies to allow the User service and Email service to access the necessary AWS resources.&lt;/p&gt;

&lt;p&gt;For monitoring, we set up alarms and autoscaling rules, and also created and configured a CI/CD pipeline to automate the deployment of new versions of the application.&lt;/p&gt;
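&lt;p&gt;For instance, one way an autoscaling rule like that can be expressed in Kubernetes itself is a HorizontalPodAutoscaler (the thresholds below are illustrative, not the values we used):&lt;/p&gt;

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service
  namespace: user-email-ns
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```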

&lt;p&gt;Here is an example of a full production environment code package that uses Python, Kafka Streams, and Kubernetes to create a new user in an application:&lt;/p&gt;

&lt;p&gt;Code:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User Service:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers='kafka-cluster:9092')

def create_user(user_data):
    producer.send('Provision User', key=user_data['id'], value=user_data)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Email Service:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from kafka import KafkaConsumer
from email.mime.text import MIMEText
import smtplib

consumer = KafkaConsumer('Provision User', bootstrap_servers='kafka-cluster:9092')

def send_welcome_email(user_data):
    message = MIMEText('Welcome, {}!'.format(user_data['name']))
    message['To'] = user_data['email']
    message['Subject'] = 'Welcome to our service!'
    smtp_server = smtplib.SMTP('smtp.example.com')
    smtp_server.send_message(message)

for msg in consumer:
    send_welcome_email(msg.value)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Kubernetes Configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster
  labels:
    app: kafka-cluster
spec:
  ports:
    - name: kafka
      port: 9092
      targetPort: 9092
  selector:
    app: kafka-cluster
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-cluster
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kafka-cluster
  template:
    metadata:
      labels:
        app: kafka-cluster
    spec:
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:5.5.1
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ADVERTISED_LISTENERS
              value: PLAINTEXT://kafka-cluster:9092
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-cluster:2181
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: user-service
  labels:
    app: user-service
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: user-service
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: python:3.9
          ports:
            - containerPort: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  namespace: user-email-ns
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
      - name: user-service
        image: python:3.9
        ports:
        - containerPort: 80
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration creates a Deployment named "user-service" in the "user-email-ns" namespace. The Deployment creates 3 replicas of a Pod running the "python:3.9" image and exposing port 80.&lt;/p&gt;

&lt;p&gt;Here is an example of the Kubernetes configuration for the User service Service:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: user-service
  namespace: user-email-ns
spec:
  selector:
    app: user-service
  ports:
  - name: http
    port: 80
    targetPort: 80
  type: LoadBalancer

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configuration creates a Service named "user-service" in the "user-email-ns" namespace. The Service routes traffic to Pods with the label "app: user-service" on port 80 and exposes the service on a LoadBalancer.&lt;/p&gt;

&lt;p&gt;In this blog post, I have shown you how I built a scalable and resilient User and Email Service application on AWS EKS. We have used Python, Kafka Streams, and Kubernetes to build the application and AWS EKS to deploy it to the cloud. We also used various other AWS services such as RDS, S3, and Elastic Load Balancer to provide persistence and other dependencies. This is just a small example of what is possible with AWS EKS and the possibilities are endless. It's important to continuously learn new cloud technologies and best practices to stay ahead in the game.&lt;/p&gt;

&lt;p&gt;NOTE: This project is overly simplified and does not accurately reflect the complexity and challenges of building a production-ready application on AWS EKS.&lt;/p&gt;

</description>
      <category>rust</category>
      <category>vue</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>AWS is Everywhere 🤩</title>
      <dc:creator>itsmenilik</dc:creator>
      <pubDate>Thu, 06 Oct 2022 19:32:05 +0000</pubDate>
      <link>https://dev.to/itsmenilik/aws-is-everywhere-411p</link>
      <guid>https://dev.to/itsmenilik/aws-is-everywhere-411p</guid>
      <description>&lt;p&gt;&lt;strong&gt;Fair warning: Long Post&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;TLDR: AWS is everywhere 🤩&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;This has happened to me on multiple occasions. I tend to find out that AWS supports every business that has had a huge impact in my life!&lt;/p&gt;

&lt;p&gt;To paint you a picture, any time I interact with a web service, I ask myself, “What are the odds that AWS is involved in the infrastructure of this service?” I have asked myself this question every day since I got certified.&lt;/p&gt;

&lt;p&gt;I always knew, with a great degree of confidence, that the odds were about 99%. Still, every time I guess right, I'm in for a surprise.&lt;/p&gt;

&lt;p&gt;It wasn’t until recently that my love for the cloud was reinvigorated! My daily routine consists of going to a dance studio to train in all styles of dance. The studio had its 4th anniversary, and to celebrate, they gave out free demo classes, gift bags, snacks, merch, and more.&lt;/p&gt;

&lt;p&gt;They also had a section in the studio where you could take group pictures. I thought it would be good for me to interact with others and get my picture taken with them. The experience was like a photo booth, but without being constrained to a small box. Everyone was having the time of their lives, wearing funky gear to make the picture-taking experience that much more enjoyable. I wore the gear too. Everyone was happy. From my perspective, it was an alright experience. The funny part was that I was happy for reasons other than the picture: my enjoyment came from understanding the camera!&lt;/p&gt;

&lt;p&gt;Once we took our pictures, the photographer provided us with a QR code to download a digital copy. The camera was integrated with the screen, and I scanned the QR code from the screen to get a copy. As I watched the web link load on my phone, I realized I had gained deeper insight than I initially expected. I checked which domain the QR code led me to. To my surprise, the picture we took was stored in an AWS S3 bucket! Everyone was accessing the photographer's pictures from the company's S3 bucket.&lt;/p&gt;

&lt;p&gt;To add icing on the cake, I wanted to know whether the S3 bucket still stored my picture, so I requested the web link once more. It wasn’t there! The company had set a lifecycle configuration that required the bucket to delete the object after a set amount of time. For context, I accessed the link a day after I first requested it.&lt;/p&gt;
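&lt;p&gt;For the curious, a lifecycle rule of that kind can be sketched with boto3 roughly like this. The bucket name and prefix are my guesses, not the company's actual setup:&lt;/p&gt;

```python
# Expire objects one day after creation; all names here are hypothetical.
lifecycle_config = {
    'Rules': [
        {
            'ID': 'expire-event-photos',
            'Filter': {'Prefix': 'photos/'},
            'Status': 'Enabled',
            'Expiration': {'Days': 1},
        }
    ]
}

# To apply the rule to a bucket:
# import boto3
# s3 = boto3.client('s3')
# s3.put_bucket_lifecycle_configuration(
#     Bucket='studio-photos', LifecycleConfiguration=lifecycle_config)
print(lifecycle_config['Rules'][0]['Expiration'])
```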

&lt;p&gt;This is only one of the many experiences I’ve had where I got to see how others are using AWS to handle their business needs. It's the kind of experience where I felt like all of the knowledge I've acquired throughout my life wasn't for nothing, and that every action I took in my past to get to this point has allowed me to have a different perspective. It's the kind of perspective that has challenged my beliefs for the better. It's always exciting to see what the future has in store, not just for me but for others! How else will AWS get involved? &lt;/p&gt;

&lt;p&gt;I’m well aware that others are using Amazon Web Services. Even as I’m writing this, I'm 100% confident that someone other than me is having a similar euphoric experience. I’m just eternally grateful to have the capacity not only to see the bigger picture, but to have the knowledge and wisdom to understand what's happening around me. My eyes truly opened up when I dove deeper into the cloud.&lt;/p&gt;

&lt;p&gt;What other cloud related experiences will I find along the way?!?!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>DANCE WITH ME</title>
      <dc:creator>itsmenilik</dc:creator>
      <pubDate>Thu, 26 May 2022 17:35:22 +0000</pubDate>
      <link>https://dev.to/itsmenilik/dance-with-me-53l7</link>
      <guid>https://dev.to/itsmenilik/dance-with-me-53l7</guid>
      <description>&lt;h1&gt;
  
  
  BUT WHY DOE!
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;(Scroll Down for the link to my Dance Challenge Website)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Simply, why? &lt;strong&gt;What is it about moving our bodies to a song we love?&lt;/strong&gt; Why do we watch videos, obsess over our reflection in the kitchen window, and yes, take lessons to perfect something that could easily be labeled as trivial? Why do we put ourselves through the physical fatigue and the occasional social awkwardness just to call ourselves dancers? Why do we love it so?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv326vy8vjgstvjnatuw.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftv326vy8vjgstvjnatuw.gif" alt="Image description" width="640" height="640"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are many answers. For physical fitness, mental clarity, emotional stability, and other such pluses. &lt;/p&gt;

&lt;p&gt;I will be frank: I dance because I like to dance. It's that clear-cut. Every time I learn a new element of dancing, my curiosity is piqued. There is always new material to witness and learn on the way to becoming an experienced dancer, which is why I'll keep dancing till I meet my end. It is my passion. &lt;/p&gt;

&lt;h1&gt;
  
  
  OKAY GO ON
&lt;/h1&gt;

&lt;p&gt;For those of you that got this far, thank you. Some of you might not know what your passion is yet. It can be quite cumbersome to live for so long without discovering yours. Not knowing the one thing you can't stop thinking about, that thing you wake up thinking about in the morning, go to sleep thinking about at night, that thing that you would do for free! For those of you that enjoy dancing or are willing to try something new, check out my self-built website &lt;a href="https://www.thelightchallenge.ml/" rel="noopener noreferrer"&gt;The Light Challenge&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frdq4o3f4c0rsc3v3hyv6.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frdq4o3f4c0rsc3v3hyv6.PNG" alt="Image description" width="800" height="391"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h1&gt;
  
  
  WAIT YOU BUILT THIS?
&lt;/h1&gt;

&lt;p&gt;Besides having a love for dance, I enjoy the cloud! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9u1gsqxdeq3w5afq67xx.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9u1gsqxdeq3w5afq67xx.gif" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Specifically AWS (Amazon Web Services). Since I took the time to learn about the cloud, I thought I'd leverage my skills and relate them to dance.&lt;/p&gt;

&lt;h2&gt;
  
  
  CLOUD CRASH COURSE
&lt;/h2&gt;

&lt;p&gt;Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform, offering over 200 fully featured services from data centers globally. Cloud computing is the on-demand delivery of compute, database, storage, applications, and other IT resources through a cloud services platform (like AWS) over the internet, with pay-as-you-go pricing.  &lt;/p&gt;

&lt;h3&gt;
  
  
  BUT HOW DUDE!
&lt;/h3&gt;

&lt;p&gt;I'm glad you asked. Some of the AWS services I used include AWS Certificate Manager, S3, Route 53, and CloudFront. The only other resource that wasn't from AWS was the free domain name (a unique address for my website) I obtained from &lt;a href="https://www.freenom.com/en/index.html?lang=en" rel="noopener noreferrer"&gt;freenom.com&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9abnep71kp5i6qt0zqw2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9abnep71kp5i6qt0zqw2.png" alt="freenom" width="800" height="279"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is a brief description of each service I used:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Certificate Manager:&lt;/strong&gt; A service that provisions, manages, and deploys the SSL/TLS certificates used to secure your websites and AWS resources. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simple Storage Service (S3):&lt;/strong&gt; An object storage service offering industry-leading scalability, data availability, security, and performance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Route 53:&lt;/strong&gt; A highly available and scalable cloud Domain Name System (DNS) web service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudFront:&lt;/strong&gt; A content delivery network (CDN) service built for high performance, security, and developer convenience. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Freenom:&lt;/strong&gt; A domain registrar offering free domain names.&lt;/li&gt;
&lt;/ul&gt;
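
&lt;p&gt;For a sense of how these pieces fit together in code: an S3 static website typically needs a public-read bucket policy. Here is a minimal Python sketch of building one; the bucket name is a made-up placeholder, not the site's actual bucket.&lt;/p&gt;

```python
import json

def static_site_bucket_policy(bucket_name):
    """Build the public-read policy an S3 static website typically needs.

    The bucket name is a placeholder -- substitute your own.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                # Grant read access to every object in the bucket
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }

policy = static_site_bucket_policy("thelightchallenge-site")
print(json.dumps(policy, indent=2))
```

&lt;p&gt;CloudFront then serves the bucket's content over HTTPS using the ACM certificate, and Route 53 points the domain at the distribution.&lt;/p&gt;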

&lt;h2&gt;
  
  
  ENOUGH READING!!!
&lt;/h2&gt;

&lt;p&gt;Like the heading says: enough reading! Time for you to start dancing. If you want to know more about how I made this website, reach out.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Send me a direct message below or follow me for more content&lt;/em&gt;&lt;br&gt;
&lt;strong&gt;LinkedIn&lt;/strong&gt;: &lt;a href="https://www.linkedin.com/in/meniliklemma/" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/meniliklemma/&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Instagram&lt;/strong&gt;: &lt;a href="https://www.instagram.com/itsmenilik/" rel="noopener noreferrer"&gt;https://www.instagram.com/itsmenilik/&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;TikTok&lt;/strong&gt;: &lt;a href="https://www.tiktok.com/@itsmenilik" rel="noopener noreferrer"&gt;https://www.tiktok.com/@itsmenilik&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>programming</category>
      <category>tutorial</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Hide Your Keys Hide Your Data</title>
      <dc:creator>itsmenilik</dc:creator>
      <pubDate>Mon, 13 Dec 2021 16:53:33 +0000</pubDate>
      <link>https://dev.to/itsmenilik/hide-your-keys-hide-your-data-ihf</link>
      <guid>https://dev.to/itsmenilik/hide-your-keys-hide-your-data-ihf</guid>
      <description>&lt;h2&gt;
  
  
  CONTEXT CONTEXT CONTEXT
&lt;/h2&gt;

&lt;p&gt;The more I learn about the cloud, the more excited I get about my journey. Let me paint a picture for you. Imagine getting the opportunity to go through a multitude of trainings. The kind of opportunity that harbors 継続的改善 (Keizoku-Teki Kaizen), continuous improvement. You are given a long list of courses/bootcamps/classes to choose from. All of these courses are categorized by different topics that relate to the Information Technology industry (Digital Analytics, Agile Value Systems, Cybersecurity, Project Management, etc.). Each course is praised by its own institute. As you read through the many options on this list, you start to wonder why there is such an abundance. That is neither here nor there. After looking through such a lengthy list, only a couple of courses catch your eye: the Cloud Penetration Testing Boot-camp &amp;amp; the Advanced Cloud Security Practitioner course. &lt;/p&gt;

&lt;h2&gt;
  
  
  BOOM WE GOT A WINNER!
&lt;/h2&gt;

&lt;p&gt;You might ask yourself, "Why these specific courses?" The answer is simple: because they are fun topics to me. In this post, we will focus on Cloud Penetration Testing first and get to Cloud Security later on. The material below covers testing and security in the cloud generally, but we will focus on AWS. Here's why:&lt;/p&gt;

&lt;h2&gt;
  
  
  BE A PEST TO TEST, STATS ARE THE BEST
&lt;/h2&gt;

&lt;p&gt;Fun fact: 85% of businesses worldwide are already making use of cloud technology to store information. This is because cloud computing allows more agility and flexibility for companies that don't want all their data on premises. As more and more services are hosted in the cloud, the need to adequately test the security measures of cloud hosts will increase. Here are a few other statistics that blew my mind:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;80% of breaches involve privileged credentials&lt;/li&gt;
&lt;li&gt;Cloud-based BEC email scams cost US businesses &lt;strong&gt;2.1 BILLION dollars&lt;/strong&gt; in 2020&lt;/li&gt;
&lt;li&gt;Through 2022, at least &lt;strong&gt;95%&lt;/strong&gt; of cloud &lt;strong&gt;security failures&lt;/strong&gt; are predicted to be the &lt;strong&gt;customer's fault&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;84% of organizations say traditional security solutions don't work in cloud environments&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbo83grxqf7gu0g3krbi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvbo83grxqf7gu0g3krbi.png" alt="Image description" width="" height=""&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  GET TO THE CODE ALREADY!!
&lt;/h2&gt;

&lt;p&gt;Hold your horses. In this post, we will be using &lt;a href="https://github.com/RhinoSecurityLabs/cloudgoat" rel="noopener noreferrer"&gt;CloudGoat&lt;/a&gt;, Rhino Security Labs' "Vulnerable by Design" AWS deployment tool. It allows you to hone your cloud cybersecurity skills by creating and completing several "capture-the-flag" style scenarios. &lt;/p&gt;

&lt;p&gt;We will go through a concept called enumeration. Consider enumeration as “Information Gathering”: a process where an attacker establishes an active connection with a victim and tries to discover as many attack vectors as possible, which can then be used to exploit systems further. In this example, we will be working within AWS. When attacking an AWS cloud environment, it's important to leverage unauthenticated enumeration whenever possible. This kind of IAM recon can help you gain a better understanding of the environment itself, the users and applications that are using the AWS environment, and other information. IAM roles and other ‘insider knowledge’ are key for any cloud penetration test.&lt;/p&gt;

&lt;p&gt;It's possible to enumerate IAM users and roles without any keys (or other inside knowledge) to a target account. This allows for information gathering that can potentially expose who is using the environment, what third-party services are being utilized, which AWS services are in use, and more. This all happens without any logs (CloudTrail or otherwise) being created in the victim’s account.&lt;/p&gt;

&lt;p&gt;There was an old attack method used to enumerate the existence of IAM roles. It relied on the Security Token Service (STS) AssumeRole API, which allowed enumeration because it would return a different error message depending on whether or not the role existed.&lt;/p&gt;

&lt;p&gt;When attempting to assume a role that existed, but that you didn’t have permission to assume, the API would return a message like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;An error occurred (AccessDenied) when calling the 'AssumeRole' operation: User: arn:aws:iam::012345678901:user/MyUser is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::111111111111:role/aws-service-role/rds.amazonaws.com/AWSServiceRoleForRDS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This AWS error message is basically saying that the user “MyUser” in the account “012345678901” is not allowed to assume the role “AWSServiceRoleForRDS” in the account “111111111111”. This message revealed that the role existed.&lt;/p&gt;

&lt;p&gt;If the role did not exist, then the following error message would be returned instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;An error occurred (AccessDenied) when calling the AssumeRole operation: Not authorized to perform sts:AssumeRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since this research was released, AWS security made a change to the API so that the STS AssumeRole API returns the same error message regardless of whether the role exists or not. Now, you will see this error message instead:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;An error occurred (AccessDenied) when calling the AssumeRole operation: Access denied
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
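
&lt;p&gt;The old enumeration oracle boils down to classifying these three error messages. A simplified Python sketch (illustrative only, not a real pentest tool):&lt;/p&gt;

```python
def role_exists_from_error(error_message):
    """Classify STS AssumeRole error messages, mirroring the oracle above.

    Returns True if the message implies the role exists, False if it does
    not, and None for the newer generic message that leaks nothing.
    """
    if "is not authorized to perform: sts:AssumeRole on resource:" in error_message:
        return True   # detailed denial named the target role, so it exists
    if "Not authorized to perform sts:AssumeRole" in error_message:
        return False  # old generic denial meant the role did not exist
    if error_message.endswith("Access denied"):
        return None   # current behavior: identical message either way
    raise ValueError("unrecognized error message")
```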



&lt;p&gt;However, there is a new attack method that can still enumerate roles in other AWS accounts. This method involves IAM role trust policies. When setting up an IAM role trust policy, you are specifying what AWS resources/services can assume that role and gain temporary credentials. Let’s consider the following IAM role trust policy, which allows the “Test” role from the account ID “216825089941” to assume it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::216825089941:role\/Test"},"Action":"sts:AssumeRole"}]}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we look at the trust relationships tab in the IAM web console, this is what we see:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgjg6mmaa0ompbhwfmso.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgjg6mmaa0ompbhwfmso.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, if we go and delete the “Test” role, then look at the trust relationships page again, we will see something different:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvanpic5ga31dsgzclp1f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvanpic5ga31dsgzclp1f.png" alt="Image description" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, if we hit “Edit trust relationship”, we will see that same value specified as the principal in the trust policy, but if we click “Update Trust Policy”, we will be shown this error message:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flj90lg44g4gowz0598db.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flj90lg44g4gowz0598db.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, what happened? We didn’t change the value, but it was changed to an invalid value automatically. This is because when you save the trust policy document of a role, AWS security will find the resource specified in the principal somewhere in AWS to ensure that it exists. If the resource is found, the trust policy will save successfully, but if it is not found, then an error will be thrown, indicating an invalid principal was supplied. When AWS does this on the back-end, it takes the ARN that you supplied (“arn:aws:iam::216825089941:role/Test” for us) and matches it to a unique identifier that correlates to the resource in AWS. Then, when we deleted the “Test” role, AWS was no longer able to match the ARN we specified to an AWS resource, so by default, it will replace the ARN with the unique ID that was associated with that resource originally.&lt;/p&gt;
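
&lt;p&gt;This behavior gives an attacker a yes/no oracle: repeatedly save a trust policy on a role you control, naming candidate ARNs from a wordlist, and watch whether the save succeeds. A toy Python model of that logic (it only simulates the oracle; a real test would call the IAM UpdateAssumeRolePolicy API on a role in your own account):&lt;/p&gt;

```python
def update_trust_policy(existing_principals, candidate_arn):
    """Simulate AWS validating a trust policy's principal on save.

    A real enumeration would call IAM UpdateAssumeRolePolicy on a role in
    the attacker's own account; this stand-in only models the yes/no oracle.
    """
    if candidate_arn in existing_principals:
        return "saved"           # principal resolved, so it exists
    return "invalid principal"   # rejected, so it does not exist

# Hypothetical victim account contents (the example role from above).
victim_principals = {"arn:aws:iam::216825089941:role/Test"}

for arn in ["arn:aws:iam::216825089941:role/Test",
            "arn:aws:iam::216825089941:role/Nope"]:
    print(arn, "-", update_trust_policy(victim_principals, arn))
```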

&lt;h3&gt;
  
  
  WHY DOES THIS MATTER?
&lt;/h3&gt;

&lt;p&gt;There are potentially multiple reasons that this is done, but the best example is as follows.&lt;/p&gt;

&lt;p&gt;Let’s say that a role allows the IAM user “Mike” in account “111111111111” to assume it. Mike is then fired from the company and has his AWS user deleted. Then, a week later, a new, different “Mike” is hired to the company and has an IAM user “Mike” created for him. Because of how AWS originally associates “Mike” to that unique ID (“AROAJUFJY2PBF22P35J4A” in our example above), the new “Mike” that just got hired would not be able to assume that original role, even though he has the same user name.&lt;/p&gt;

&lt;p&gt;To allow the new Mike to assume that old role, the trust policy of the old role would need to be updated to allow access to the same ARN as before, but the update allows AWS to re-associate that ARN with the new “Mike” that exists, rather than the old “Mike” that doesn’t exist.&lt;/p&gt;

&lt;p&gt;This is helpful in preventing situations where the old “Mike” was supposed to have more access than the new “Mike”, but because they had the same name, the new “Mike” gained additional privileges by accident. Instead, this problem is solved by associating ARNs with unique IDs for IAM resources.&lt;/p&gt;
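
&lt;p&gt;The name-to-unique-ID binding can be modeled in a few lines of Python (the ID format here is made up for illustration):&lt;/p&gt;

```python
import itertools

_unique_ids = itertools.count(1)
users = {}  # maps an ARN to its current unique ID, like IAM does internally

def create_user(arn):
    """Creating a user binds its ARN to a fresh unique ID (format made up)."""
    users[arn] = f"AIDAEXAMPLE{next(_unique_ids):04d}"
    return users[arn]

# Old Mike is created; a role's trust policy records his unique ID, not his ARN.
trusted_id = create_user("arn:aws:iam::111111111111:user/Mike")

# Old Mike is fired and deleted; a new Mike is created with the identical ARN...
del users["arn:aws:iam::111111111111:user/Mike"]
new_id = create_user("arn:aws:iam::111111111111:user/Mike")

# ...but the recorded unique ID no longer matches, so new Mike is not trusted.
assert trusted_id != new_id
```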

</description>
    </item>
    <item>
      <title>Use AWS to Start Sneakerbotting!</title>
      <dc:creator>itsmenilik</dc:creator>
      <pubDate>Tue, 04 May 2021 00:33:14 +0000</pubDate>
      <link>https://dev.to/itsmenilik/use-aws-to-start-sneakerbotting-3dcl</link>
      <guid>https://dev.to/itsmenilik/use-aws-to-start-sneakerbotting-3dcl</guid>
      <description>&lt;h1&gt;
  
  
  Whatcha talkin bout Willis!
&lt;/h1&gt;

&lt;p&gt;The internet has changed sneaker culture drastically. You used to have to visit Footlocker, Mom &amp;amp; Pop shops, or check Eastbay for the latest releases, or just show up and find something super dope to wear. Now all information about sneakers is one click away. No more waking up early in the morning and waiting in line just for a chance to purchase the most popular sneaker out there.&lt;/p&gt;

&lt;h1&gt;
  
  
  Now What?
&lt;/h1&gt;

&lt;p&gt;Well, now it's a lot more difficult to obtain limited sneakers because of this ease of access. Nowadays people are using sophisticated computer programs to obtain sneakers within seconds during the checkout process. This is much faster than what any human could do manually. These programs are more commonly referred to as bots. Bots have become essential to obtaining limited sneakers online.&lt;/p&gt;

&lt;p&gt;When it comes to sneaker botting, there is a lot to it. There are proxies, tasks, internet speed, server type, and the actual program/application. They all determine how well you can obtain a limited sneaker.&lt;/p&gt;

&lt;p&gt;In this blog post, we are going to specifically talk about servers and their purpose in sneaker botting. &lt;/p&gt;

&lt;p&gt;When it comes to getting a limited sneaker, speed is the name of the game. So to achieve better speeds, you can set up a separate, more powerful machine and connect to it from your own computer. By doing this you get faster internet speeds, better specs (such as RAM or CPU power), the ability to run more tasks, and a better botting experience.&lt;/p&gt;

&lt;p&gt;You might be asking yourself, "Why wouldn't I run a sneaker bot on my home computer"? Well, you run into the possibility of your computer crashing or slowing down because of its limited capabilities.&lt;/p&gt;

&lt;p&gt;On the other hand, if you run your bot on a server from a provider such as AWS, then you are getting their speed and their connection, which are generally better.&lt;/p&gt;

&lt;h1&gt;
  
  
  OKAY ... I'm Listening
&lt;/h1&gt;

&lt;p&gt;Now, this is where I'll show you how to set up an EC2 instance. EC2 provides scalable computing capacity in the AWS Cloud. Think of it as a virtual machine in the cloud.&lt;/p&gt;

&lt;p&gt;First you will want to sign up for a free account:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc82g2jc5krg763zuv7xv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc82g2jc5krg763zuv7xv.png" alt="Alt Text" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you are all signed in and ready to go, you will want to open the AWS Management Console and select EC2:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbbybppsfygvzg4qgcs3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbbybppsfygvzg4qgcs3.png" alt="Alt Text" width="800" height="524"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From there you will want to select Launch Instances:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ijj5s2i3h1t7aunxo1c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ijj5s2i3h1t7aunxo1c.png" alt="Alt Text" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then you will want to select the Amazon Machine Image (AMI). In this example, we will select Microsoft Windows Server 2016 Base since it is eligible for the free tier:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ekocukrf737r3teqf8m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ekocukrf737r3teqf8m.png" alt="Alt Text" width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next we will want to select the instance type. This is where you can increase the compute specs such as CPU and memory:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl475ikca85p8ndvmm5e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhl475ikca85p8ndvmm5e.png" alt="Alt Text" width="800" height="176"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now this is an important step. This is where you will create your key pair. A key pair consists of a public and private key that allows you to connect to your EC2 instance securely. It will get downloaded onto your computer, and you will want to make sure no one else has access to this key pair: &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwggapk5eabvog86lk7rt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwggapk5eabvog86lk7rt.png" alt="Alt Text" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have downloaded your key pair, you can launch your EC2 instance and then connect to it by right-clicking on your instance:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5bu2ogykhbzovpksw71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5bu2ogykhbzovpksw71.png" alt="Alt Text" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;
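
&lt;p&gt;If you'd rather script these steps, the console clicks roughly correspond to a single EC2 RunInstances call. A hedged boto3 sketch, where the AMI ID and key pair name are placeholders you would replace with your own:&lt;/p&gt;

```python
# Parameters mirroring the console walkthrough above. The AMI ID and key pair
# name are hypothetical placeholders -- look up the current Windows Server
# 2016 Base AMI in your region and use the key pair you created.
run_instances_params = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI ID
    "InstanceType": "t2.micro",          # free tier eligible
    "KeyName": "my-sneaker-bot-key",     # placeholder key pair name
    "MinCount": 1,
    "MaxCount": 1,
}

# With AWS credentials configured, the launch itself would look like this
# (commented out so the sketch stays self-contained):
# import boto3
# ec2 = boto3.client("ec2")
# response = ec2.run_instances(**run_instances_params)
```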

&lt;p&gt;Next, you will want to select the RDP client, hit Get Password, upload your key pair file, and then download the remote desktop file to your desktop:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21mibeu9u0jn8sqttx6s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21mibeu9u0jn8sqttx6s.png" alt="Alt Text" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next you will want to open the remote desktop file, connect, and enter the password from your client:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndl0wd0phutaa9zbrrtw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fndl0wd0phutaa9zbrrtw.png" alt="Alt Text" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The last step is to wait for your computer to connect to the EC2 instance and then install the bot application on it. In the picture below, you can see that I have downloaded &lt;a href="https://www.nikeshoebot.com/" rel="noopener noreferrer"&gt;nikeshoebot&lt;/a&gt; on my EC2 instance:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzv6t785fzmsv3uu5crf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyzv6t785fzmsv3uu5crf.png" alt="Alt Text" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy Botting!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>discuss</category>
      <category>cloud</category>
      <category>virtualmachines</category>
    </item>
    <item>
      <title>WHAT THE CRUD IS THIS!</title>
      <dc:creator>itsmenilik</dc:creator>
      <pubDate>Fri, 30 Apr 2021 05:27:48 +0000</pubDate>
      <link>https://dev.to/itsmenilik/what-the-crud-is-this-479d</link>
      <guid>https://dev.to/itsmenilik/what-the-crud-is-this-479d</guid>
      <description>&lt;h1&gt;
  
  
  UH?!?!
&lt;/h1&gt;

&lt;p&gt;Soooooooo. You might be asking yourself. What the CRUD is this? Well, if you know, you know. This is my failed attempt to humor you guys hahaha.&lt;/p&gt;

&lt;p&gt;But really, this post is about a Create, Read, Update, and Delete (CRUD) application. To be more specific, the application contains a frontend web client (Angular) and a backend REST API (Spring Boot) that retrieves information from a relational database. Oh, I also forgot to mention that this application makes use of Docker containers. All of which is pointed to a domain name with the help of AWS Route 53 hosted zones.&lt;/p&gt;
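
&lt;p&gt;Before diving in, here is what the four CRUD operations look like stripped to their essence: an in-memory Python toy, not the post's actual Spring Boot/Angular code.&lt;/p&gt;

```python
class EmployeeStore:
    """Toy in-memory stand-in for a CRUD backend over a relational table."""

    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def create(self, record):           # Create
        row_id = self._next_id
        self._next_id += 1
        self._rows[row_id] = dict(record)
        return row_id

    def read(self, row_id):             # Read
        return self._rows.get(row_id)

    def update(self, row_id, changes):  # Update
        self._rows[row_id].update(changes)

    def delete(self, row_id):           # Delete
        del self._rows[row_id]

store = EmployeeStore()
emp_id = store.create({"name": "Ada", "role": "engineer"})
store.update(emp_id, {"role": "manager"})
```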

&lt;h2&gt;
  
  
  PICTURE IT
&lt;/h2&gt;

&lt;p&gt;This is how you can picture the architecture:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa24t5c2n3gz28c3vpigs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa24t5c2n3gz28c3vpigs.png" alt="Alt Text" width="800" height="727"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you've noticed, the architecture includes AWS Elastic Container Service (ECS). ECS is a fully managed container orchestration service, and it is where my Docker containers are deployed. You can choose to run your containers in clusters using AWS Fargate, which is serverless compute for containers. I took advantage of this feature to reduce cost since this application does not take much computing power.&lt;/p&gt;

&lt;h2&gt;
  
  
  START IT UP VROOM VROOM
&lt;/h2&gt;

&lt;p&gt;I started off by creating a directory with two folders: one for the frontend Angular web app, the other for the backend Spring Boot service. This is a quick look into the Angular code: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcc759iujjrfgzjfz53w6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcc759iujjrfgzjfz53w6.png" alt="Alt Text" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These are conditions that call out functions to Get, Create, Update, and Delete certain information from the database through the REST API.&lt;/p&gt;

&lt;h2&gt;
  
  
  TOOT IT AND BOOT IT
&lt;/h2&gt;

&lt;p&gt;I then created Spring MVC controllers with @Controller and mapped requests with request-mapping annotations, e.g. @RequestMapping, @GetMapping, @PostMapping, @PutMapping, @DeleteMapping.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn164l3u6q5c0tu64q817.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn164l3u6q5c0tu64q817.png" alt="Alt Text" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Spring MVC provides an annotation-based approach where you don’t need to extend any base class to express request mappings, request input parameters, exception handling, and more. The @Controller annotation marks a class as a request handler.&lt;/p&gt;

&lt;p&gt;In the above code, the EmployeeController class acts as the request controller. Its methods handle all incoming requests to a specific URI. These are the same requests made by the Angular frontend. &lt;/p&gt;

&lt;h2&gt;
  
  
  DATA DATA DATA
&lt;/h2&gt;

&lt;p&gt;I decided to use RDS as the database, specifically MySQL, so I'd get practice decoupling. Decoupling an application refers to the process of splitting the application into smaller, independent components. One of the big advantages of decoupling is that it reduces inter-dependencies, so failures in one component do not impact the others.&lt;/p&gt;

&lt;p&gt;After starting the frontend and backend, I was able to create, read, update, and delete records in RDS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25jisoj9aei2n272n1sg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F25jisoj9aei2n272n1sg.png" alt="Alt Text" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  WHATS UP DOCK
&lt;/h2&gt;

&lt;p&gt;After I was able to run the test locally, I had to build these components into containers, which were constructed with Dockerfiles. Before we discuss what a Dockerfile is, it is important to know what a Docker image is. A Docker image is a read-only template with a bunch of instructions; when these instructions are executed, they create a Docker container. A Dockerfile is a simple text file that consists of the instructions used to build a Docker image.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauhcohyvqd9d3ad0tjy7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauhcohyvqd9d3ad0tjy7.png" alt="Alt Text" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once I finished this, it was about time to deploy into ECS. Also, I forgot to mention that we incorporated an nginx reverse proxy. I did this so I could run my API server on a different network or IP than my frontend application. By doing this, I can secure that network and only allow traffic from the reverse proxy server.&lt;/p&gt;

&lt;h2&gt;
  
  
  YOU CAN'T CONTAIN ME!
&lt;/h2&gt;

&lt;p&gt;I won't go into too much detail on how I set up the containers and the Route 53 hosted zone. This is a basic rundown of what is happening:&lt;br&gt;
     - Two clusters were created&lt;br&gt;
     - Each cluster has its own task definition (container)&lt;br&gt;
     - The frontend contains a service. This service was created to attach an application load balancer.&lt;br&gt;
     - This load balancer listens on port 80 with the help of a target group, which is the same port the frontend application uses.&lt;br&gt;
     - This same load balancer is targeted by the Route 53 hosted zone.&lt;br&gt;
     - The hosted zone is associated with a domain name, so you can reach the application from any web browser.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvvcss7t572bn7t07ak0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgvvcss7t572bn7t07ak0.png" alt="Alt Text" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  FINISH EM!
&lt;/h2&gt;

&lt;p&gt;After setting up the architecture, the application looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0lp45pmfun7onrpq5nf.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl0lp45pmfun7onrpq5nf.gif" alt="Alt Text" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If I learned anything from this project, it's that the cloud is where it's at!&lt;/p&gt;

</description>
      <category>angular</category>
      <category>aws</category>
      <category>docker</category>
      <category>javascript</category>
    </item>
    <item>
      <title>Down The Hole Ya Go</title>
      <dc:creator>itsmenilik</dc:creator>
      <pubDate>Fri, 09 Apr 2021 16:45:58 +0000</pubDate>
      <link>https://dev.to/itsmenilik/down-the-whole-ya-go-1fm6</link>
      <guid>https://dev.to/itsmenilik/down-the-whole-ya-go-1fm6</guid>
      <description>&lt;h1&gt;
  
  
  INTRO
&lt;/h1&gt;

&lt;p&gt;YouTube ad revenue is a big portion of its profit stream. Soooooo we are going to stop that! Well, not really. However, we can block all of their ads from reaching us. A while back I set up a Pi-hole, and I thought I'd explain my journey through the setup. Pi-hole is a DNS filtering tool that can be configured to block advertisements, trackers, malicious websites, and malware. &lt;/p&gt;

&lt;p&gt;We are going to do this with the help of a Raspberry Pi.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9tvt1foxipi64hiqmy52.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9tvt1foxipi64hiqmy52.png" alt="Alt Text" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  WHY THOUGH!
&lt;/h1&gt;

&lt;p&gt;Traditional ad blockers are set up in a web browser. They typically analyze the data coming from the web page you visit and either replace or remove advertisements. This works great for that one browser. I ran into a problem where not everyone in my house has this installed, or even knows what an ad-block extension is. Pi-hole is a great solution, as it can block ads on all of the devices in my house (iPhones, Androids, and computers). &lt;/p&gt;

&lt;h1&gt;
  
  
  Basic Concept
&lt;/h1&gt;

&lt;p&gt;Let's say you launch a mobile game on your phone. As it's launching, all the assets are loading (textures, dialogue, music, etc.). At the same time, a request is made to a DNS server for the URL that manages the ads for that application. Your router looks up that URL and replies to your phone with the IP address that serves the ad. Instead of bundling ads into the application, this method allows advertisers to dynamically serve individually tailored ads.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jo99w3m7xrp8xr2zy76.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jo99w3m7xrp8xr2zy76.PNG" alt="Alt Text" width="385" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pi-hole blocks this process by standing between the DNS server and your device. Pi-hole maintains a blacklist of sites to block. If your device attempts to retrieve information from a blacklisted site, Pi-hole replies to your device with an unspecified address, so the ad never shows up on your device. &lt;/p&gt;
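&lt;p&gt;The core idea can be sketched in a few lines of code (the domain names and addresses below are made up for illustration; the real Pi-hole is a full DNS server, not a lookup table):&lt;/p&gt;

```python
# Toy sketch of Pi-hole's core trick: a DNS lookup that answers
# blacklisted domains with an unspecified address so the ad never loads.
# All domain names and addresses here are hypothetical.

BLACKLIST = {"ads.example-tracker.com", "telemetry.example.net"}

def resolve(domain, upstream):
    """Return an unroutable address for blacklisted domains;
    otherwise forward the query to the upstream resolver."""
    if domain in BLACKLIST:
        return "0.0.0.0"  # the "unspecified" reply: the request goes nowhere
    return upstream.get(domain, "NXDOMAIN")

upstream_dns = {"example.com": "93.184.216.34"}  # stand-in upstream table
print(resolve("ads.example-tracker.com", upstream_dns))  # blocked
print(resolve("example.com", upstream_dns))              # resolved normally
```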

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplja8hyc0k3hp7wyuzfc.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fplja8hyc0k3hp7wyuzfc.PNG" alt="Alt Text" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Setups
&lt;/h1&gt;

&lt;p&gt;This involved the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install Raspbian on a microSD card&lt;/li&gt;
&lt;li&gt;Change the default passwords&lt;/li&gt;
&lt;li&gt;Configure a static IP for my Raspberry Pi&lt;/li&gt;
&lt;li&gt;SSH into the Pi (I didn't want to connect a monitor to it)&lt;/li&gt;
&lt;li&gt;Run the one-line install command in my Raspberry Pi's terminal&lt;/li&gt;
&lt;li&gt;Get to the networking selection and set up the default blacklists&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;What's cool about these steps is that I was exposed to my terminal more than usual!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnsayrivgr2x9882zvoj.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwnsayrivgr2x9882zvoj.PNG" alt="Alt Text" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All that's left is to connect to the IP address and we are in!&lt;/p&gt;

&lt;p&gt;Finished Product&lt;br&gt;
At the end of it, you get a dashboard where you can manage your blacklist, whitelist, devices, and various statistics:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3ugwm3u170o2nio6hw5.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3ugwm3u170o2nio6hw5.PNG" alt="Alt Text" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Overall, this is a dandy project. If you are interested in getting your hands dirty and want to set up your own Pi-hole, here is the link: &lt;a href="https://pi-hole.net/" rel="noopener noreferrer"&gt;Pi-hole&lt;/a&gt;&lt;/p&gt;

</description>
      <category>raspberrypi</category>
      <category>linux</category>
      <category>bash</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Challenge Accepted!!!!</title>
      <dc:creator>itsmenilik</dc:creator>
      <pubDate>Thu, 08 Apr 2021 13:19:03 +0000</pubDate>
      <link>https://dev.to/itsmenilik/challenge-accepted-5c6a</link>
      <guid>https://dev.to/itsmenilik/challenge-accepted-5c6a</guid>
      <description>&lt;h1&gt;
  
  
  HOW IT ALL STARTED!
&lt;/h1&gt;

&lt;p&gt;As I was looking through certain job posts and building my network on LinkedIn, I went on DuckDuckGo to search for projects that involved Amazon Web Services (AWS). Towards the end of 2020 and the beginning of 2021, I had already received my AWS Cloud Practitioner and Solutions Architect Associate certifications. Shout out to my friend Logan for inspiring me! &lt;/p&gt;

&lt;p&gt;There was a post on Reddit that mentioned and linked to the &lt;a href="https://cloudresumechallenge.dev/" rel="noopener noreferrer"&gt;Cloud Resume Challenge&lt;/a&gt;. This single webpage could not have made me any happier. I found what I was looking for! I read all the instructions and thought to myself&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12jujhd0xsytfakuxzu8.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F12jujhd0xsytfakuxzu8.gif" alt="Alt Text" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  INITIAL THOUGHTS
&lt;/h2&gt;

&lt;p&gt;The whole idea behind the challenge is to set up a website that displays your resume.&lt;/p&gt;

&lt;p&gt;Here is a brief overview of the instructions:&lt;br&gt;
    1. Get the AWS Cloud Practitioner cert&lt;br&gt;
    2. Set up the HTML web document&lt;br&gt;
    3. Set up the CSS web document&lt;br&gt;
    4. Host the static website on S3&lt;br&gt;
    5. Enable HTTPS security through CloudFront&lt;br&gt;
    6. Configure a domain name with Route 53&lt;br&gt;
    7. Include JavaScript to retrieve the number of website visits&lt;br&gt;
    8. Store/update that number in DynamoDB&lt;br&gt;
    9. Create an API that accepts requests from your webpage to retrieve that number from DynamoDB&lt;br&gt;
    10. Use Python to manipulate the number of website visits through Lambda&lt;br&gt;
    11. Write tests to ensure code functionality&lt;br&gt;
    12. Create your infrastructure through code&lt;br&gt;
    13. Set up a CI/CD pipeline so you can automatically update your code through a repository&lt;br&gt;
    14. Create a blog post about the project!&lt;/p&gt;

&lt;p&gt;The steps above are not exact, but it's how I interpreted them after scanning them for the first time.&lt;/p&gt;
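&lt;p&gt;Steps 7 through 10 together form a little visitor counter. Here is a toy sketch of the Lambda side of it (a plain dict stands in for the DynamoDB table; a real handler would use boto3 and an update expression):&lt;/p&gt;

```python
import json

# Hypothetical sketch of the visitor-counter Lambda.
# `table` is an in-memory stand-in for the DynamoDB table; a real
# handler would call table.update_item with an ADD expression.
table = {"visits": 0}

def handler(event, context):
    table["visits"] += 1  # bump the stored count (steps 8 and 10)
    return {              # what API Gateway hands back to the page (step 9)
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"count": table["visits"]}),
    }

print(handler({}, None)["body"])  # → {"count": 1}
```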

&lt;p&gt;My initial reactions were great! I had already received the certification, so I felt like I'd built up some great momentum. During my studies, I had already learned the concepts behind S3, DynamoDB, Route53, CloudFront, Lambda, API Gateway, etc. I WAS RARING TO GO! &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5rep0sw2pk2323qcg10.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs5rep0sw2pk2323qcg10.gif" alt="Alt Text" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1 DOC, 2 DOC, 3 DOC, 4 👍
&lt;/h2&gt;

&lt;p&gt;I have some background in web development, so creating the HTML/CSS/JavaScript documents wasn’t all too hard. For me, I always want to make sure that the content I envision is what I execute. There was a bunch of code that I needed to adjust as I was reaching for my sense of PERFECTION! Plus, I didn’t really use a template to begin with.&lt;/p&gt;

&lt;p&gt;There were plenty of times when I needed to take a break, but I was so eager to finish this challenge. Nothing could stop me! I’m the kind of person who sees their goal through to the end. That’s what’s fun!&lt;/p&gt;

&lt;h2&gt;
  
  
  HTTPS/DOMAIN TIME
&lt;/h2&gt;

&lt;p&gt;There wasn’t much to setting up my website. All I needed to do was obtain a domain name, make sure my S3 bucket name matched my website name, provision an SSL certificate for HTTPS requests, and then create a CloudFront distribution to connect everything.&lt;/p&gt;

&lt;p&gt;Here is an idea of what the architecture looks like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxj0ga79azcb17a6t9j02.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxj0ga79azcb17a6t9j02.png" alt="Alt Text" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had finished my static web documents, loaded them to the S3 bucket, made it public, and had my website up and running!&lt;/p&gt;

&lt;h2&gt;
  
  
  ONE TABLE FOR MR. LAMBDA &amp;amp; MRS. API
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfj888s6cfur3cf1umb0.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxfj888s6cfur3cf1umb0.gif" alt="Alt Text" width="480" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had great energy after completing almost half of the instructions. Then the rest of the steps hit me pretty hard. Creating the DynamoDB table was easy. Getting my API and Lambda function to work was where I struggled a bit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3o8wthz3izr3e38jku4.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi3o8wthz3izr3e38jku4.gif" alt="Alt Text" width="500" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Working with Lambda and Python was a humbling experience. I kept hitting an issue where my return statement couldn’t serialize my counter to JSON: the counter always came back in a decimal format. I also couldn’t get my table to update. I wasn’t all too familiar with Python, so I had to take a quick crash course on the language. This typically involved me watching a lot of YouTube videos on my breaks. Luckily, I was able to join the Cloud Resume Discord and get some insight into some of the mistakes I made.&lt;/p&gt;
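&lt;p&gt;That decimal issue is a common one: DynamoDB returns numbers as Python Decimal objects, which json.dumps rejects by default. A minimal reproduction and one possible fix (the item below mimics a fetched record, not my actual data):&lt;/p&gt;

```python
import json
from decimal import Decimal

# DynamoDB returns numbers as decimal.Decimal, which json.dumps rejects.
# `item` mimics a fetched record; the fix is to convert while serializing.
item = {"visits": Decimal("42")}

try:
    json.dumps(item)
except TypeError as err:
    print("raw Decimal fails:", err)

body = json.dumps(item, default=int)  # convert any Decimal to int on the fly
print(body)  # → {"visits": 42}
```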

&lt;p&gt;I eventually got my API, table, and Lambda function to work with one another. It was funny: earlier, I did some research to find out what stumped people during this portion of the challenge. A good number of them said setting up Cross-Origin Resource Sharing (CORS) was an issue. This whole time, I believed my issue was CORS. Come to find out, my JavaScript was what caused me a huge headache. My CORS setup was perfectly fine! You live and learn, I guess.&lt;/p&gt;

&lt;h2&gt;
  
  
  AUTOMATIC SUPERSONIC
&lt;/h2&gt;

&lt;p&gt;All that was left was to create my infrastructure as code. The AWS whitepapers were a great help. The instructions mentioned AWS SAM. This was a pretty cool portion of the challenge. I wasn’t too familiar with SAM, but it turned out to be a very handy tool. The whole concept behind SAM is to define your infrastructure through code (YAML) instead of manually provisioning your AWS resources.&lt;/p&gt;
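&lt;p&gt;As a taste of what that looks like, here is a minimal SAM template sketch for a function plus a table (the resource names, runtime, and handler path are illustrative, not my actual template):&lt;/p&gt;

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  VisitorCounterFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler       # hypothetical module and function name
      Runtime: python3.9
      Events:
        CountApi:
          Type: Api
          Properties:
            Path: /count
            Method: get
  VisitorTable:
    Type: AWS::Serverless::SimpleTable
```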

&lt;p&gt;I noticed that I already had pieces of the puzzle, but not all the pieces. When I created my resources (API, table, Lambda), I created them through the console. I learned real quick that this wasn’t ideal. There were multiple times where I tried to import my existing resources into CloudFormation and somehow twist that into a SAM template. I eventually learned that this wasn’t the best decision. After reading the whitepapers and several templates, I scrapped my old resources and was able to transform my infrastructure into code.&lt;/p&gt;

&lt;p&gt;All that was left was to create my CI/CD pipelines: one for my frontend code and another for my backend. I did this through GitHub Actions, which by the way is awesome! I was able to get templates through the GitHub Actions marketplace, fine-tune them, and automatically deploy my code from my repositories.&lt;/p&gt;
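&lt;p&gt;A frontend pipeline of this kind might look roughly like the workflow below (the bucket name, site directory, and region are placeholders, and the secrets must exist in the repository settings):&lt;/p&gt;

```yaml
name: deploy-frontend
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      # sync the static site to the hosting bucket on every push to main
      - run: aws s3 sync ./site s3://my-resume-bucket --delete
```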

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmprff847mhdobde3qerv.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmprff847mhdobde3qerv.gif" alt="Alt Text" width="320" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  FINAL THOUGHTS
&lt;/h2&gt;

&lt;p&gt;I genuinely felt great every step of the way. There were plenty of hurdles I had to overcome, some harder than others. Believe me when I say this: it was absolutely worth the struggle.&lt;/p&gt;

&lt;p&gt;If I had to sum up a few lessons I’ve learned, it would be the following:&lt;br&gt;
    1. Do your due diligence. Study the ropes.&lt;br&gt;
    2. You don’t know everything out there. It’s okay to ask for help sometimes.&lt;br&gt;
    3. Learn to backtrack. Find out what you could have done better!&lt;br&gt;
    4. Enjoy the process! It’s fun.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>python</category>
      <category>javascript</category>
    </item>
  </channel>
</rss>
