<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Naomi Ansah</title>
    <description>The latest articles on DEV Community by Naomi Ansah (@naomi_ansah_d792faf7a1276).</description>
    <link>https://dev.to/naomi_ansah_d792faf7a1276</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3647832%2F9a53ad03-fb22-459e-895e-15629c9e3b3d.jpeg</url>
      <title>DEV Community: Naomi Ansah</title>
      <link>https://dev.to/naomi_ansah_d792faf7a1276</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/naomi_ansah_d792faf7a1276"/>
    <language>en</language>
    <item>
      <title>Don’t Lock Yourself Out of AWS: MFA Backup and IAM Best Practices.</title>
      <dc:creator>Naomi Ansah</dc:creator>
      <pubDate>Sun, 15 Mar 2026 11:48:23 +0000</pubDate>
      <link>https://dev.to/naomi_ansah_d792faf7a1276/dont-lock-yourself-out-of-aws-mfa-backup-and-iam-best-practices-248k</link>
      <guid>https://dev.to/naomi_ansah_d792faf7a1276/dont-lock-yourself-out-of-aws-mfa-backup-and-iam-best-practices-248k</guid>
      <description>&lt;p&gt;A few days ago, I noticed something strange with my phone. The screen had started behaving unpredictably. Sometimes it responded perfectly, and sometimes it ignored my touch completely.At first it just felt like a minor inconvenience. But then a thought crossed my mind that made me pause.My authenticator app was on that phone and that authenticator app was the only way I could generate the login codes for my Amazon Web Services account.&lt;/p&gt;

&lt;p&gt;Suddenly the situation felt much more serious. If the phone stopped working completely, I could lose access to my AWS account. That realization pushed me to take a closer look at how my account was secured. What started as a small precaution turned into a surprisingly valuable learning experience about MFA backups, IAM users, and AWS security best practices.&lt;/p&gt;

&lt;p&gt;When I first created my AWS account, I had already enabled Multi-Factor Authentication on the root user. At the time, I thought that was enough. AWS strongly recommends MFA, and I had followed that advice. But the phone issue made me realize something I hadn’t considered before. Security isn’t just about enabling MFA!&lt;/p&gt;

&lt;p&gt;It’s also about making sure you can still access your account if something happens to the device generating the authentication codes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3ylgzsnfa8gxw7jnipl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe3ylgzsnfa8gxw7jnipl.png" alt="&amp;lt;br&amp;gt;
 “Root account security credentials showing MFA enabled.”" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Phones get lost. Screens break. Devices fail. If my phone stopped working entirely, the authentication codes stored in that authenticator app would disappear with it. Recovering access to an AWS account in that situation can be difficult and stressful. That was the moment I decided I needed a backup MFA device. Fortunately, AWS allows multiple MFA devices to be registered for an account. I installed an authenticator extension on my laptop and connected it as a second MFA device. Now my authentication setup includes two devices generating login codes: my phone and my laptop.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjkhfj8tmdw5kl36zxaps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjkhfj8tmdw5kl36zxaps.png" alt="“Two AWS authentication tokens generated by separate devices.”" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
While going through this process, I also revisited another important AWS best practice: avoiding daily use of the root account. When you create an AWS account, the first identity you receive is the root user. The root account has unrestricted access to every service and resource in the account. Because of that level of power, AWS recommends using the root account only for critical account management tasks. Daily work should be done using users created through AWS Identity and Access Management. Following that recommendation, I created an IAM administrator user with full administrative permissions and enabled MFA for that user as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vnaqofzcg8t52dmwko2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vnaqofzcg8t52dmwko2.png" alt=" “IAM administrator user configured for daily AWS operations.”" width="800" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course, the process wasn’t completely smooth. At one point, AWS told me that my MFA token was out of sync. After a bit of investigation, I discovered the problem was simply that my laptop clock was slightly out of alignment with internet time servers. Once the system clock synchronized correctly, the authentication codes started working again.&lt;/p&gt;

&lt;p&gt;Another moment of confusion happened when I accidentally used the wrong MFA code during login. Because I now had two AWS tokens in my authenticator, it was easy to mix them up. One token was for the root account and the other for the IAM administrator user. Using the wrong one resulted in login failures that initially made me think something was broken. After completing everything, my account now follows a much safer structure.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wmd0cn313qh0vxzc1sh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wmd0cn313qh0vxzc1sh.png" alt="“Securing an AWS account by protecting the root user and performing daily work with an IAM administrator account.”" width="800" height="470"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Looking back, the entire process started with something as simple as a phone screen glitch. But that small problem forced me to rethink my AWS security setup and implement practices that I probably should have put in place much earlier.&lt;/p&gt;

&lt;p&gt;For anyone learning AWS, security fundamentals can sometimes feel less exciting than launching EC2 instances or deploying applications. But they are just as important. Enabling MFA, creating backup authentication devices, and using IAM users instead of the root account are small steps that can prevent major headaches later. In my case, a malfunctioning phone screen turned into a reminder that good cloud security starts with the basics.&lt;/p&gt;


</description>
      <category>aws</category>
      <category>beginners</category>
      <category>security</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How I Secured My Static Website at the Edge Using Amazon CloudFront</title>
      <dc:creator>Naomi Ansah</dc:creator>
      <pubDate>Sat, 03 Jan 2026 18:18:40 +0000</pubDate>
      <link>https://dev.to/naomi_ansah_d792faf7a1276/how-i-secured-my-static-website-at-the-edge-using-amazon-cloudfront-12ho</link>
      <guid>https://dev.to/naomi_ansah_d792faf7a1276/how-i-secured-my-static-website-at-the-edge-using-amazon-cloudfront-12ho</guid>
      <description>&lt;p&gt;When I first deployed my portfolio as a static website, my focus was simple: getting it online. Like many beginners, I assumed security was something to think about later, perhaps at the server level or once traffic started to grow. As I learned more about how web traffic actually flows, I realized something important: the safest, cheapest, and smartest place to stop bad traffic is at the edge, before it ever reaches the origin.&lt;br&gt;
In this project, I secured my static portfolio entirely at the edge using Amazon CloudFront, long before any request touched my Amazon S3 bucket.&lt;br&gt;
At the time, I already had a working static website hosted in an S3 bucket. The site consisted only of static assets: HTML, CSS, JavaScript, and images. To improve performance and global availability, I placed the site behind a CloudFront distribution and configured the S3 bucket as the origin. While this setup distributed my content globally, it did not yet provide any protection at the edge. My objective was to ensure that traffic was encrypted, unnecessary or malicious requests were blocked early, security headers were applied automatically, and the S3 origin remained protected from direct or excessive access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4d1vglugddu4g8lsdsfv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4d1vglugddu4g8lsdsfv.png" alt="CloudFront serving cached content from edge locations, with Amazon S3 as the origin." width="800" height="552"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In CloudFront, edge security happens before caching and before requests are forwarded to the origin. This is achieved using CloudFront Functions, which run lightweight JavaScript code at CloudFront edge locations. I chose to run my function at the Viewer Request event, which is the earliest point in the request lifecycle. At this stage, CloudFront can inspect incoming requests immediately and decide whether they should be allowed or blocked.&lt;br&gt;
Blocking requests at this point is especially important because it prevents unnecessary traffic from reaching the S3 bucket, improves overall performance, lowers cost by reducing origin requests, and reduces the attack surface. For beginners, CloudFront Functions are easy to explore because the AWS Console includes a built-in editor with sample code, and AWS documentation clearly explains how to create and test functions safely.&lt;br&gt;
The first protection I added was enforcing HTTPS. Within the CloudFront distribution settings, I edited the default cache behavior and set the viewer protocol policy to redirect HTTP requests to HTTPS. With this configuration in place, any user attempting to access the site over an unencrypted connection is automatically redirected to HTTPS. This enforcement happens entirely at the edge and requires no changes to the website code itself.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqy3zit5tjsodh3zek9jm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqy3zit5tjsodh3zek9jm.png" alt="Enforcing HTTPS at the edge using CloudFront Viewer Protocol Policy (Redirect HTTP to HTTPS" width="521" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that, I created a CloudFront Function to block suspicious requests. Using the AWS Console, I navigated to CloudFront Functions and created a new function. The purpose of the function was straightforward: block requests targeting common WordPress administrative paths such as /wp-login.php and /wp-admin. Since my site is not a WordPress site, any request for these paths is unnecessary and potentially malicious.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstaxs6i5sagho5ztsu39.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fstaxs6i5sagho5ztsu39.png" alt="CloudFront Function code used to block suspicious request paths at the viewer request stage." width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8x466smq0sbauzpfg7u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq8x466smq0sbauzpfg7u.png" alt="Deployed CloudFront Function active at the edge." width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The function inspects the request URI and immediately returns a 403 Forbidden response when a match is found. If the request does not match a blocked path, it is allowed to continue normally. Once the function code was saved, I published it, which is required for the function to run at CloudFront edge locations. I then associated the function with the Viewer Request event by editing the default behavior of the CloudFront distribution. After saving the changes, I waited a few minutes for the distribution to deploy globally.&lt;/p&gt;
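Based on that description, the function body looks roughly like this sketch for the CloudFront Functions JavaScript runtime (the path list and exact response shape are assumptions, not the published code):

```javascript
// Runs at the Viewer Request event, before the cache and before the origin.
function handler(event) {
  var request = event.request;
  // Paths named in the article; extend this list as needed.
  var blockedPaths = ['/wp-login.php', '/wp-admin'];

  var isBlocked = blockedPaths.some(function (path) {
    // Match the exact path or anything beneath it.
    return request.uri === path ? true : request.uri.indexOf(path + '/') === 0;
  });

  if (isBlocked) {
    // Stop the request at the edge; it never reaches the S3 origin.
    return { statusCode: 403, statusDescription: 'Forbidden' };
  }
  // Anything else continues through the normal cache and origin flow.
  return request;
}
```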

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l8zthy215eailkr11u1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4l8zthy215eailkr11u1.png" alt="Associating a CloudFront Function with the Viewer Request event to block bad bots at the edge" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To confirm that the protection worked as expected, I tested both normal and suspicious requests. The website loaded normally during regular access. However, when I manually navigated to a blocked path such as /wp-login.php, CloudFront immediately returned a 403 response, confirming that the request was stopped at the edge.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fprluujs6ruqbmaup109t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fprluujs6ruqbmaup109t.png" alt="CloudFront Function actively blocking a malicious request (/wp-login.php) at the edge" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To further strengthen the security of the site, I attached a Response Headers Policy to the CloudFront behavior. This allowed CloudFront to automatically add important security headers to every response sent to the browser. These headers enforce HTTPS usage, help prevent clickjacking, and stop browsers from interpreting content as a different MIME type than intended. Because the headers are injected at the edge, no changes to the site files were required.&lt;/p&gt;
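For reference, a response headers policy of this kind typically injects headers like the following (the exact values are an assumption; the set depends on the chosen policy):

```http
Strict-Transport-Security: max-age=31536000
X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
Referrer-Policy: strict-origin-when-cross-origin
```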

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkb04oj7tsgqcz1zej7d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkb04oj7tsgqcz1zej7d.png" alt="Verifying HTTPS delivery and security headers on the live site using browser developer tools" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With this setup in place, security decisions are made as early as possible in the request lifecycle. By enforcing HTTPS, blocking suspicious paths, and adding security headers at the edge, the S3 origin remains protected while the website benefits from improved performance and reduced risk. Most importantly, this approach does not rely on servers, backend code, or complex infrastructure. Everything is handled by CloudFront at the edge.&lt;br&gt;
If you already have a static website hosted on S3 and distributed through CloudFront, adding edge protection is a practical and achievable next step. CloudFront Functions provide a lightweight and effective way to inspect and block traffic early, without introducing operational complexity. By securing your website at the edge, you protect your origin, improve performance, and build a more resilient web presence.&lt;/p&gt;


</description>
      <category>architecture</category>
      <category>aws</category>
      <category>security</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Lost Your EC2 SSH Key? Here’s Every Way I Recovered Access</title>
      <dc:creator>Naomi Ansah</dc:creator>
      <pubDate>Sat, 27 Dec 2025 11:50:46 +0000</pubDate>
      <link>https://dev.to/naomi_ansah_d792faf7a1276/lost-your-ec2-ssh-key-heres-every-way-i-recovered-access-1i77</link>
      <guid>https://dev.to/naomi_ansah_d792faf7a1276/lost-your-ec2-ssh-key-heres-every-way-i-recovered-access-1i77</guid>
      <description>&lt;p&gt;The first time you lose an SSH key for an EC2 instance, it feels final. The server is running, your application is still alive, but the door is locked and the key is gone. I learned very quickly that AWS does not keep private keys for you, and once a .pem file is lost, it is lost forever.&lt;br&gt;
Instead of panicking, I decided to turn this moment into learning. I deliberately walked through every realistic recovery method available in Amazon Web Services, starting from zero, creating instances from scratch, breaking access on purpose, and then recovering it again. What I discovered is that “losing an EC2 key” is not a dead end. It’s a branching path with multiple recovery strategies, each suited for a different situation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding the Core Truth About EC2 Keys&lt;/strong&gt;&lt;br&gt;
Before touching any recovery method, there is one truth that must be clear. AWS never stores your private SSH key. The .pem file lives only on your local machine. The EC2 instance never sees it. What the instance stores instead is the public version of that key inside a file called authorized_keys.&lt;br&gt;
This means recovery is never about getting your old .pem file back. Recovery is always about getting temporary access and then adding a new public key.&lt;br&gt;
Once that clicked, everything else made sense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Temporary Access with EC2 Instance Connect&lt;/strong&gt;&lt;br&gt;
The first recovery path I explored was EC2 Instance Connect. This method works only when the instance is running, the AMI supports it, the instance has network access, and port 22 is open. When all those conditions are met, AWS can temporarily inject a one-time public key into the instance and open a browser-based SSH session.&lt;br&gt;
What surprised me most is that this access is not time-limited in the way people assume. The injected key itself lives for about a minute, but once the SSH session starts, it behaves like any normal SSH connection. I stayed logged in for as long as I wanted. When I disconnected, the temporary key disappeared.&lt;br&gt;
This method doesn’t recover your old key, but it gives you a crucial foothold. From inside the instance, you can add a brand-new public key and restore permanent access. It’s fast, clean, and perfect for emergencies but it depends heavily on networking and AMI support.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01woktq75kxvtuakfi9g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F01woktq75kxvtuakfi9g.png" alt="Architecture diagram showing EC2 Instance Connect providing temporary SSH access to Linux and Windows EC2 instances in private subnets within a VPC&amp;lt;br&amp;gt;
" width="800" height="679"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70qfu0g0edge9xth3rx0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70qfu0g0edge9xth3rx0.png" alt="Browser-based EC2 Instance Connect terminal session connected to an Amazon Linux EC2 instance&amp;lt;br&amp;gt;
" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The “Surgery” Method: Detaching the Root Volume&lt;/strong&gt;&lt;br&gt;
Next, I practiced the most powerful and most intimidating method: detaching the root EBS volume. This approach works even when SSH is completely broken. The instance can be stopped, misconfigured, or unreachable, and recovery is still possible.&lt;br&gt;
The process feels like surgery. You stop the broken instance, detach its root volume, and attach that volume to a second helper instance in the same availability zone. From there, you mount the disk, navigate into the filesystem, and manually edit the authorized_keys file.&lt;br&gt;
While doing this on Amazon Linux 2023, I ran into a real-world issue that many tutorials skip. The filesystem is XFS, and because both disks were created from the same AMI, they shared the same UUID. XFS refuses to mount duplicate UUIDs unless you explicitly tell it to. Using the -o nouuid option was the key that made the mount succeed.&lt;br&gt;
After adding a new public key, fixing permissions, and reattaching the volume to the original instance, I started it again and logged in successfully. This method taught me more about Linux, filesystems, and AWS storage than any lab ever could. It’s not fast, but it works even when everything else fails.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzsrbmt0fmda87t1ofk8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdzsrbmt0fmda87t1ofk8.png" alt="Diagram illustrating EC2 key recovery by detaching a root EBS volume, attaching it to a helper instance, and modifying the authorized_keys file&amp;lt;br&amp;gt;
" width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe49vaj1jnscxo0seh4iw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe49vaj1jnscxo0seh4iw.png" alt="Linux terminal output showing attached and mounted EBS volumes used during EC2 root volume recovery&amp;lt;br&amp;gt;
" width="800" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recovering Access Without SSH Using Systems Manager&lt;/strong&gt;&lt;br&gt;
The cleanest recovery experience came from AWS Systems Manager Session Manager. Instead of relying on SSH at all, this method uses IAM and an encrypted control channel managed by AWS. There is no port 22, no .pem file, and no public SSH exposure.&lt;br&gt;
I launched an instance using Amazon Linux 2023 and attached an IAM role with the AmazonSSMManagedInstanceCore policy. After a short delay, the instance appeared as “managed” in Systems Manager Fleet Manager. From there, I opened a browser-based shell using Session Manager and gained access immediately.&lt;br&gt;
Inside the session, I verified that I was logged in as ssm-user, then elevated privileges and manually added an SSH public key. This meant I could later SSH normally if I wanted, but I no longer had to depend on SSH for access at all.&lt;br&gt;
This method feels like how EC2 is meant to be managed in production environments. It’s secure, auditable, and resilient to lost keys. If I had to choose one approach to standardize on, this would be it.&lt;/p&gt;
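For reference, that role setup has two parts: the AmazonSSMManagedInstanceCore managed policy supplies the permissions, and a standard trust policy lets EC2 assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```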

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb147rs45nazbk6qbgf6m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb147rs45nazbk6qbgf6m.png" alt="AWS Systems Manager Fleet Manager console displaying a managed EC2 instance available for Session Manager access&amp;lt;br&amp;gt;
" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F316nebvvouyhedib1chc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F316nebvvouyhedib1chc.png" alt="Diagram comparing direct SSH access with AWS Systems Manager Session Manager connectivity that does not require inbound ports&amp;lt;br&amp;gt;
" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Starting Fresh with an AMI&lt;/strong&gt;&lt;br&gt;
Finally, I explored the cleanest reset option: creating an AMI and launching a new instance. This method assumes you don’t care about preserving the original instance identity. You simply create an image of the server, launch a new instance from that image, and select a new key pair during launch.&lt;br&gt;
What I appreciated here is the simplicity. There is no filesystem mounting, no Linux surgery, and no emergency access needed. The tradeoff is that the instance ID and public IP change, but the operating system, software, and data remain intact.&lt;br&gt;
I also learned an important cleanup lesson. Deregistering an AMI does not delete the EBS snapshot behind it; the snapshot must be removed separately. Forgetting that step leaves behind storage charges that quietly accumulate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjgorfjodccpaoqnlhz3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjgorfjodccpaoqnlhz3.png" alt="Diagram showing EC2 AMI creation using EC2 Image Builder and CodePipeline, including encrypted snapshot sharing across AWS accounts&amp;lt;br&amp;gt;
" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2krdor8at7r00u45k3v0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2krdor8at7r00u45k3v0.png" alt="Diagram showing EC2 AMI creation using EC2 Image Builder and CodePipeline, including encrypted snapshot sharing across AWS accounts&amp;lt;br&amp;gt;
" width="620" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What This Taught Me About Cloud Engineering&lt;/strong&gt;&lt;br&gt;
Losing an EC2 SSH key stopped feeling scary once I understood that access and identity are separate concepts in AWS. The private key is just one authentication mechanism, not the server itself. Every recovery method I practiced reinforced that cloud infrastructure is designed to be recoverable, provided you understand the tools.&lt;br&gt;
More importantly, this journey shifted how I think about “best practice.” In learning environments, EC2 Instance Connect and AMI recovery are convenient. In real systems, Systems Manager Session Manager is the safest long-term strategy. And when everything is broken, volume attachment remains the ultimate escape hatch.&lt;br&gt;
If you’re learning AWS and haven’t practiced these scenarios yet, I strongly recommend doing so before you need them in real life. The first time you recover a server you thought was lost, something clicks and cloud engineering starts to feel real.&lt;/p&gt;


</description>
      <category>security</category>
      <category>aws</category>
      <category>devops</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Going Beyond Static Hosting: Deploying a Node.js Contact Form API on AWS EC2</title>
      <dc:creator>Naomi Ansah</dc:creator>
      <pubDate>Sat, 20 Dec 2025 12:48:20 +0000</pubDate>
      <link>https://dev.to/naomi_ansah_d792faf7a1276/going-beyond-static-hosting-deploying-a-nodejs-contact-form-api-on-aws-ec2-2ofd</link>
      <guid>https://dev.to/naomi_ansah_d792faf7a1276/going-beyond-static-hosting-deploying-a-nodejs-contact-form-api-on-aws-ec2-2ofd</guid>
      <description>&lt;p&gt;When people talk about hosting portfolios, the usual recommendation is static hosting using platforms like Amazon S3, Vercel, Netlify, or CloudFront. And honestly, that’s often the right architectural choice.&lt;br&gt;
But as part of my cloud learning journey, I wanted to understand what those managed services actually abstract away. Instead of stopping at static hosting, I decided to build and deploy a backend service on AWS EC2 to power the contact form on my portfolio.&lt;br&gt;
This article walks through what I built, why I chose EC2, and what I learned along the way.&lt;br&gt;
&lt;strong&gt;Why EC2 for a Contact Form?&lt;/strong&gt;&lt;br&gt;
To be clear, EC2 is not the “best” tool for hosting a static portfolio. Managed services exist for a reason, and they usually provide better scalability, security, and simplicity for this kind of use case.&lt;br&gt;
However, EC2 is one of the best tools for learning. I wanted to understand how applications run on real servers, how backend services are deployed, how traffic flows through a system, and how uptime and reliability are handled in practice.&lt;br&gt;
Specifically, I wanted hands-on experience working with Linux servers, managing long-running processes, configuring reverse proxies, and handling real HTTP requests. That learning goal is what made EC2 the right choice for this project.&lt;br&gt;
&lt;strong&gt;What I Built&lt;/strong&gt;&lt;br&gt;
For this project, I built a small Node.js and Express API with two simple endpoints. One endpoint handles health checks using a GET /api/health route, while the second endpoint, POST /api/contact, receives contact form submissions from my portfolio.&lt;br&gt;
Instead of relying on third-party form services, my frontend sends form data directly to this backend API, giving me full control over the request flow and server behavior.&lt;br&gt;
&lt;strong&gt;High-Level Architecture&lt;/strong&gt;&lt;br&gt;
At a high level, the architecture looks like this: a user interacts with my portfolio in the browser, the request travels over the internet to an AWS EC2 instance, Nginx receives the incoming traffic, and then forwards API requests to the Node.js application running on port 3000 and managed by PM2.&lt;br&gt;
This setup mirrors how many real-world production systems are structured, even at much larger scales.&lt;br&gt;
&lt;strong&gt;Step-by-Step Overview&lt;/strong&gt;&lt;br&gt;
I started by configuring the EC2 security group. SSH access on port 22 was restricted to my IP address, while HTTP traffic on port 80 was allowed from anywhere so the API could be publicly accessible.&lt;br&gt;
Next, I launched the EC2 instance using an Ubuntu AMI and a free-tier eligible instance type. Once the instance was running, I connected to it via SSH using a key pair. This was my first reminder that servers are not “deploy-and-forget” resources; access control and security matter from the very beginning.&lt;br&gt;
After connecting, I installed Node.js (LTS) and npm on the server. This allowed the backend to run in the cloud exactly as it did on my local machine.&lt;br&gt;
I then copied my backend project to the EC2 instance, installed the dependencies using npm install, and verified that the API worked by running node server.js directly on the server.&lt;br&gt;
Of course, running a Node application this way only works while the SSH session is active. To solve this, I introduced PM2, a process manager for Node.js applications. PM2 runs the application in the background, restarts it automatically if it crashes, keeps it running after SSH disconnects, and ensures it restarts on server reboot. This transformed my backend from a fragile process into a persistent service.&lt;br&gt;
Since my Node.js app listens on port 3000, I didn’t want to expose it directly to the public internet. I installed Nginx and configured it as a reverse proxy so that all public traffic comes in on port 80, while requests to /api/* are forwarded internally to the Node application. This step introduced me to real-world request routing and proxy behavior, including fixing a classic trailing-slash issue in the proxy_pass configuration.&lt;br&gt;
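&lt;p&gt;For reference, a minimal Nginx server block for this kind of setup might look like the sketch below. The upstream port matches the article; everything else is illustrative. The matching trailing slashes on location and proxy_pass are exactly where the classic bug hides:&lt;/p&gt;

```nginx
server {
    listen 80;
    server_name _;

    # Forward /api/ traffic to the Node app on port 3000.
    # The trailing slashes must agree: "location /api/" paired with
    # "proxy_pass http://127.0.0.1:3000/api/;" preserves the /api prefix.
    location /api/ {
        proxy_pass http://127.0.0.1:3000/api/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```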
&lt;strong&gt;Testing End-to-End&lt;/strong&gt;&lt;br&gt;
Once everything was wired together, I tested the setup end-to-end. I used curl directly from the server, inspected requests in the browser’s Network tab, and submitted the contact form from the live portfolio itself.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F188lyf0o96klr9ss6hje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F188lyf0o96klr9ss6hje.png" alt=" " width="800" height="343"&gt;&lt;/a&gt;&lt;br&gt;
Seeing consistent 200 OK responses from a cloud-hosted backend felt very different from testing locally and made the entire system feel real.&lt;br&gt;
&lt;strong&gt;What I Learned&lt;/strong&gt;&lt;br&gt;
This project helped me understand how backend services actually run on cloud servers, why process managers like PM2 are essential, and how Nginx fits into real production deployments. It also clarified the difference between simply running code and operating a service that needs to stay online.&lt;br&gt;
More importantly, it gave me confidence working with Linux, AWS EC2, Node.js backends, Nginx, and production-style workflows. It also reinforced why managed services exist and how much complexity they quietly remove.&lt;br&gt;
And that was exactly the goal.&lt;br&gt;
If you’re early in your cloud journey and curious about what happens under the hood, I highly recommend trying something similar.&lt;/p&gt;

&lt;h1&gt;
  #AWS #Cloud #Node #Backend #Learninginpublic #WomeninTech
&lt;/h1&gt;

</description>
      <category>devops</category>
      <category>aws</category>
      <category>node</category>
      <category>tutorial</category>
    </item>
    <item>
<title>CloudFront + S3 Tutorial: How I Hosted My Portfolio Securely on AWS</title>
      <dc:creator>Naomi Ansah</dc:creator>
      <pubDate>Sat, 06 Dec 2025 16:30:21 +0000</pubDate>
      <link>https://dev.to/naomi_ansah_d792faf7a1276/cloudfront-s3-tutorial-how-i-hosted-my-portfolio-securely-on-aws-2lhi</link>
      <guid>https://dev.to/naomi_ansah_d792faf7a1276/cloudfront-s3-tutorial-how-i-hosted-my-portfolio-securely-on-aws-2lhi</guid>
      <description>&lt;p&gt;I recently deployed my developer portfolio on AWS using Amazon S3 and CloudFront without making the bucket public!&lt;/p&gt;

&lt;p&gt;Most tutorials have you turn off Block Public Access and make the S3 bucket public, but I wanted a secure, production-ready setup. So instead, I kept S3 private and used CloudFront as the only public entry point.&lt;/p&gt;

&lt;p&gt;Here’s the exact process I followed, step by step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS account (Free Tier works)&lt;/li&gt;
&lt;li&gt;Static website build (React/Vite or plain HTML/CSS/JS)&lt;/li&gt;
&lt;li&gt;Basic familiarity with the AWS Console&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Build &amp;amp; Upload Portfolio Files to S3&lt;br&gt;
Before uploading, I created an optimized production build:&lt;br&gt;
For React/Vite:&lt;br&gt;
npm run build&lt;br&gt;
This created a dist/build folder containing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;index.html&lt;/li&gt;
&lt;li&gt;vite&lt;/li&gt;
&lt;li&gt;assets (folder)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This build output is what should be deployed, NOT your source code.&lt;br&gt;
Then:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the S3 Console&lt;/li&gt;
&lt;li&gt;Create a bucket&lt;/li&gt;
&lt;li&gt;Keep "Block Public Access" ON&lt;/li&gt;
&lt;li&gt;Upload the dist/build files into the bucket root&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Enable Static Website Hosting (Optional)&lt;br&gt;
With the private-bucket setup, CloudFront fetches files from the bucket directly, so this step is optional.&lt;br&gt;
Go to the bucket Properties&lt;br&gt;
Scroll to Static Website Hosting&lt;br&gt;
Enable it&lt;br&gt;
Set the index document to index.html&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3&lt;/strong&gt; : Create CloudFront Distribution&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the CloudFront Console&lt;/li&gt;
&lt;li&gt;Click Create Distribution&lt;/li&gt;
&lt;li&gt;Select your S3 bucket as the origin&lt;/li&gt;
&lt;li&gt;For Origin Access, choose Origin Access Control (OAC)&lt;/li&gt;
&lt;li&gt;Let CloudFront generate the bucket policy&lt;/li&gt;
&lt;li&gt;Apply it to your bucket&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This keeps your S3 bucket private and protected, while CloudFront handles public routing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: Restrict Direct S3 Access&lt;br&gt;
CloudFront gives you a bucket policy to paste into your bucket permissions.&lt;br&gt;
 This ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 is not publicly readable&lt;/li&gt;
&lt;li&gt;Only CloudFront can serve content&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the secure approach, unlike making the bucket public.&lt;/p&gt;
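&lt;p&gt;For reference, the OAC bucket policy that CloudFront generates looks roughly like this. The capitalized placeholders are illustrative; always paste the exact policy from the console:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipalReadOnly",
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::YOUR_ACCOUNT_ID:distribution/YOUR_DISTRIBUTION_ID"
        }
      }
    }
  ]
}
```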

&lt;p&gt;&lt;strong&gt;Step 5&lt;/strong&gt;: Set Default Root Object&lt;br&gt;
In CloudFront distribution settings:&lt;br&gt;
Default Root Object: index.html&lt;/p&gt;

&lt;p&gt;This makes the portfolio load without typing index.html.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 6&lt;/strong&gt;: Wait for Deployment &amp;amp; Test&lt;br&gt;
It takes a few minutes for CloudFront to deploy.&lt;br&gt;
Then you get a domain like:&lt;br&gt;
d37dnhohvgt3x2.cloudfront.net&lt;br&gt;
I opened mine and…&lt;br&gt;
  Portfolio LIVE &amp;amp; SECURE!&lt;/p&gt;

&lt;p&gt;I love this setup because it keeps your S3 bucket private, follows production best practices, stays free-tier friendly, and lets CloudFront boost performance globally.&lt;br&gt;
If you're deploying a portfolio or static site, S3 + CloudFront is a powerful and scalable solution.&lt;br&gt;
Feel free to drop questions 😀 I’m actively learning Cloud Engineering and happy to help!&lt;/p&gt;

&lt;h1&gt;
  #aws #cloud #cloudfront #s3 #webdev #portfolio #tutorial
&lt;/h1&gt;

&lt;p&gt;If you found this helpful, please leave a ❤️ or bookmark it!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
