<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kaustav Dey</title>
    <description>The latest articles on DEV Community by Kaustav Dey (@kaustav_dey_).</description>
    <link>https://dev.to/kaustav_dey_</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3079186%2F25017b2c-085e-42b8-9890-84de99aea8a3.webp</url>
      <title>DEV Community: Kaustav Dey</title>
      <link>https://dev.to/kaustav_dey_</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kaustav_dey_"/>
    <language>en</language>
    <item>
      <title>If you're reading this, then just like the post, it helps</title>
      <dc:creator>Kaustav Dey</dc:creator>
      <pubDate>Wed, 11 Jun 2025 10:29:13 +0000</pubDate>
      <link>https://dev.to/kaustav_dey_/if-youre-reading-this-then-just-like-the-post-it-helps-l68</link>
      <guid>https://dev.to/kaustav_dey_/if-youre-reading-this-then-just-like-the-post-it-helps-l68</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/kaustav_dey_" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3079186%2F25017b2c-085e-42b8-9890-84de99aea8a3.webp" alt="kaustav_dey_"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/kaustav_dey_/built-a-serverless-image-optimizer-with-aws-lambda-s3-free-tier-safe-44of" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;🚀 Built a Serverless Image Optimizer with AWS Lambda + S3 (Free Tier Safe)&lt;/h2&gt;
      &lt;h3&gt;Kaustav Dey ・ Jun 11&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#aws&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#serverless&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#lambda&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#python&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
      <category>python</category>
    </item>
    <item>
      <title>🚀 Built a Serverless Image Optimizer with AWS Lambda + S3 (Free Tier Safe)</title>
      <dc:creator>Kaustav Dey</dc:creator>
      <pubDate>Wed, 11 Jun 2025 10:12:47 +0000</pubDate>
      <link>https://dev.to/kaustav_dey_/built-a-serverless-image-optimizer-with-aws-lambda-s3-free-tier-safe-44of</link>
      <guid>https://dev.to/kaustav_dey_/built-a-serverless-image-optimizer-with-aws-lambda-s3-free-tier-safe-44of</guid>
      <description>&lt;p&gt;TL;DR: Upload an image to S3 → Lambda compresses it → Optimized version appears.&lt;br&gt;
Fully serverless, beginner-friendly, and no surprise AWS bills.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;👋 Introduction&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Image compression is a common but repetitive task — especially for indie developers or small teams deploying images to the cloud. So I decided to build an automated image optimizer using AWS Lambda and S3.&lt;/p&gt;

&lt;p&gt;This project runs 100% on the AWS Free Tier, and I’ve included teardown scripts to help you avoid any unwanted charges.&lt;/p&gt;

&lt;p&gt;Whether you’re learning AWS, building portfolio projects, or automating real-world workflows — this one’s for you.&lt;/p&gt;

&lt;h2&gt;
  
  
  ⚙️ What I Built
&lt;/h2&gt;

&lt;p&gt;Whenever you upload an image to a specific S3 bucket, it automatically triggers a Lambda function which:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reads the image&lt;/li&gt;
&lt;li&gt;Compresses it using Pillow (Python image library)&lt;/li&gt;
&lt;li&gt;Saves an optimized version back to the bucket with the prefix &lt;code&gt;optimized-&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is all done in a serverless fashion — no EC2, no containers, no headache.&lt;/p&gt;
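The compression step itself is short. Here's a minimal sketch of what such a Pillow-based routine can look like (the quality setting and JPEG-only output are my assumptions here; the handler in the repo may differ):

```python
from io import BytesIO

from PIL import Image  # Pillow


def compress_image(data: bytes, quality: int = 70) -> bytes:
    """Re-encode raw image bytes as an optimized JPEG (quality 70 is an assumed default)."""
    img = Image.open(BytesIO(data))
    img = img.convert("RGB")  # JPEG has no alpha channel
    out = BytesIO()
    img.save(out, format="JPEG", quality=quality, optimize=True)
    return out.getvalue()
```

In the Lambda handler, a function like this would sit between the S3 client's `get_object` and `put_object` calls.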

&lt;h2&gt;
  
  
  🛠️ Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda — Python 3.11 function to handle the image compression&lt;/li&gt;
&lt;li&gt;Amazon S3 — Triggers Lambda on image uploads&lt;/li&gt;
&lt;li&gt;IAM Roles — To grant minimal and secure permissions&lt;/li&gt;
&lt;li&gt;Pillow — For optimizing images in Python&lt;/li&gt;
&lt;li&gt;Shell Scripts — To deploy and clean up resources easily&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  📁 Project Structure
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;serverless-image-optimizer/
├── lambda/
│   └── handler.py             # Main Lambda code
├── deploy/
│   ├── create_resources.sh    # Setup script
│   ├── delete_resources.sh    # Teardown script
│   └── trust-policy.json      # IAM trust policy
├── test-images/               # Sample images
├── requirements.txt           # Pillow dependency
└── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🚀 How It Works
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Upload an image (.jpg, .png) to your S3 bucket.&lt;/li&gt;
&lt;li&gt;Lambda automatically gets triggered.&lt;/li&gt;
&lt;li&gt;It compresses and saves the new image as &lt;code&gt;optimized-filename.jpg&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Simple. Fast. Scalable. Free-tier friendly.&lt;/li&gt;
&lt;/ul&gt;
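One detail worth calling out: because the optimized copy lands in the same bucket that triggers the function, the handler has to skip objects it already produced, or every output would re-trigger the Lambda in an endless loop. A minimal sketch of that naming guard (names here are illustrative, not the repo's exact code):

```python
OPTIMIZED_PREFIX = "optimized-"


def output_key(source_key: str):
    """Return the destination S3 key, or None if the object is already optimized."""
    folder, _, filename = source_key.rpartition("/")
    if filename.startswith(OPTIMIZED_PREFIX):
        return None  # our own output: skip it to avoid an infinite trigger loop
    name = OPTIMIZED_PREFIX + filename
    return f"{folder}/{name}" if folder else name
```

An alternative is to write outputs to a separate bucket or prefix and scope the S3 event notification so it never fires on them.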

&lt;h2&gt;
  
  
  🧯 Avoiding AWS Charges
&lt;/h2&gt;

&lt;p&gt;Cloud costs can creep up if you’re not careful. This project is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ 100% AWS Free Tier compatible&lt;/li&gt;
&lt;li&gt;✅ No long-running services (like EC2)&lt;/li&gt;
&lt;li&gt;✅ Includes a delete_resources.sh teardown script&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Run this when you’re done:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;bash deploy/delete_resources.sh&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📂 GitHub Repo
&lt;/h2&gt;

&lt;p&gt;👉 &lt;a href="https://github.com/KaustavDey357/serverless-image-optimizer" rel="noopener noreferrer"&gt;View Full Code on GitHub&lt;/a&gt;&lt;br&gt;
Includes full setup, teardown, and README documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  📈 Who Should Use or Learn from This?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Cloud beginners looking to practice Lambda + S3 triggers&lt;/li&gt;
&lt;li&gt;Indie hackers automating small tasks&lt;/li&gt;
&lt;li&gt;DevOps learners building their portfolio&lt;/li&gt;
&lt;li&gt;Freelancers wanting cost-safe infrastructure automation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🧠 What I Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;How to securely connect Lambda to S3&lt;/li&gt;
&lt;li&gt;Optimizing Lambda deployment with zipped packages&lt;/li&gt;
&lt;li&gt;Using IAM roles with least-privilege principle&lt;/li&gt;
&lt;li&gt;Keeping projects within the Free Tier to avoid billing issues&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  💬 Feedback?
&lt;/h2&gt;

&lt;p&gt;Have suggestions? Want to collaborate?&lt;br&gt;
I’m open to feedback, pull requests, or just geeking out over serverless ideas!&lt;/p&gt;

&lt;h2&gt;
  
  
  📢 Let’s Connect
&lt;/h2&gt;

&lt;p&gt;If you’re a solo developer, early-stage startup, or indie hacker looking to automate your AWS infra, I’m offering free 30-min consultations.&lt;br&gt;
DM me on LinkedIn or leave a comment!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
      <category>python</category>
    </item>
    <item>
      <title>How I Provisioned Scalable AWS Infrastructure with Terraform and Load Balancer</title>
      <dc:creator>Kaustav Dey</dc:creator>
      <pubDate>Thu, 15 May 2025 09:22:04 +0000</pubDate>
      <link>https://dev.to/kaustav_dey_/how-i-provisioned-scalable-aws-infrastructure-with-terraform-and-load-balancer-4n5g</link>
      <guid>https://dev.to/kaustav_dey_/how-i-provisioned-scalable-aws-infrastructure-with-terraform-and-load-balancer-4n5g</guid>
      <description>&lt;h2&gt;
  
  
  &lt;a href="https://github.com/KaustavDey357/Terraform-AWS-EC2-Load-Balancer-Deployment" rel="noopener noreferrer"&gt;The Git Repo&lt;/a&gt; 
&lt;/h2&gt;

&lt;p&gt;As I deepened my understanding of DevOps and cloud fundamentals, I wanted to get hands-on with provisioning infrastructure the right way: using Infrastructure as Code (IaC). In this post, I’ll walk through how I built a reusable Terraform project to provision an EC2 instance, attach a load balancer, configure security groups, and get it all running on AWS.&lt;/p&gt;

&lt;p&gt;This project was part of my journey in creating modular and production-oriented DevOps blueprints.&lt;/p&gt;




&lt;h2&gt;
  
  
  🧰 Tools &amp;amp; Tech
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; for IaC&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS EC2&lt;/strong&gt; for compute resources&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Application Load Balancer (ALB)&lt;/strong&gt; for routing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Groups&lt;/strong&gt; for access control&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Remote backend&lt;/strong&gt; support with Terraform state files&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  📦 Project Structure
&lt;/h2&gt;

&lt;p&gt;I broke the configuration into reusable, modular Terraform components to make the codebase scalable and production-ready. Here's how the structure looked:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;.
├── main.tf           # Orchestrator: may call modules or glue everything together
├── variables.tf      # All variable declarations with types and descriptions
├── outputs.tf        # Output values (e.g., IPs, DNS names, ARNs)
├── vpc.tf            # VPC, subnets, internet gateway, etc.
├── ec2.tf            # EC2 instance(s), AMIs, key pairs, EBS volumes
├── alb.tf            # Application Load Balancer, listeners, target groups
├── security.tf       # Security groups, network ACLs, firewall rules
├── README.md         # Project documentation
├── .gitignore        # Files to exclude from Git (e.g., `.terraform`, `*.tfstate`)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each module encapsulates a piece of the infrastructure (e.g., EC2, security group), keeping things clean and reusable.&lt;/p&gt;




&lt;h2&gt;
  
  
  🏗️ What It Provisions
&lt;/h2&gt;

&lt;p&gt;When executed, the Terraform code provisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;public subnet&lt;/strong&gt; in a selected region&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;EC2 instance&lt;/strong&gt; with user data for bootstrapping&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;Security Group&lt;/strong&gt; that allows inbound traffic on ports 22 and 80&lt;/li&gt;
&lt;li&gt;An &lt;strong&gt;Application Load Balancer (ALB)&lt;/strong&gt; that distributes HTTP traffic&lt;/li&gt;
&lt;li&gt;Target group + listener configuration for the EC2 instance&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧪 How to Use It
&lt;/h2&gt;

&lt;p&gt;Clone the repo and run the following inside the root directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform plan
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Make sure your AWS credentials are set in your environment or shared credentials file.&lt;/p&gt;

&lt;p&gt;Once applied, the EC2 instance and ALB will be up and running. You’ll get the public DNS of the load balancer in the Terraform output.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✅ Outcome
&lt;/h2&gt;

&lt;p&gt;With a single command, I spun up a complete production-grade architecture using Terraform. It’s scalable, reusable, and can easily be extended to include databases, autoscaling, and monitoring.&lt;/p&gt;

&lt;p&gt;Here’s the architecture diagram:&lt;/p&gt;

&lt;p&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgcgqy4on5y0tgh49mic2.png" alt="Architecture diagram" width="800" height="800"&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  📌 Key Learnings
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Writing &lt;strong&gt;modular Terraform code&lt;/strong&gt; is essential for maintainability&lt;/li&gt;
&lt;li&gt;ALBs are ideal for HTTP/HTTPS workloads with flexible routing&lt;/li&gt;
&lt;li&gt;Outputs and variables improve reusability and flexibility&lt;/li&gt;
&lt;li&gt;Infrastructure automation saves time and reduces errors&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🧠 Next Steps
&lt;/h2&gt;

&lt;p&gt;I’m planning to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add support for &lt;strong&gt;private subnets&lt;/strong&gt; and NAT gateways&lt;/li&gt;
&lt;li&gt;Integrate with &lt;strong&gt;RDS&lt;/strong&gt; or &lt;strong&gt;DynamoDB&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Add &lt;strong&gt;Terraform Cloud&lt;/strong&gt; remote backend&lt;/li&gt;
&lt;li&gt;Extend this into a full production deployment pipeline&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔗 Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Repo: &lt;a href="https://github.com/KaustavDey357/Terraform-AWS-EC2-Load-Balancer-Deployment" rel="noopener noreferrer"&gt;Terraform AWS EC2 Load Balancer Deployment&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ☎️ Let's Connect
&lt;/h2&gt;

&lt;p&gt;If you're building something cloud-native or want help setting up secure AWS infrastructure, I’d love to chat.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Email:&lt;/strong&gt; &lt;a href="mailto:deykaustav357@gmail.com"&gt;deykaustav357@gmail.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedIn:&lt;/strong&gt; &lt;a href="https://www.linkedin.com/in/kaustav-dey-107593244" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/kaustav-dey-107593244&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portfolio:&lt;/strong&gt; &lt;a href="https://kaustavdey357.github.io/" rel="noopener noreferrer"&gt;Kaustav Dey
DevOps &amp;amp; Cloud Specialist | AWS • Docker • Terraform • CI/CD&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>devops</category>
      <category>infrastructureascode</category>
    </item>
    <item>
      <title>How I Deployed a Dockerized App to AWS Using Terraform and GitHub Actions</title>
      <dc:creator>Kaustav Dey</dc:creator>
      <pubDate>Thu, 15 May 2025 08:55:36 +0000</pubDate>
      <link>https://dev.to/kaustav_dey_/how-i-deployed-a-dockerized-app-to-aws-using-terraform-and-github-actions-3nhg</link>
      <guid>https://dev.to/kaustav_dey_/how-i-deployed-a-dockerized-app-to-aws-using-terraform-and-github-actions-3nhg</guid>
      <description>&lt;p&gt;&lt;a href="https://github.com/KaustavDey357/ci-cd-pipeline-aws" rel="noopener noreferrer"&gt;The Git Repo &lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a DevOps and cloud enthusiast, I wanted to go beyond theoretical knowledge and create something real. In this post, I’ll walk through how I provisioned infrastructure on AWS using Terraform and deployed a Dockerized app using GitHub Actions CI/CD. This project helped me apply core DevOps practices and build a production-ready pipeline from scratch.&lt;/p&gt;




&lt;h2&gt;
  
  
  🛠️ Stack Overview
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; for infrastructure provisioning
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EC2&lt;/strong&gt; for hosting the app
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt; to containerize the application
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions&lt;/strong&gt; for CI/CD
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SCP + SSH&lt;/strong&gt; for deployment automation
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  1️⃣ Infrastructure as Code with Terraform
&lt;/h2&gt;

&lt;p&gt;I started by defining all infrastructure in Terraform to make deployments consistent and version-controlled. The configuration included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An &lt;strong&gt;EC2 instance&lt;/strong&gt; with a public IP
&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;security group&lt;/strong&gt; allowing SSH (22), HTTP (80), and app port (4000)
&lt;/li&gt;
&lt;li&gt;Optional &lt;strong&gt;Elastic Load Balancer&lt;/strong&gt; for scalability
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With reusable modules, I was able to spin up the entire environment with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;terraform init
terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  2️⃣ Dockerizing the App
&lt;/h2&gt;

&lt;p&gt;I deployed a simple Node.js app and containerized it using the following &lt;code&gt;Dockerfile&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; node:18-alpine&lt;/span&gt;
&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; . .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt;
&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 4000&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["node", "app.js"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Containerization ensured consistent behavior across dev and prod.&lt;/p&gt;




&lt;h2&gt;
  
  
  3️⃣ CI/CD with GitHub Actions
&lt;/h2&gt;

&lt;p&gt;To automate deployment, I set up a GitHub Actions pipeline triggered on every push to &lt;code&gt;main&lt;/code&gt;. It:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSHs into the EC2 instance&lt;/li&gt;
&lt;li&gt;Sends code using &lt;code&gt;scp&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Builds and runs a Docker container remotely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s a simplified snippet of the &lt;code&gt;deploy.yml&lt;/code&gt; workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: Deploy to EC2

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up SSH
        run: |
          echo "${{ secrets.EC2_SSH_KEY }}" &amp;gt; key.pem
          chmod 600 key.pem

      - name: Deploy App
        run: |
          scp -i key.pem -r . ubuntu@${{ secrets.EC2_HOST }}:/home/ubuntu/app
          ssh -i key.pem ubuntu@${{ secrets.EC2_HOST }} '
            cd app &amp;amp;&amp;amp;
            docker build -t myapp . &amp;amp;&amp;amp;
            docker stop myapp || true &amp;amp;&amp;amp;
            docker rm myapp || true &amp;amp;&amp;amp;
            docker run -d -p 4000:4000 --name myapp myapp
          '
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All secrets like the SSH key and host IP are stored securely in GitHub.&lt;/p&gt;




&lt;h2&gt;
  
  
  ✅ Results
&lt;/h2&gt;

&lt;p&gt;Once deployed, the app was accessible at:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;http://&amp;lt;EC2_PUBLIC_IP&amp;gt;:4000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each push to GitHub automatically redeploys the latest code, saving time and ensuring consistency.&lt;/p&gt;




&lt;h2&gt;
  
  
  💡 Lessons Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Terraform&lt;/strong&gt; makes infra reproducible and scalable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker&lt;/strong&gt; simplifies app deployment&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions&lt;/strong&gt; is powerful and integrates directly with your codebase&lt;/li&gt;
&lt;li&gt;Keeping sensitive values in &lt;code&gt;.env&lt;/code&gt; and &lt;code&gt;.env.example&lt;/code&gt; is best practice&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔭 What's Next?
&lt;/h2&gt;

&lt;p&gt;I’m planning to explore:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Monitoring with CloudWatch&lt;/li&gt;
&lt;li&gt;Auto-scaling groups&lt;/li&gt;
&lt;li&gt;Using ECS or Fargate for container orchestration&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  🔗 Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;GitHub Repo: &lt;a href="https://github.com/KaustavDey357/ci-cd-pipeline-aws" rel="noopener noreferrer"&gt;ci-cd-pipeline-aws&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Live Demo: &lt;code&gt;http://&amp;lt;your-ec2-ip&amp;gt;:4000&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  ☎️ Let's Connect
&lt;/h2&gt;

&lt;p&gt;If you're a solo dev or early-stage founder looking to deploy to AWS with CI/CD and infrastructure automation, I can help.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Email:&lt;/strong&gt; &lt;a href="mailto:deykaustav357@gmail.com"&gt;deykaustav357@gmail.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LinkedIn:&lt;/strong&gt; &lt;a href="https://www.linkedin.com/in/kaustav-dey-107593244" rel="noopener noreferrer"&gt;https://www.linkedin.com/in/kaustav-dey-107593244&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Portfolio Website:&lt;/strong&gt; &lt;a href="https://kaustavdey357.github.io/" rel="noopener noreferrer"&gt;Kaustav Dey, DevOps &amp;amp; Cloud Specialist | AWS • Docker • Terraform • CI/CD&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>devops</category>
      <category>terraform</category>
      <category>cicd</category>
    </item>
    <item>
      <title>🖥️ No IDE? No Problem: How I Started an EC2 Instance Using AWS CloudShell and SSH’d in From the Terminal</title>
      <dc:creator>Kaustav Dey</dc:creator>
      <pubDate>Mon, 05 May 2025 07:00:49 +0000</pubDate>
      <link>https://dev.to/kaustav_dey_/no-ide-no-problem-how-i-started-an-ec2-instance-using-aws-cloudshell-and-sshd-in-from-3e39</link>
      <guid>https://dev.to/kaustav_dey_/no-ide-no-problem-how-i-started-an-ec2-instance-using-aws-cloudshell-and-sshd-in-from-3e39</guid>
      <description>&lt;p&gt;There’s something empowering about spinning up your own virtual machine in the cloud — especially when you do it entirely from the command line.&lt;/p&gt;

&lt;p&gt;No clicking through tabs. No IDE. No download-this, install-that.&lt;/p&gt;

&lt;p&gt;Just pure terminal.&lt;/p&gt;

&lt;p&gt;Just you and the cloud.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;🌩️ Why I Tried AWS CloudShell&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;As a cloud enthusiast (and someone who occasionally breaks things on my own machine), I decided to explore AWS CloudShell — a browser-based terminal environment that runs right inside the AWS Management Console. No need to install the AWS CLI or generate SSH keys on my laptop. It’s preloaded, persistent, and honestly, kind of amazing.&lt;/p&gt;

&lt;p&gt;I’d previously used EC2 through the UI, but this time, I challenged myself:&lt;/p&gt;

&lt;p&gt;“Can I launch and connect to an EC2 instance without ever leaving the terminal?”&lt;/p&gt;

&lt;p&gt;Spoiler: Yes. And here’s how I did it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚙️ Step 1: Opened CloudShell&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;From the AWS Console, I clicked on the little &lt;strong&gt;terminal icon&lt;/strong&gt; in the top-right — that’s CloudShell.&lt;/p&gt;

&lt;p&gt;Within seconds, I had a shell prompt, running in a pre-configured Amazon Linux environment, with full AWS CLI access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔐 Step 2: Created a Key Pair&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To connect via SSH later, I needed a key pair. From CloudShell, I ran:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-key-pair
  --key-name my-key 
  --query 'KeyMaterial' 
  --output text &amp;gt; my-key.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I set the right permissions:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;chmod 400 my-key.pem&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;✅ This generated a private key and saved it locally in CloudShell. No downloading needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🚀 Step 3: Launched the EC2 Instance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I picked a simple Amazon Linux AMI and started a t2.micro instance like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \  # Replace with a valid AMI ID in your region
  --count 1 \
  --instance-type t2.micro \
  --key-name my-key \
  --security-groups default
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Want a specific region? Add &lt;code&gt;--region us-east-1&lt;/code&gt; or whatever suits your setup.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I noted the returned &lt;code&gt;InstanceId&lt;/code&gt; because I’d need it soon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔍 Step 4: Waited for the Instance and Got the Public IP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once the instance was running, I fetched its public IP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 describe-instances
  --filters "Name=instance-state-name,Values=running" 
  --query "Reservations[*].Instances[*].PublicIpAddress" 
  --output text
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This gave me something like:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;3.145.78.123&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;That’s my gateway into the cloud machine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔗 Step 5: SSH’d Into My Instance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Now came the satisfying part — actually connecting. Still in CloudShell, I ran:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ssh -i my-key.pem ec2-user@3.145.78.123&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And boom 💥 — I was in.&lt;/p&gt;

&lt;p&gt;A full-blown Linux VM, all mine, running in the cloud, controlled 100% from the browser-based terminal. No local setup. No downloads. Just me and my instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧼 Bonus: Cleaning Up&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When I was done experimenting, I cleaned up to avoid extra costs:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ec2 terminate-instances --instance-ids i-0abcd1234efgh5678&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;And optionally deleted the key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 delete-key-pair --key-name my-key
rm my-key.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;🧠 What I Learned&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CloudShell is perfect for CLI-based workflows, even on a Chromebook or borrowed laptop.&lt;/li&gt;
&lt;li&gt;You don’t need a full IDE or even a configured local environment to manage cloud infrastructure.&lt;/li&gt;
&lt;li&gt;The AWS CLI is incredibly powerful (and feels like a cheat code once you get comfortable).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;🎯 Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Starting an EC2 instance from the terminal might seem intimidating — but it’s actually elegant. Clean. Fast. And it gives you a deeper sense of control than the AWS GUI ever could.&lt;/p&gt;

&lt;p&gt;If you’ve been hesitant to touch the CLI, I highly recommend giving CloudShell a try. It’s like training wheels for power users.&lt;/p&gt;

&lt;p&gt;And once you SSH into your first cloud machine using nothing but the terminal?&lt;br&gt;
 You’ll never forget that feeling.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>cloudcomputing</category>
      <category>beginners</category>
    </item>
    <item>
      <title>🚀 From S3 to Speed: How I Supercharged My Static Website with AWS CloudFront</title>
      <dc:creator>Kaustav Dey</dc:creator>
      <pubDate>Sun, 04 May 2025 10:14:15 +0000</pubDate>
      <link>https://dev.to/kaustav_dey_/from-s3-to-speed-how-i-supercharged-my-static-website-with-aws-cloudfront-134</link>
      <guid>https://dev.to/kaustav_dey_/from-s3-to-speed-how-i-supercharged-my-static-website-with-aws-cloudfront-134</guid>
      <description>&lt;p&gt;In one of my previous posts, I walked you through how I hosted a basic static website on Amazon S3 using just two humble files: an index.html and an image. It was simple, effective, and honestly kind of magical seeing my HTML come to life on the web.&lt;br&gt;
But as with most things in tech, I got curious - what if I could make it faster? More secure? Professional-grade?&lt;br&gt;
That curiosity led me to CloudFront, AWS's content delivery network (CDN). And in this post, I'll show you how I transformed that basic site into a globally distributed, supercharged web experience - and learned a ton in the process.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;🧠 What Is CloudFront Anyway?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;CloudFront is like a worldwide relay race for your content. Instead of loading your site from one location (like S3 in a specific region), CloudFront caches it across edge locations around the world.&lt;br&gt;
So when someone visits your site, they're served content from the nearest AWS data center - not halfway across the globe. The result? Faster load times and a more responsive experience.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;💡 Why I Switched from Just S3 to CloudFront&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;When I first hosted my site on S3, I was thrilled to get it live. But I noticed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It was a bit slow to load when I shared it with friends abroad.&lt;/li&gt;
&lt;li&gt;There was no HTTPS support out-of-the-box for the S3 static site URL.&lt;/li&gt;
&lt;li&gt;I wanted my site to feel more real - like a proper website with a custom domain and secure connection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CloudFront fixed all of that.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;🛠️ How I Did It - Step-by-Step&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Uploaded My Site to S3&lt;/strong&gt;&lt;br&gt;
If you missed my first post - I had a simple static website with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;index.html: a clean landing page.&lt;/li&gt;
&lt;li&gt;output.png: just a single image for now.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I uploaded both files to an S3 bucket and made them publicly readable.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpr7jq3syk103022cs9jm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpr7jq3syk103022cs9jm.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Created a CloudFront Distribution&lt;/strong&gt;&lt;br&gt;
Now for the fun part.&lt;br&gt;
I went to the CloudFront console, clicked "Create Distribution," and selected Web as the delivery method.&lt;/p&gt;

&lt;p&gt;Here's how I set it up:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Origin Domain:&lt;/strong&gt; I selected my S3 bucket (make sure it's the static website hosting endpoint, not just the bucket URL).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5pkfbjq8hnfkq8f8uyo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5pkfbjq8hnfkq8f8uyo.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Viewer Protocol Policy:&lt;/strong&gt; I set it to Redirect HTTP to HTTPS to ensure secure access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Default Root Object:&lt;/strong&gt; I entered index.html - this tells CloudFront what to load by default.&lt;/li&gt;
&lt;/ul&gt;
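
&lt;p&gt;For reference, the settings above map to fields like these in a CloudFront distribution config - a trimmed, illustrative fragment only (the real config has many more required fields, and the origin domain is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Origins": [
    { "DomainName": "your-bucket.s3-website-us-east-1.amazonaws.com" }
  ],
  "DefaultCacheBehavior": { "ViewerProtocolPolicy": "redirect-to-https" },
  "DefaultRootObject": "index.html"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;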

&lt;p&gt;&lt;strong&gt;3. (Optional) Set Up a Custom Domain and SSL&lt;/strong&gt;&lt;br&gt;
If you want your site to feel real, you can buy a domain and configure it with Route 53, AWS's DNS service.&lt;br&gt;
Using AWS Certificate Manager (ACM), you can request an SSL certificate for your domain and then attach it to your CloudFront distribution so that your site loads over HTTPS with a shiny padlock 🔒. I didn't have a domain, so I skipped this step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Waited for Propagation (and Grabbed Coffee)&lt;/strong&gt;&lt;br&gt;
CloudFront takes a few minutes to deploy a distribution - in my case, about 10 minutes. I used the time to check my GitHub stars and pretend I wasn't hitting refresh every 30 seconds.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljfianb57e0dtiqbxb4a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljfianb57e0dtiqbxb4a.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;🌐 The Moment of Truth&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Once deployed, I visited my CloudFront URL (or my custom domain), and there it was - my static site, loading faster than ever, over HTTPS, looking legit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc9co4d6nplhmft8rzmw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frc9co4d6nplhmft8rzmw.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It felt like leveling up. No longer just a student experiment or weekend project - it looked like something real companies use.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;🧼 Bonus: Invalidation &amp;amp; Updates&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;One thing I learned the hard way: CloudFront caches everything. So when I updated my HTML or image file, the changes didn't appear instantly.&lt;br&gt;
To fix that, I used &lt;strong&gt;Invalidations&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I went to the distribution, clicked Invalidations, and added /* to clear the cache.&lt;/li&gt;
&lt;li&gt;Next time I made updates, I used versioning (e.g., img-v2.png) to avoid repeated invalidations.&lt;/li&gt;
&lt;/ul&gt;
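
&lt;p&gt;The same cache clear can also be done from the AWS CLI (the distribution ID below is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws cloudfront create-invalidation --distribution-id YOUR_DIST_ID --paths "/*"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;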

&lt;h2&gt;
  
  
  &lt;strong&gt;💸 Important Note&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;The first 1,000 invalidation paths per month are free&lt;/em&gt;&lt;/strong&gt;. After that, AWS charges per invalidation path - so don't overuse /* unless necessary.&lt;/li&gt;
&lt;li&gt;A smarter approach is to version your files (e.g., app-v2.js, style-v3.css) so you don't need to invalidate older files.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;🤓 What I Learned&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;HTTPS is essential for modern web apps (and CloudFront makes it easy).&lt;/li&gt;
&lt;li&gt;AWS has a learning curve, but once you get through the weeds, it's super powerful.&lt;/li&gt;
&lt;/ul&gt;





&lt;h2&gt;
  
  
  🎯 Final Thoughts
&lt;/h2&gt;

&lt;p&gt;
What started as a tiny index.html and an image has evolved into a full-blown global delivery setup with HTTPS and blazing performance. And honestly? It felt empowering.&lt;/p&gt;

&lt;p&gt;Whether you're building a personal site, a portfolio, or just experimenting - I highly recommend giving CloudFront a shot. You'll learn, improve your site's UX, and start thinking like a real-world web architect.&lt;/p&gt;

&lt;p&gt;And if I could figure it out with a two-file website… so can you.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>beginners</category>
      <category>cloud</category>
      <category>cloudfront</category>
    </item>
    <item>
      <title>🚀 How I Launched My First EC2 Instance on AWS (A Beginner's Guide)</title>
      <dc:creator>Kaustav Dey</dc:creator>
      <pubDate>Sun, 04 May 2025 10:03:19 +0000</pubDate>
      <link>https://dev.to/kaustav_dey_/how-i-launched-my-first-ec2-instance-on-aws-a-beginners-guide-1196</link>
      <guid>https://dev.to/kaustav_dey_/how-i-launched-my-first-ec2-instance-on-aws-a-beginners-guide-1196</guid>
      <description>&lt;p&gt;Cloud computing can feel intimidating at first - all the talk of regions, instances, VPCs, and security groups might seem like another language. But launching an EC2 instance on AWS? Surprisingly simple.&lt;br&gt;
In this post, I'll walk you through how I created a basic EC2 instance, what I learned along the way, and how you can do it too - even if you're using Windows with PuTTY to connect.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;🧠 What is an EC2 Instance?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Amazon EC2 (Elastic Compute Cloud) is a virtual server in Amazon's cloud. You can use it to host websites, run code, experiment with server setups, or even train ML models.&lt;br&gt;
Think of it like renting a computer in the cloud - you control everything about it.&lt;/p&gt;





&lt;h2&gt;
  
  
  🛠️ Step-by-Step: How I Did It
&lt;/h2&gt;

&lt;p&gt;
&lt;strong&gt;1. Logged into AWS Console&lt;/strong&gt;&lt;br&gt;
First, I signed in to the AWS Management Console. If you don't have an account, sign up at aws.amazon.com.&lt;br&gt;
&lt;strong&gt;2. Navigated to EC2 Dashboard&lt;/strong&gt;&lt;br&gt;
From the AWS Services menu, I selected EC2. This took me to the EC2 Dashboard, where I could manage instances, volumes, security groups, and more.&lt;br&gt;
&lt;strong&gt;3. Clicked "Launch Instance"&lt;/strong&gt;&lt;br&gt;
On the dashboard, there's a big blue button: Launch Instance. That's where it begins.&lt;br&gt;
&lt;strong&gt;4. Configured the Instance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's what I chose:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Name: my-first-ec2&lt;/li&gt;
&lt;li&gt;Amazon Machine Image (AMI): Amazon Linux 2 (free-tier eligible)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F444ytmcjkntivcw9xl05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F444ytmcjkntivcw9xl05.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Instance Type: t2.micro (1 vCPU, 1GB RAM – perfect for starters)&lt;/li&gt;
&lt;li&gt;Key Pair: I created a new key pair and downloaded the .pem file for SSH access.&lt;/li&gt;
&lt;li&gt;Network Settings: I allowed SSH (port 22) so I could log in remotely.&lt;/li&gt;
&lt;/ul&gt;
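
&lt;p&gt;For the CLI-curious, roughly the same launch can be sketched with the AWS CLI (the AMI, key, and security-group IDs below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --key-name my-key \
    --security-group-ids sg-xxxxxxxx \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my-first-ec2}]'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;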

&lt;p&gt;&lt;strong&gt;5. Launched It&lt;/strong&gt;&lt;br&gt;
Clicked "Launch Instance" and waited ~30 seconds. Boom - my first cloud server was live!&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;🔒 Setting Up Security Groups (Allowing Inbound Traffic)&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;One of the most important steps was configuring inbound traffic via a security group. A security group acts like a virtual firewall, controlling traffic to and from your instance.&lt;/p&gt;

&lt;p&gt;For inbound rules, I allowed &lt;em&gt;SSH (port 22)&lt;/em&gt; from &lt;em&gt;my IP address&lt;/em&gt; only so I could connect to the instance securely via PuTTY. I also enabled &lt;em&gt;HTTP (port 80)&lt;/em&gt; and &lt;em&gt;HTTPS (port 443)&lt;/em&gt; from anywhere (0.0.0.0/0) to allow future web traffic to reach the server, in case I want to host a website or run a web app. This setup ensures I can access the server securely while also keeping the door open for standard web traffic.&lt;/p&gt;
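&lt;p&gt;Roughly the same rules can be added via the AWS CLI (the security-group ID and your IP are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr YOUR_IP/32
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 443 --cidr 0.0.0.0/0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;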

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklksq3yblfeaqt7wcvwg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklksq3yblfeaqt7wcvwg.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;💡 Why These?&lt;/strong&gt;&lt;br&gt;
SSH lets me connect to the instance remotely.&lt;br&gt;
HTTP &amp;amp; HTTPS are needed if I plan to host any kind of web content or server.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;⚠️ Tip:&lt;/strong&gt; Restrict SSH access to your IP for security. Never leave port 22 open to the world unless absolutely necessary.&lt;/p&gt;


&lt;p&gt;&lt;strong&gt;🚪 Launched the Instance&lt;/strong&gt;&lt;br&gt;
Once the security group was configured, I clicked Launch Instance and within 30 seconds, I had my cloud server running.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;🔐 Connecting to the EC2 Instance on Windows using PuTTY&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Since I was using a Windows system, I connected to my instance using PuTTY, a free SSH client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's what I did:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Converted the .pem file to .ppk:&lt;/li&gt;
&lt;li&gt;Downloaded and opened PuTTYgen.&lt;/li&gt;
&lt;li&gt;Clicked Load, selected the .pem file.&lt;/li&gt;
&lt;li&gt;Clicked Save private key and saved the file as my-key.ppk.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Opened PuTTY and entered connection details:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Host Name (or IP address): ec2-user@&amp;lt;your-instance-public-DNS&amp;gt;&lt;/li&gt;
&lt;li&gt;In the left menu, under Connection &amp;gt; SSH &amp;gt; Auth, I browsed to the my-key.ppk file.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Connected:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Clicked Open and accepted the SSH security prompt.&lt;/li&gt;
&lt;li&gt;I was now logged into my EC2 instance via the command line!&lt;/li&gt;
&lt;/ul&gt;
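
&lt;p&gt;If you're on macOS/Linux (or using OpenSSH on Windows) instead of PuTTY, those steps reduce to a single command with the original .pem file - the public DNS here is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chmod 400 my-key.pem
ssh -i my-key.pem ec2-user@YOUR_INSTANCE_PUBLIC_DNS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;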

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fucv52e33y6ilphintra0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fucv52e33y6ilphintra0.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;🧼 Cleaning Up (IMPORTANT!)&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;After testing, I stopped and terminated the instance to avoid unexpected charges. Always shut down resources you're not using.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;strong&gt;💡 What I Learned&lt;/strong&gt;
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Cloud isn't as scary as it seems.&lt;/li&gt;
&lt;li&gt;AWS gives you a lot of control - which means you must manage it responsibly.&lt;/li&gt;
&lt;li&gt;On Windows, PuTTY and PuTTYgen are your best friends for connecting via SSH.&lt;/li&gt;
&lt;li&gt;Always save your private key file - and never share it.&lt;/li&gt;
&lt;/ol&gt;


&lt;p&gt;&lt;strong&gt;🧭 What's Next?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Hosting a static website using EC2. (Wait, already did that - check it out: &lt;a href="https://medium.com/@deykaustav357/how-i-made-my-first-static-website-using-amazon-s3-as-an-aws-beginner-476a9232f360" rel="noopener noreferrer"&gt;https://medium.com/@deykaustav357/how-i-made-my-first-static-website-using-amazon-s3-as-an-aws-beginner-476a9232f360&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;Attaching an EBS volume for persistent storage.&lt;/li&gt;
&lt;li&gt;Learning about AMIs and how to create custom images.&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;strong&gt;🎯 Final Thoughts&lt;/strong&gt;&lt;br&gt;
Launching an EC2 instance is like a rite of passage in the cloud world. It's the gateway to a huge universe of cloud computing possibilities.&lt;br&gt;
If you've been hesitant, just try it - AWS's free tier gives you 750 hours/month of t2.micro usage. Play, break things, and learn.&lt;/p&gt;

&lt;p&gt;If I can do it, so can you.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>ec2</category>
      <category>beginners</category>
    </item>
    <item>
      <title>I Made My First AWS Lambda Function (And It Did… Absolutely Nothing)</title>
      <dc:creator>Kaustav Dey</dc:creator>
      <pubDate>Sun, 04 May 2025 10:02:57 +0000</pubDate>
      <link>https://dev.to/kaustav_dey_/i-made-my-first-aws-lambda-function-and-it-did-absolutely-nothing-194f</link>
      <guid>https://dev.to/kaustav_dey_/i-made-my-first-aws-lambda-function-and-it-did-absolutely-nothing-194f</guid>
      <description>&lt;p&gt;If you're anything like me, you've probably seen those buzzwords floating around: "serverless," "Lambda functions," "event-driven computing." It sounds futuristic - like programming without writing code, or spinning up logic that just exists in the cloud.&lt;br&gt;
So naturally, I wanted in.&lt;br&gt;
A few days ago, I set out to make my first AWS Lambda function. I wasn't trying to build anything crazy. I just wanted to see if I could wire something together and watch it work. Spoiler: I did. My Lambda didn't do much, but I saw it work - and that made all the difference.&lt;br&gt;
Here's how it went.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;The Plan: Trigger a Lambda From an S3 Upload&lt;/strong&gt;&lt;br&gt;
I came up with the most basic idea possible:&lt;/p&gt;
&lt;h2&gt;
  
  
  &lt;strong&gt;Upload a file to S3 → have that action trigger a Lambda function&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;I wasn't worried about the function doing anything useful. I just wanted to see the gears turn: &lt;strong&gt;&lt;em&gt;trigger → function → log.&lt;/em&gt;&lt;/strong&gt;&lt;br&gt;
You could say it was a science experiment. The goal wasn't utility. It was proof.🤯&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 1: Creating the Lambda Function (AKA Blank Canvas Mode)&lt;/strong&gt;&lt;br&gt;
I opened the AWS Lambda console and clicked "Create Function." I chose the "Author from scratch" option because I like pain, apparently.&lt;br&gt;
Name: demoLambda&lt;br&gt;
Runtime: Python 3.12 (I don't even know Python that well, but it looked friendly)&lt;br&gt;
Permissions: I let AWS create a new role with basic Lambda permissions. Default everything&lt;/p&gt;

&lt;p&gt;Inside the function editor, AWS gave me a basic "Hello from Lambda" handler. I could've edited it, but I decided: Nope, let's leave it untouched. I wasn't here to code. I was here to observe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hrtvg5lc2f2vyrm56wc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hrtvg5lc2f2vyrm56wc.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Setting Up the S3 Bucket (My Cloud Storage Playground)&lt;/strong&gt;&lt;br&gt;
Next, I created a new S3 bucket:&lt;br&gt;
Name: demobucket357 (because AWS hates duplicate names)&lt;br&gt;
Region: Same as the Lambda function, which matters more than I expected&lt;/p&gt;

&lt;p&gt;I left all other settings default. Again, this wasn't about security best practices or storage classes. I just wanted something I could drop files into.&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 3: Connecting the Trigger (Where Things Got Real)&lt;/strong&gt;&lt;br&gt;
Back in my Lambda function settings, I added a trigger:&lt;br&gt;
Trigger type: S3&lt;br&gt;
Bucket: My newly created bucket&lt;br&gt;
Event type: PUT (aka when an object is uploaded)&lt;/p&gt;

&lt;p&gt;I also checked the box that acknowledged AWS might need to grant permissions for this to work. Clicked save. No errors. So far, so good.&lt;/p&gt;
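&lt;p&gt;When the trigger fires, the Lambda function receives an S3 event payload that looks roughly like this (trimmed to the fields you'd usually care about):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Records": [
    {
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "demobucket357" },
        "object": { "key": "test.txt" }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;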

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkdwocdv3r0nd01pleah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftkdwocdv3r0nd01pleah.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;&lt;strong&gt;Step 4: Uploading a File (And Waiting for Magic)&lt;/strong&gt;&lt;br&gt;
With everything wired up, I uploaded a random .txt file into my S3 bucket. No fancy automation. Just a manual upload from my desktop.&lt;br&gt;
Then I opened CloudWatch Logs like I was checking a mailbox for a letter from a pen pal.&lt;br&gt;
Boom. There it was.&lt;br&gt;
A new log stream had appeared. Inside it? A sweet, sweet log that said something like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;START RequestId: ...
Hello from Lambda
END RequestId: ...
REPORT RequestId: ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I didn't write any custom code. I didn't even print anything. But that log was enough to tell me: it worked.&lt;br&gt;
I had created a real-life serverless event. I had connected cloud services and &lt;em&gt;watched one respond to the other&lt;/em&gt;. It was kind of beautiful.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What I Learned (Besides Patience)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Triggers are powerful. Even without doing anything inside Lambda, you can still observe and verify behavior through logs.&lt;/li&gt;
&lt;li&gt;Permissions matter. AWS handles a lot, but I started reading more about IAM roles and policies afterward - just so I don't blindly click checkboxes forever.&lt;/li&gt;
&lt;li&gt;CloudWatch is your friend. Seeing logs appear felt like watching a heartbeat monitor for your cloud app.&lt;/li&gt;
&lt;li&gt;You don't need to do much to learn a lot. The actual function was a placeholder. The real learning came from connecting the dots.&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
This wasn't a groundbreaking project. I didn't save data, transform files, or build an API. But I started. I connected services. I watched logs come in. I got curious about what I could do next.&lt;br&gt;
That's the thing about learning cloud: you don't always have to aim for big, shiny results. Sometimes it's about getting your hands dirty and proving that, yes, you can make the cloud respond to you.&lt;br&gt;
And honestly? That's more addictive than it should be.&lt;/p&gt;




&lt;p&gt;Thanks for reading. If you're tinkering with Lambda or S3, I'd love to hear what you're building - or breaking. Either way, you're learning.&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>aws</category>
      <category>cloud</category>
      <category>beginners</category>
    </item>
    <item>
      <title>How I Built My First VPC with 6 Subnets (And Only Mildly Panicked)</title>
      <dc:creator>Kaustav Dey</dc:creator>
      <pubDate>Sun, 04 May 2025 08:01:33 +0000</pubDate>
      <link>https://dev.to/kaustav_dey_/how-i-built-my-first-vpc-with-6-subnets-and-only-mildly-panicked-4op3</link>
      <guid>https://dev.to/kaustav_dey_/how-i-built-my-first-vpc-with-6-subnets-and-only-mildly-panicked-4op3</guid>
      <description>&lt;p&gt;&lt;strong&gt;Real talk&lt;/strong&gt;: When I first started poking around in AWS, I wasn’t trying to build anything fancy. I just wanted to run a small app and maybe learn a thing or two along the way. But as I clicked through the endless maze of services, acronyms, and dropdown menus, one word kept showing up: VPC.&lt;/p&gt;

&lt;p&gt;At first, I ignored it. It sounded complicated. Scary, even. Then I ran into issues — couldn’t access my instance, secure my database, or figure out where my traffic was going. That’s when I realized: I needed to understand how the networking stuff works.&lt;/p&gt;

&lt;p&gt;So I decided to build my own Virtual Private Cloud — fully from scratch, with six subnets (yep, six), even though I had no clue what I was doing at the start. If you’re new to AWS and feeling overwhelmed, trust me: I’ve been there. Here’s exactly how I figured it out — and what I wish someone had told me before I started.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Why Even Build a Custom VPC?&lt;/strong&gt;&lt;br&gt;
Like a lot of folks, I started with AWS’s default VPC. It was fine — until it wasn’t. I thought of a setup with more structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Public subnets for web-facing services&lt;/li&gt;
&lt;li&gt;Private subnets for databases and internal logic&lt;/li&gt;
&lt;li&gt;Multiple Availability Zones for high availability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I said screw it — time to build a custom VPC with 6 subnets: 3 public, 3 private, spread across 3 Availability Zones (AZs).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: VPC Basics (Without Falling Asleep)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fleydg0k0ooezui308z2g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fleydg0k0ooezui308z2g.png" alt="Image description" width="720" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I created a new VPC with the CIDR block 10.0.0.0/16—plenty of room for subnets. The plan:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Subnet Type&lt;/th&gt;&lt;th&gt;CIDR Block&lt;/th&gt;&lt;th&gt;AZ&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Public A&lt;/td&gt;&lt;td&gt;10.0.1.0/24&lt;/td&gt;&lt;td&gt;us-east-1a&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Public B&lt;/td&gt;&lt;td&gt;10.0.2.0/24&lt;/td&gt;&lt;td&gt;us-east-1b&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Public C&lt;/td&gt;&lt;td&gt;10.0.3.0/24&lt;/td&gt;&lt;td&gt;us-east-1c&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Private A&lt;/td&gt;&lt;td&gt;10.0.101.0/24&lt;/td&gt;&lt;td&gt;us-east-1a&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Private B&lt;/td&gt;&lt;td&gt;10.0.102.0/24&lt;/td&gt;&lt;td&gt;us-east-1b&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Private C&lt;/td&gt;&lt;td&gt;10.0.103.0/24&lt;/td&gt;&lt;td&gt;us-east-1c&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;I wanted each pair of public/private subnets to live in its own AZ, so if one zone went down, things would still hum along elsewhere.&lt;/p&gt;
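&lt;p&gt;The VPC and one of its subnets can be sketched with the AWS CLI like this (the VPC ID in the second command comes from the output of the first; repeat create-subnet for each of the six):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-xxxxxxxx --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;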

&lt;p&gt;&lt;strong&gt;Step 2: Routing Headaches&lt;/strong&gt;&lt;br&gt;
Once I had the subnets created, the real fun began: route tables.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Public Route Table (1):&lt;/strong&gt;&lt;/em&gt; Shared across all public subnets. It included a route to the Internet Gateway (IGW).&lt;br&gt;
&lt;strong&gt;&lt;em&gt;Private Route Table (1):&lt;/em&gt;&lt;/strong&gt; Shared across all private subnets.&lt;/p&gt;
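
&lt;p&gt;The public route table's internet route can be sketched like this (all IDs below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxxxxxx --vpc-id vpc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;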

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jkcq42xuhkt1zwrxbur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9jkcq42xuhkt1zwrxbur.png" alt="Image description" width="720" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lessons From the Trenches&lt;/strong&gt;&lt;br&gt;
This build took me longer than I’d care to admit. But here’s what I took away:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🧠 Networking is not black magic.&lt;/strong&gt;&lt;br&gt;
It feels that way at first, but once you understand how IGWs, NATs, and route tables work together, it clicks (though I didn't use a NAT gateway in this build).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📉 Simplicity is your friend.&lt;/strong&gt;&lt;br&gt;
Start with 1 public and 1 private subnet before scaling to 6. Trust me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📄 Draw it out.&lt;/strong&gt;&lt;br&gt;
Seriously. Use a napkin, whiteboard, or Lucidchart. Visualizing the subnets and routes made everything way easier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
Was this overkill for a side project? Maybe. But building my own 6-subnet VPC taught me more about cloud networking than any course ever did. And now, spinning up secure, redundant infrastructure doesn’t feel intimidating — it feels empowering.&lt;/p&gt;

&lt;p&gt;If you’re on the fence about ditching the default VPC: do it. Break things. Rebuild them. Learn. It’s worth it.&lt;/p&gt;

&lt;p&gt;Got a VPC horror story or proud moment? I’d love to hear it. Let’s trade tales in the comments. 👇&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>vpc</category>
      <category>aws</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>How I Made My First Static Website Using Amazon S3 (As an AWS Beginner)</title>
      <dc:creator>Kaustav Dey</dc:creator>
      <pubDate>Wed, 23 Apr 2025 12:29:28 +0000</pubDate>
      <link>https://dev.to/kaustav_dey_/how-i-made-my-first-static-website-using-amazon-s3-as-an-aws-beginner-1ch5</link>
      <guid>https://dev.to/kaustav_dey_/how-i-made-my-first-static-website-using-amazon-s3-as-an-aws-beginner-1ch5</guid>
      <description>&lt;p&gt;Not too long ago, I decided to start learning AWS. It seemed like one of those things everyone in tech was talking about — but also kind of intimidating. There’s a ton of services, acronyms flying around, and a whole dashboard that feels like a cockpit. So I thought: Why not start small? Let’s just get a simple website live using one AWS service.&lt;/p&gt;

&lt;p&gt;That’s how I ended up making my very first static website using Amazon S3. If you’re also just getting into AWS, this post is for you.&lt;/p&gt;

&lt;p&gt;🧠 Why Amazon S3?&lt;br&gt;
Amazon S3 (Simple Storage Service) is basically cloud storage — you can upload files and access them from anywhere. But what I didn’t know at first is that you can actually host a static website with it. That means if your site is just HTML, CSS, and maybe some JavaScript — no backend stuff — you can host it on S3 without spinning up a server.&lt;/p&gt;

&lt;p&gt;It sounded perfect for a beginner like me. No DevOps. No EC2. No headaches.&lt;/p&gt;

&lt;p&gt;💻 Step 1: Building a Simple Website&lt;br&gt;
I kept it super minimal. No frameworks, just plain HTML and CSS. Here’s what I used:&lt;/p&gt;

&lt;p&gt;index.html&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html lang="en"&amp;gt;
&amp;lt;head&amp;gt;
  &amp;lt;meta charset="UTF-8"&amp;gt;
  &amp;lt;title&amp;gt;My First AWS Site&amp;lt;/title&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
  &amp;lt;h1&amp;gt;Hello, AWS!&amp;lt;/h1&amp;gt;
  &amp;lt;p&amp;gt;This is my first static site hosted on S3.&amp;lt;/p&amp;gt;
  &amp;lt;img src="output.jpg"&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;output.jpg&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwovkm1s5sssh5bv7smh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwovkm1s5sssh5bv7smh.jpg" alt="Image description" width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That’s it. Two files. Just enough to test the process.&lt;/p&gt;

&lt;p&gt;☁️ Step 2: Creating the S3 Bucket&lt;br&gt;
Here’s where the AWS part begins:&lt;/p&gt;

&lt;p&gt;Logged into the AWS Management Console.&lt;br&gt;
Searched for S3 and clicked Create bucket.&lt;br&gt;
Gave it a unique name (S3 bucket names are globally unique).&lt;br&gt;
Unchecked “Block all public access” (important! Otherwise, no one can view your site).&lt;br&gt;
Then I added a bucket policy that allows public read access to the objects stored in it.&lt;br&gt;
Created the bucket.&lt;br&gt;
The policy I used is below. Just replace ‘Bucket-Name’ with your bucket’s unique name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
     {
         "Sid": "PublicReadGetObject",
         "Effect": "Allow",
         "Principal": "*",
         "Action": [
             "s3:GetObject"
         ],
         "Resource": [
                "arn:aws:s3:::Bucket-Name/*"
         ]
     }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, I had cloud storage ready to go.&lt;/p&gt;

&lt;p&gt;🌐 Step 3: Enabling Static Website Hosting&lt;br&gt;
Inside the S3 bucket:&lt;/p&gt;

&lt;p&gt;I clicked on the Properties tab.&lt;br&gt;
Scrolled down to Static website hosting.&lt;br&gt;
Enabled it and set index.html as the index document.&lt;br&gt;
AWS gave me a URL. This would be the live address of my site.&lt;br&gt;
But first, I had to upload the files and make sure they were public.&lt;/p&gt;
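
&lt;p&gt;The same hosting setup can also be done from the AWS CLI (the bucket name is a placeholder):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws s3 website s3://your-bucket-name/ --index-document index.html
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;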

&lt;p&gt;📤 Step 4: Uploading and Making Files Public&lt;br&gt;
Went to the Objects tab.&lt;br&gt;
Uploaded both index.html and output.jpg.&lt;br&gt;
After uploading, I selected each file, clicked Actions &amp;gt; Make public.&lt;br&gt;
(Note: You can set a bucket policy to make everything public automatically, but if you want you can stick with manual for now — it felt safer while learning.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdfpqk4r0jau0p90mep2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdfpqk4r0jau0p90mep2.png" alt="Image description" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔗 Step 5: Visiting My Site&lt;br&gt;
With everything uploaded and public, I went back to the static hosting section in Properties and copied the Endpoint URL.&lt;/p&gt;

&lt;p&gt;I pasted it into my browser, and boom — it worked. My little website was online. No servers, no deployment tools. Just files in a bucket.&lt;/p&gt;

&lt;p&gt;🔍 What I Learned&lt;br&gt;
AWS can feel like a lot, but starting with one service (like S3) makes it manageable.&lt;br&gt;
Hosting a static site doesn’t need to be complicated.&lt;br&gt;
Public access settings are a key part of getting S3 hosting to work.&lt;br&gt;
The feeling of getting a website live, even a basic one, is honestly really satisfying.&lt;/p&gt;

&lt;p&gt;✅ Final Thoughts&lt;br&gt;
This was my first actual hands-on experience with AWS, and it went smoother than I expected. Sure, I had to Google a few things (especially around permissions), but I came away feeling a little more confident navigating the AWS ecosystem.&lt;/p&gt;

&lt;p&gt;If you’re new to AWS and wondering where to begin — host a static website on S3. It’s simple, useful, and gives you a nice win early on.&lt;/p&gt;

&lt;p&gt;Next up for me? Maybe setting up a custom domain, adding HTTPS with CloudFront, or playing with Lambda. But for now, I’m just happy that I took the first step.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>s3</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Diving into the World of AWS: My Excitement and Journey Ahead</title>
      <dc:creator>Kaustav Dey</dc:creator>
      <pubDate>Wed, 23 Apr 2025 11:32:17 +0000</pubDate>
      <link>https://dev.to/kaustav_dey_/diving-into-the-world-of-aws-my-excitement-and-journey-ahead-2oeh</link>
      <guid>https://dev.to/kaustav_dey_/diving-into-the-world-of-aws-my-excitement-and-journey-ahead-2oeh</guid>
      <description>&lt;p&gt;Given the rate and speed with which technology is evolving it is important to keep it in perspective. The area which has particularly interested me is the one of cloud computing, more specifically, the one known as Amazon Web Services. There’s so much about this vast world of AWS that I am thrilled to explore. This is how I feel and this is what I hope to accomplish.&lt;/p&gt;

&lt;h4&gt;The AWS Ecosystem&lt;/h4&gt;

&lt;p&gt;AWS is a dominant force with a wide range of services that address almost every facet of cloud computing. The range and capabilities of the offerings, which range from databases and storage to machine learning and analytics, are remarkable. I can see how becoming proficient with AWS could greatly improve my abilities and lead to exciting career opportunities in the tech sector.&lt;/p&gt;

&lt;p&gt;The practical use of AWS services is what most excites me. For instance, knowing how to use Amazon S3 for effective storage solutions or AWS Lambda to deploy scalable apps could revolutionise my approach to projects. Working with strong tools like AWS CloudFormation and AWS EC2 is exciting because these services offer flexibility and automation that can improve any tech project.&lt;/p&gt;

&lt;h4&gt;Learning and Growth&lt;/h4&gt;

&lt;p&gt;Starting this AWS adventure also entails starting a journey of ongoing education. The abundance of resources, ranging from AWS’s documentation to online tutorials and courses, offers an excellent place to start. I’ve already enrolled in a couple of introductory classes, and I’m eager to get my hands dirty in labs that mimic actual cloud problems.&lt;/p&gt;

&lt;p&gt;I’m especially excited to get my hands dirty with AWS certification courses. Obtaining these credentials would increase my confidence in addition to validating my knowledge. Since it appears to cover a wide range of fundamental knowledge that I can build upon, the AWS Certified Solutions Architect — Associate is the first test I have my sights set on.&lt;/p&gt;

&lt;h4&gt;The Community&lt;/h4&gt;

&lt;p&gt;The lively community that surrounds AWS is one of the things that excites me about learning more about it. Professionals and other students can provide invaluable support. I’m excited to participate in forums, go to meetups, and have conversations on sites like Stack Overflow and Reddit. There is so much to learn and share, and as I become more knowledgeable, I can’t wait to add my perspectives.&lt;/p&gt;

&lt;h4&gt;Practical Application&lt;/h4&gt;

&lt;p&gt;As I learn more, I intend to work on personal projects that put my AWS knowledge into practice. I’m excited to see how these services can make ideas a reality, whether it’s using Amazon RDS to manage a database for a project or EC2 to host a personal website. One of the primary reasons I’m so committed to this journey is the opportunity to innovate and create with AWS.&lt;/p&gt;

&lt;h4&gt;Conclusion&lt;/h4&gt;

&lt;p&gt;In conclusion, I am excited to explore the world of AWS because of the numerous opportunities it offers for education, development, and real-world application. Since the tech sector is always changing, I think becoming proficient in AWS will improve my skill set and put me in a position to take advantage of new opportunities. I’m prepared to welcome the difficulties and rejoice in the upcoming achievements as I set out on this adventure. Cheers to the beginning of an exciting journey!&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>cloudcomputing</category>
      <category>aws</category>
      <category>beginners</category>
    </item>
  </channel>
</rss>
