Avinash wagh
From EC2 Servers to Serverless Hosting: Learning Amazon S3, IAM & AWS CLI (Day 4)

When I started learning cloud engineering, I initially thought cloud deployment meant launching servers.

In my previous article, I deployed a web application on a Linux server using Amazon EC2 and Nginx.

That was a huge milestone.

But today I learned something even more powerful.

In the cloud, sometimes you don’t need a server at all.

Instead, you can host applications using object storage.

Today’s focus was learning Amazon S3, understanding Identity and Access Management, and interacting with AWS services using the command line.

And it completely changed how I see infrastructure.

πŸš€ The Objective

Today’s goal was simple but important:

  • Understand how cloud storage works
  • Learn user and permission management
  • Deploy a static website without a server
  • Interact with AWS using the CLI

This meant working with three important AWS components:

- Amazon S3
- AWS Identity and Access Management
- AWS Command Line Interface

πŸ“¦ Step 1: Understanding Amazon S3

First, I learned about Amazon S3 (Simple Storage Service).

It is an object storage service used to store files in the cloud.

Unlike the folders and files of a traditional filesystem, S3 stores data as:

- Buckets β†’ Containers for storage
- Objects β†’ Files inside buckets

So the structure looks like:

Bucket
β”œβ”€β”€ index.html
β”œβ”€β”€ error.html
β”œβ”€β”€ images/
└── css/

Bucket names are globally unique, and buckets act like storage containers for applications, backups, media files, and static websites. (The "folders" shown above are really just shared key prefixes; S3's namespace is actually flat.)

πŸͺ£ Step 2: Creating and Managing an S3 Bucket

After understanding the concept, I created my own S3 bucket.

Steps I followed:

  1. Opened the AWS Console
  2. Navigated to Amazon S3
  3. Created a new bucket
  4. Selected the Mumbai region (ap-south-1)
  5. Configured permissions
  6. Enabled Static Website Hosting

Inside the bucket, I uploaded:

  • index.html
  • error.html
  • project assets

With the files in place and static website hosting enabled, AWS generated a public website endpoint.

My website was now running directly from S3 storage.

No server required.
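The same console steps can also be done from the terminal. Here is a sketch (the bucket name is made up, and the actual `aws` calls are commented out because they need valid credentials; the endpoint pattern varies slightly between older and newer regions):

```shell
#!/bin/sh
# Hypothetical bucket name -- S3 bucket names must be globally unique.
BUCKET=my-day4-demo-bucket
REGION=ap-south-1   # Mumbai

# Create the bucket (uncomment once credentials are configured):
# aws s3 mb "s3://$BUCKET" --region "$REGION"

# Turn on static website hosting with index and error documents:
# aws s3 website "s3://$BUCKET" --index-document index.html --error-document error.html

# S3 then serves the site from a region-specific endpoint of this form:
ENDPOINT="http://$BUCKET.s3-website.$REGION.amazonaws.com"
echo "$ENDPOINT"
```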

🌐 Step 3: Hosting a Static Website using S3

Unlike my EC2 deployment, this approach had a big difference:

There was no Linux server, no Nginx, and no SSH.

Instead:

  • The files live inside S3
  • AWS automatically serves them over HTTP (for HTTPS you would put CloudFront in front)
  • The bucket behaves like a web server

This showed me a very important cloud concept:

Not every application requires a virtual machine.

Sometimes storage + HTTP delivery is enough.
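For the website endpoint to actually serve the files, the objects must be publicly readable. That is typically done with a bucket policy along these lines (a minimal sketch; `example-bucket` is a placeholder, and the bucket's Block Public Access settings must also allow it):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```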

πŸ” Step 4: Learning IAM – Identity & Access Management

Next, I explored AWS Identity and Access Management.

IAM controls who can access AWS resources and what actions they can perform.

Instead of sharing the root account, best practice is to create IAM users with limited permissions.

Key IAM components I learned:

Users

Individual identities created for developers or services.

Groups

A collection of users sharing the same permissions.

Example:

Developers Group
β”œβ”€β”€ avinash-dev
β”œβ”€β”€ sagar-dev

Roles

Temporary permissions that AWS services (such as an EC2 instance) or users assume, instead of using long-lived access keys.

Policies

JSON documents that define permissions.

Example policy actions:

s3:PutObject
s3:GetObject
s3:ListBucket

Identity Providers

Used for federated access like Google login or corporate SSO.

This structure makes AWS secure and scalable for teams.
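The actions listed above would sit inside a policy document like this (a sketch with a placeholder bucket name; note that `s3:ListBucket` applies to the bucket ARN itself, while `PutObject`/`GetObject` apply to object ARNs, which is why both resources appear):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```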

πŸ’» Step 5: Installing and Configuring AWS CLI

Today I also started working with the AWS Command Line Interface.

Instead of clicking through the AWS console, the CLI lets you interact with AWS services directly from the terminal.

First I installed the AWS CLI and configured credentials:

aws configure

It prompted for:

  • AWS Access Key ID
  • AWS Secret Access Key
  • Default region name
  • Default output format

After configuration, my terminal could interact with AWS services.
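`aws configure` saves those values into two plain-text files in your home directory, which look roughly like this (placeholder values, default profile):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

# ~/.aws/config
[default]
region = ap-south-1
output = json
```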

πŸ“‚ Step 6: Managing S3 Using AWS CLI

Using the CLI, I practiced several commands.

List buckets

aws s3 ls

Upload a file

aws s3 cp index.html s3://cloudcanvas-editor-h-bucket/

Upload error page

aws s3 cp error.html s3://cloudcanvas-editor-h-bucket/

These commands allowed me to manage cloud storage directly from my terminal.

It felt similar to using Linux commands, but now I was interacting with cloud infrastructure.
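Copying files one at a time works, but a whole site can also be pushed in a single command with `aws s3 sync`. A sketch, using the bucket name from above (the `aws` call itself is commented out since it needs credentials):

```shell
#!/bin/sh
BUCKET=cloudcanvas-editor-h-bucket

# Sync the current directory to the bucket, deleting remote objects
# that no longer exist locally (uncomment once credentials are configured):
# aws s3 sync . "s3://$BUCKET/" --delete

# Each local file becomes an object key under the bucket:
for f in index.html error.html; do
  echo "s3://$BUCKET/$f"
done
```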

🧠 Key Technical Takeaways

Today’s learning helped me understand several important cloud concepts:

  • How Amazon S3 stores data as objects rather than in a traditional filesystem
  • Difference between server-based hosting and serverless hosting
  • Why IAM is critical for security
  • How policies control resource permissions
  • How the AWS Command Line Interface enables automation
  • How to upload and manage files in S3 from the terminal

Most importantly, I realized something powerful:

Cloud engineering is not only about servers.

It is about choosing the right service for the right problem.

🎯 Reflection

Just two days ago, my applications were running on localhost.

In my previous article, I deployed one on Amazon EC2 using Nginx.

Today, I hosted one without any server using Amazon S3.

That contrast helped me understand something important about cloud architecture:

Infrastructure choices matter.

Sometimes you scale with servers.

Sometimes you scale without them.

And understanding both approaches is what makes cloud engineering powerful.

This is Day 4 of my cloud journey.

More learning ahead πŸš€
