When I started learning cloud engineering, I initially thought cloud deployment meant launching servers.
In my previous article, I deployed a web application on a Linux server using Amazon EC2 and Nginx.
That was a huge milestone.
But today I learned something even more powerful.
In the cloud, sometimes you don't need a server at all.
Instead, you can host applications using object storage.
Today's focus was learning Amazon S3, understanding Identity and Access Management, and interacting with AWS services using the command line.
And it completely changed how I see infrastructure.
📌 The Objective
Today's goal was simple but important:
- Understand how cloud storage works
- Learn user and permission management
- Deploy a static website without a server
- Interact with AWS using the CLI
This meant working with three important AWS components:
- Amazon S3
- AWS Identity and Access Management
- AWS Command Line Interface
📦 Step 1: Understanding Amazon S3
First, I learned about Amazon S3 (Simple Storage Service).
It is an object storage service used to store files in the cloud.
Instead of folders and files like a normal filesystem, S3 stores data as:
- Buckets – containers for storage
- Objects – files inside buckets
So the structure looks like:
Bucket
├── index.html
├── error.html
├── images/
└── css/
Buckets are globally unique and act like storage containers for applications, backups, media files, and static websites.
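One detail worth internalizing here: S3 has no real directories. A key like `images/logo.png` is just a flat string, and "folders" are an illusion created by filtering keys on a prefix. A toy sketch of that idea (this is only a mental model, not the S3 API):

```python
# Toy model: a bucket is nothing more than a flat mapping of key -> object.
bucket = {
    "index.html": b"<h1>Home</h1>",
    "error.html": b"<h1>404</h1>",
    "images/logo.png": b"...png bytes...",
    "css/style.css": b"body { margin: 0; }",
}

def list_objects(prefix: str):
    """Mimic listing with a Prefix: plain string matching on keys."""
    return sorted(k for k in bucket if k.startswith(prefix))

# The "images/ folder" is really just every key starting with "images/"
print(list_objects("images/"))
```

This is why the console can show a folder view even though, underneath, every object lives at the top level of the bucket.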
🪣 Step 2: Creating and Managing an S3 Bucket
After understanding the concept, I created my own S3 bucket.
Steps I followed:
- Opened the AWS Console
- Navigated to Amazon S3
- Created a new bucket
- Selected the Mumbai region
- Configured permissions
- Enabled Static Website Hosting
Inside the bucket, I uploaded:
- index.html
- error.html
- project assets
Once the files were uploaded, I enabled static website hosting and AWS generated a public website endpoint.
My website was now running directly from S3 storage.
No server required.
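The "Configured permissions" step matters here: a public static site needs a bucket policy that allows anonymous reads. A minimal sketch of the kind of policy AWS documents for this, using the bucket name from my CLI commands below:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::cloudcanvas-editor-h-bucket/*"
    }
  ]
}
```

The `/*` on the Resource is important: `s3:GetObject` applies to objects inside the bucket, not to the bucket itself.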
🌐 Step 3: Hosting a Static Website using S3
Unlike my EC2 deployment, this approach had a big difference:
There was no Linux server, no Nginx, and no SSH.
Instead:
- The files live inside S3
- AWS automatically serves them over HTTP
- The bucket behaves like a web server
This showed me a very important cloud concept:
Not every application requires a virtual machine.
Sometimes storage + HTTP delivery is enough.
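The website endpoint AWS generated for me follows a predictable pattern. A small sketch of that pattern (note the exact format is region-dependent: newer regions such as ap-south-1 use a dot before the region name, while some older regions use a `s3-website-<region>` dash form instead):

```python
def s3_website_endpoint(bucket: str, region: str) -> str:
    """Build the S3 static-website endpoint URL for a bucket.

    Assumes the newer dot-separated format; older regions
    (e.g. us-east-1, eu-west-1) use 's3-website-<region>' instead.
    """
    return f"http://{bucket}.s3-website.{region}.amazonaws.com"

# Mumbai (ap-south-1), the region used in this post
print(s3_website_endpoint("cloudcanvas-editor-h-bucket", "ap-south-1"))
```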
🔐 Step 4: Learning IAM – Identity & Access Management
Next, I explored AWS Identity and Access Management.
IAM controls who can access AWS resources and what actions they can perform.
Instead of sharing the root account, best practice is to create IAM users with limited permissions.
Key IAM components I learned:
Users
Individual identities created for developers or services.
Groups
A collection of users sharing the same permissions.
Example:
Developers Group
├── avinash-dev
└── sagar-dev
Roles
Temporary access permissions used by services like EC2.
Policies
JSON documents that define permissions.
Example policy actions:
s3:PutObject
s3:GetObject
s3:ListBucket
Identity Providers
Used for federated access like Google login or corporate SSO.
This structure makes AWS secure and scalable for teams.
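The actions listed above can be combined into a full policy document. A hypothetical example, reusing the bucket name from this post (note how object-level actions target the `/*` objects ARN, while `s3:ListBucket` targets the bucket ARN itself):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::cloudcanvas-editor-h-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::cloudcanvas-editor-h-bucket"
    }
  ]
}
```

Attached to a user or group, a policy like this grants upload, download, and listing rights on that one bucket and nothing else.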
💻 Step 5: Installing and Configuring AWS CLI
Today I also started working with the AWS Command Line Interface.
Instead of clicking through the AWS Console, the CLI lets you interact with AWS services directly from the terminal.
First, I installed the AWS CLI and configured my credentials:
aws configure
It required:
- Access Key
- Secret Key
- Region
- Output format
After configuration, my terminal could interact with AWS services.
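Under the hood, `aws configure` simply writes those four values into two plain-text files in your home directory (keys shown as placeholders, never commit real ones):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

# ~/.aws/config
[default]
region = ap-south-1
output = json
```

`ap-south-1` is the region code for Mumbai, the region I selected earlier in the console.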
📂 Step 6: Managing S3 Using AWS CLI
Using the CLI, I practiced several commands.
List buckets
aws s3 ls
Upload a file
aws s3 cp index.html s3://cloudcanvas-editor-h-bucket/
Upload error page
aws s3 cp error.html s3://cloudcanvas-editor-h-bucket/
These commands allowed me to manage cloud storage directly from my terminal.
It felt similar to using Linux commands, but now I was interacting with cloud infrastructure.
🧠 Key Technical Takeaways
Today's learning helped me understand several important cloud concepts:
- How Amazon S3 stores objects instead of traditional filesystems
- Difference between server-based hosting and serverless hosting
- Why IAM is critical for security
- How policies control resource permissions
- How the AWS Command Line Interface enables automation
- How to upload and manage files in S3 from the terminal
Most importantly, I realized something powerful:
Cloud engineering is not only about servers.
It is about choosing the right service for the right problem.
🎯 Reflection
Just two days ago, my applications were running on localhost.
In my previous article, I deployed one on Amazon EC2 using Nginx.
Today, I hosted one without any server using Amazon S3.
That contrast helped me understand something important about cloud architecture:
Infrastructure choices matter.
Sometimes you scale with servers.
Sometimes you scale without them.
And understanding both approaches is what makes cloud engineering powerful.
This is Day 4 of my cloud journey.
More learning ahead 🚀