What is S3?
Amazon Simple Storage Service (S3) is an object storage service that provides a secure, durable, and scalable way to store and access data in the cloud.
It’s perfect for storing text files, images, videos, logs, backups, and more!
Prerequisites
- An AWS account with permission to create EC2, IAM, S3, and AMIs/snapshots.
- A key pair for SSH (or ability to create one while launching an instance).
- Basic shell on your local machine (macOS / Linux / WSL) or use the EC2 instance shell.
- (Optional) AWS CLI installed locally for doing S3 commands from your laptop.
1.0 Create an IAM role for EC2 (recommended)
- Console: IAM > Roles > Create role
- Select AWS service → EC2, click Next.
- Attach policy: either AmazonS3ReadOnlyAccess (quick) or a minimal custom policy (see snippet below) if you want read/write limited to one bucket.
- Name it (e.g. s3-access-role-day43) and finish.
Minimal custom policy (read/write to one bucket)
Replace your-bucket-name in the Resource ARNs below with your bucket name.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::your-bucket-name"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::your-bucket-name/*"]
    }
  ]
}
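The console wizard handles the role's trust relationship for you. If you prefer the CLI, the equivalent steps are sketched below; the role name matches the one used in this guide, but the commands need IAM permissions and are therefore shown commented out (the trust-policy JSON itself is standard for EC2 roles). Note that the console also creates an instance profile with the same name automatically, while the CLI requires explicit create-instance-profile / add-role-to-instance-profile calls.

```shell
# write the EC2 trust policy the role needs
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# sketch of the CLI equivalent (requires IAM permissions; uncomment to run):
#   aws iam create-role --role-name s3-access-role-day43 \
#       --assume-role-policy-document file://trust.json
#   aws iam attach-role-policy --role-name s3-access-role-day43 \
#       --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

# sanity-check the policy document locally
python3 -m json.tool trust.json > /dev/null && echo "trust policy is valid JSON"
```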
1.1 Launch an EC2 instance (Console)
- Console: EC2 > Instances > Launch instances.
- Choose an AMI (e.g., Amazon Linux 2).
- Choose an instance type (e.g., t2.micro for the free tier).
- Configure Instance: pick the same Region you’ll create the S3 bucket in. Under IAM role, select the role you created (s3-access-role-day43).
- Configure Storage / Tags as desired.
- Security group: allow SSH (TCP 22) from your IP.
- Launch and choose/create a key pair (download mykey.pem).
1.2 Connect to EC2 via SSH
On your laptop:
chmod 400 mykey.pem
ssh -i mykey.pem ec2-user@<EC2-PUBLIC-IP>
# or ubuntu@... for Ubuntu AMIs
1.3 Create an S3 bucket (Console or CLI)
Console: S3 → Create bucket
- Bucket name must be globally unique, lowercase, no spaces (e.g. debs-day43-demo-2025).
- Choose the same Region as your EC2 instance.
- Leave Block Public Access ON unless you intentionally want it public.
- Create.
CLI (from your laptop):
aws s3 mb s3://your-unique-bucket-name --region us-east-1
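If the name breaks the rules, the mb command fails with an InvalidBucketName error. A rough client-side check of the naming rules above can be done locally before calling AWS (a sketch; the full AWS rules also forbid IP-style names and a few reserved prefixes):

```shell
# rough check: 3-63 chars, lowercase letters/digits/dots/hyphens,
# alphanumeric at both ends (not the complete AWS rule set)
BUCKET="debs-day43-demo-2025"
if echo "$BUCKET" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'; then
  echo "looks valid: $BUCKET"
else
  echo "invalid bucket name: $BUCKET"
fi
```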
1.4 Upload a file to S3 (Console or CLI)
Create a test file locally:
echo "hello from day43 $(date)" > testfile.txt
Upload from laptop (CLI):
aws s3 cp testfile.txt s3://your-unique-bucket-name/
aws s3 ls s3://your-unique-bucket-name/
Or upload in Console: open the bucket → Upload → add testfile.txt
→ Upload.
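Since step 2.3 compares checksums, it helps to record the file's SHA-256 at creation time, before uploading (a local sketch using coreutils; no AWS call involved):

```shell
# create the test file and save its SHA-256 alongside it,
# so you can compare after downloading on another instance
echo "hello from day43 $(date)" > testfile.txt
sha256sum testfile.txt | tee testfile.txt.sha256
```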
1.5 Access the file from the EC2 instance (using the instance role)
On the EC2 instance (no AWS credentials required if you attached the role):
- Check AWS CLI presence:
aws --version
If not installed on Amazon Linux 2:
sudo yum install -y aws-cli # or use the official awscli v2 installer if you prefer
- List the bucket:
aws s3 ls s3://your-unique-bucket-name/
- Download the file:
aws s3 cp s3://your-unique-bucket-name/testfile.txt ./
cat testfile.txt
If you didn’t use an instance role: run aws configure and supply an Access Key ID / Secret Access Key (less secure; prefer the role).
2.0 Create an AMI (creates snapshots in background)
Console:
- EC2 → Instances → select your instance → Actions → Image and templates → Create image.
- Give it a name and decide whether to enable No reboot (allowing the reboot gives a more consistent filesystem; No reboot avoids downtime). Submit.
CLI (replace instance id):
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "day43-ami" --no-reboot
# returns an AMI id (ami-xxxx)
Note: creating an AMI will create snapshots of attached EBS volumes.
2.1 Launch a new EC2 instance from the AMI
Console: EC2 → AMIs → select the AMI you created → Launch → configure security groups and key pair, and choose the same s3-access-role-day43 role (or attach it after launch).
CLI (example — many options required; supply your subnet, security group, and key name, and attach the instance profile so the new instance can reach S3):
aws ec2 run-instances --image-id ami-0abcd1234efgh5678 --count 1 --instance-type t2.micro --key-name my-key --subnet-id subnet-abc123 --security-group-ids sg-0123456789 --iam-instance-profile Name=s3-access-role-day43
2.2 Connect to the new instance and download file from S3
SSH into the new instance and:
# ensure aws cli available
aws s3 cp s3://your-unique-bucket-name/testfile.txt ./
sha256sum testfile.txt
cat testfile.txt
2.3 Verify contents are identical
On the original instance (or your laptop) compute checksum:
sha256sum testfile.txt
# example output:
# d2d2... testfile.txt
On the new instance run the same sha256sum command; the hash should match.
Or, if you can copy both files to the same host:
diff original-testfile.txt new-testfile.txt || echo "files differ"
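The match/mismatch logic can be exercised entirely locally: two copies of the same content must always produce identical SHA-256 hashes (a sketch with placeholder file names):

```shell
# simulate the verification: copy a file and confirm the
# SHA-256 of the copy matches the original
echo "hello from day43" > original-testfile.txt
cp original-testfile.txt new-testfile.txt
A=$(sha256sum original-testfile.txt | awk '{print $1}')
B=$(sha256sum new-testfile.txt | awk '{print $1}')
[ "$A" = "$B" ] && echo "files match" || echo "files differ"
```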
Tip: the ETag from aws s3api head-object can sometimes be used as an MD5 for single-part uploads, but multipart uploads produce composite ETags — so checksums are the robust way to compare.
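You can see the two digests side by side locally; for a small single-part upload the ETag typically equals the MD5 shown here, while the SHA-256 is what this guide uses for verification (a local sketch; no AWS call involved):

```shell
# compute MD5 (what a single-part ETag usually is) and SHA-256
# (the robust comparison) for the same content
echo "hello from day43" > obj.txt
md5sum obj.txt      # 32 hex chars
sha256sum obj.txt   # 64 hex chars
```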
Cleanup (avoid surprise charges)
- Delete S3 object and bucket:
aws s3 rm s3://your-unique-bucket-name/testfile.txt
aws s3 rb s3://your-unique-bucket-name --force
- Terminate EC2 instances (Console → Instances → Actions → Terminate).
- Deregister the AMI and delete its associated snapshots (EC2 → AMIs → Actions → Deregister; then delete the snapshots under EC2 → Snapshots).
- Delete IAM role if not needed.
Quick command summary
# create test file
echo "hello from day43 $(date)" > testfile.txt
# upload from laptop
aws s3 mb s3://your-unique-bucket-name --region us-east-1
aws s3 cp testfile.txt s3://your-unique-bucket-name/
# on EC2 (with instance role)
aws s3 ls s3://your-unique-bucket-name/
aws s3 cp s3://your-unique-bucket-name/testfile.txt ./
sha256sum testfile.txt
# create AMI (CLI)
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "day43-ami" --no-reboot
# cleanup
aws s3 rm s3://your-unique-bucket-name/testfile.txt
aws s3 rb s3://your-unique-bucket-name --force