Sajjad Rahman
Introduction to Amazon S3 – The Backbone of Cloud Storage

Every day, organizations generate massive amounts of data — from user activity logs, application files, and backups to videos and images. Managing this data securely and cost-effectively is a big challenge.

This is where Amazon Simple Storage Service (Amazon S3) comes in. It’s one of the most popular cloud storage services in the world and a core building block for anyone preparing for AWS Data Engineering certifications.

In this blog, we’ll break down what S3 is, why it’s important, and how it works with simple examples.

🚩 Why Do We Need Amazon S3?

Before cloud storage, companies stored data on local hard drives or on-premises servers, which came with serious drawbacks:

  • Limited capacity (1TB–2TB hard disks at most for personal PCs).
  • High costs for scaling.
  • Hard to share or access remotely.
  • Data recovery was slow in case of failure.

To visualize this, think about your personal Google Drive:

  • You get 15 GB for free.
  • If you need more, you buy fixed storage plans (100GB, 1TB, etc.).
  • You can only access it if you have an internet connection.

Amazon S3 works on the same principle — but without storage limits. You can store any amount of data (structured or unstructured) and pay only for what you actually use.

That’s why S3 is the default choice for businesses running applications in the cloud.

📌 Common Use Cases of Amazon S3

Amazon S3 is widely used across industries. Some common scenarios include:

  • 🔹 Backup & Storage
  • 🔹 Disaster Recovery
  • 🔹 Archiving old data
  • 🔹 Hybrid Cloud Storage
  • 🔹 Hosting applications and static websites
  • 🔹 Media hosting (images, audio, video)
  • 🔹 Building Data Lakes for analytics
  • 🔹 Software delivery (updates, patches)

If you’re aiming for AWS Data Engineering certification, remember: S3 often serves as the “data lake” where raw and processed data lives.

🪣 Amazon S3 Buckets

In Amazon S3, your files are stored inside buckets (think of them like containers or folders).

Key rules about buckets:

  • Bucket names must be globally unique across all AWS accounts.
  • Buckets exist at the region level (you choose where your data lives).
  • Naming rules:

    • No uppercase letters, no underscores.
    • Length: 3–63 characters.
    • Must start and end with a lowercase letter or number.
    • Must not start with xn-- or end with -s3alias.

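The rules above are easy to check before you ever call AWS. Here's a small sketch in Python (`is_valid_bucket_name` is a hypothetical helper for illustration, not part of the AWS SDK, and it covers only the rules listed above):

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check a proposed S3 bucket name against the basic naming rules."""
    # Length must be 3-63 characters.
    if not (3 <= len(name) <= 63):
        return False
    # Only lowercase letters, digits, hyphens, and dots;
    # must start and end with a lowercase letter or digit.
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    # Reserved prefixes and suffixes.
    if name.startswith("xn--") or name.endswith("-s3alias"):
        return False
    return True

print(is_valid_bucket_name("my-first-bucket-123"))  # True
print(is_valid_bucket_name("My_Bucket"))            # False (uppercase, underscore)
```

Validating the name locally saves a round trip: an invalid name would be rejected by the `CreateBucket` call anyway.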
👉 Example:
s3://my-first-bucket-123

📂 Amazon S3 Objects

Objects are the actual files you upload into a bucket.

Each object has a key, which is the full path to the file.

Example:

s3://my-bucket/file1.txt
s3://my-bucket/folder1/folder2/data.csv

Here:

  • Prefix: folder1/folder2/
  • Object name: data.csv
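Since keys are just strings (S3 has no real directories), splitting one into prefix and object name is simple string work. A quick sketch in Python (`split_key` is my own illustrative helper):

```python
def split_key(key: str) -> tuple[str, str]:
    """Split an S3 object key into (prefix, object name)."""
    # The "folders" are only a naming convention: the prefix is
    # everything up to and including the last "/".
    prefix, _, name = key.rpartition("/")
    return (prefix + "/" if prefix else "", name)

print(split_key("folder1/folder2/data.csv"))  # ('folder1/folder2/', 'data.csv')
print(split_key("file1.txt"))                 # ('', 'file1.txt')
```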

Other key points:

  • Maximum object size is 5 TB, but a single PUT upload is limited to 5 GB — larger files must use multipart upload (AWS recommends it for anything over 100 MB).
  • Objects can have metadata (extra information like file type or tags).
  • You can add tags (up to 10 per object) for organization, billing, or lifecycle policies.
  • If versioning is enabled, each update to the file gets a unique Version ID.
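To see why multipart upload matters, here's a back-of-the-envelope sketch in Python (my own illustration; the limits in the comments — parts of 5 MiB to 5 GiB, at most 10,000 parts per upload — come from the S3 documentation):

```python
import math

MiB = 1024 ** 2
GiB = 1024 ** 3

def part_count(object_size: int, part_size: int = 100 * MiB) -> int:
    """Number of parts needed to upload an object in multipart chunks."""
    # S3 allows at most 10,000 parts per multipart upload, each
    # 5 MiB-5 GiB (the final part may be smaller than 5 MiB).
    count = math.ceil(object_size / part_size)
    if count > 10_000:
        raise ValueError("part_size too small: would need more than 10,000 parts")
    return count

print(part_count(5 * GiB))  # 52 parts of 100 MiB each
```

With the default 100 MiB part size, even the 5 TB maximum object fits comfortably under the 10,000-part ceiling, and failed parts can be retried individually instead of restarting the whole upload.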

⚡ Quick Demo (AWS CLI)

Let’s create a bucket and upload a file using the AWS CLI:

# Create a bucket
aws s3 mb s3://my-first-bucket-123 --region us-east-1

# Upload a file
aws s3 cp hello.txt s3://my-first-bucket-123/

# List bucket contents
aws s3 ls s3://my-first-bucket-123/
