Amazon S3 Overview

Amazon Simple Storage Service (S3) is one of AWS’s core building blocks — an infinitely scalable object storage service used by websites, applications, and other AWS services.


Main Use Cases

S3 is extremely versatile. Common uses include:

  • Backup and storage – store files, disks, and data safely.
  • Disaster recovery – replicate data across regions.
  • Archival storage – use cheaper tiers like S3 Glacier for long-term storage.
  • Hybrid cloud storage – extend on-premises storage to the cloud.
  • Hosting applications/media – store images, videos, and web assets.
  • Data lakes and analytics – store raw data for big data processing.
  • Static website hosting – serve HTML/CSS/JS files directly.
  • Software distribution – deliver updates or packages globally.

Real-World Examples

  • NASDAQ stores 7 years of data in S3 Glacier.
  • Cisco runs analytics on S3-stored data.

Buckets

Buckets are top-level containers for storing objects (files).

  • Each bucket has a globally unique name (across all AWS accounts and regions).
  • Although S3 looks global, buckets exist within a single AWS region.

Bucket Naming Rules

  • No uppercase or underscores.
  • Length: 3–63 characters.
  • Must start with a lowercase letter or number.
  • Cannot resemble an IP address.
  • Stick to lowercase letters, numbers, and hyphens.

Objects and Keys

Objects are files stored in S3.
Each has a key — the full path (like a file’s name + folders).

Example:

my-folder1/another-folder/myfile.txt
  • Prefix: my-folder1/another-folder/
  • Object name: myfile.txt

Note: S3 has no real directories — “folders” in the console are just key prefixes containing slashes (/).


Object Details

Each S3 object can include:

  • Value (data) – up to 5 TB per object.
  • Metadata – key/value pairs set by system or user.
  • Tags – up to 10 tags for organization, billing, or lifecycle rules.
  • Version ID – if bucket versioning is enabled.

Multipart Upload

  • Required for files larger than 5 GB.
  • Uploads the file in multiple parts (each ≤ 5 GB).
  • Improves upload speed and reliability for large files.
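
For illustration, the high-level AWS CLI performs multipart uploads automatically once a file crosses a configurable size threshold — a minimal sketch, with placeholder bucket and file names:

# Optional: tune when multipart kicks in and how big each part is
aws configure set default.s3.multipart_threshold 100MB
aws configure set default.s3.multipart_chunksize 50MB

# The CLI splits the file, uploads the parts in parallel, and reassembles them in S3
aws s3 cp ./big-backup.iso s3://my-demo-bucket/backups/big-backup.iso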

Key Takeaways

  • S3 = Simple, Scalable, Secure storage.
  • Used for files, backups, data lakes, and websites.
  • Buckets store objects, which are defined by keys (paths).
  • No real folders — just prefixes in object keys.
  • Multipart upload is required for big files (>5 GB).

Creating an Amazon S3 Bucket

1. Open S3 and Create a Bucket

  • In the AWS Management Console, open Amazon S3 → click Create bucket.
  • Select your region (e.g. Europe (Stockholm) eu-north-1).

Buckets are regional resources, even though S3 itself looks global.

2. Bucket Type

  • Some regions show a Bucket type option:

    • Choose General purpose (recommended).
    • Ignore Directory buckets — not needed for general use or exams.
  • If the option doesn’t appear, it’s automatically General purpose.

3. Bucket Name

  • Bucket names must be globally unique across all AWS accounts and regions.
  • Example: aisalkyn-demo-s3-v1.
  • If the name already exists, AWS shows an error.

4. Default Settings

Keep most defaults for now:

  • Object Ownership: ACLs disabled (recommended for security).
  • Block Public Access: Enabled (keeps bucket private).
  • Bucket Versioning: Disabled for now — can be turned on later.
  • Tags: none.
  • Default Encryption: Enabled → choose Amazon S3 managed key (SSE-S3) → Enable Bucket Key.

Click Create bucket → success message appears.


Viewing Buckets

  • You’ll see all buckets (from all regions) listed in the S3 console.
  • Use the search bar to find your bucket by name.

Uploading an Object

1. Open the Bucket

  • Click your new bucket → Upload → Add files.
  • Select a file, e.g. coffee.jpg (~100 KB).
  • Confirm the destination (your bucket name) → Upload.

2. View the Uploaded Object

  • The object appears under the Objects tab.
  • Click it to see:

    • Properties: size, type, date uploaded.
    • Object URL (public URL).

Accessing the Object

1. Open (Authenticated URL)

  • When you click Open, S3 shows your image (coffee.jpg). This works because you are authenticated in AWS.

2. Public URL (Access Denied)

  • Copy the Object URL and open it in a browser — you’ll get Access Denied.

Because the object and bucket are private (public access is blocked).

3. Pre-Signed URL Explanation

  • The working URL you saw earlier is a pre-signed URL — a temporary link that includes your credentials/signature.
  • It proves to S3 that you are authorized to view the file.
  • Others cannot use it; it expires after a limited time.
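
You can generate such a link yourself with the CLI — a quick sketch using the demo object from above:

# Create a temporary URL valid for one hour (3600 seconds)
aws s3 presign s3://aisalkyn-demo-s3-v1/coffee.jpg --expires-in 3600

Anyone holding the returned URL can GET the object until it expires; after that, requests fail with Access Denied again.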

Creating and Managing Folders

1. Create a Folder

  • In your bucket, click Create folder → name it images → Create folder.
  • Upload another file, e.g. beach.jpg, inside this folder.

2. Folder View

  • Navigate back — you’ll see the images folder under your bucket. It looks like folders in Google Drive or Dropbox, but remember: S3 has no real folders, only key prefixes.

3. Delete a Folder

  • Select the folder → Delete → type permanently delete → confirm.
  • The folder and its contents are removed.

Key Learnings

✅ S3 buckets are regional but names are globally unique.
✅ Objects are private by default — public URLs need permissions.
✅ A pre-signed URL grants temporary authenticated access.
✅ Folders are visual representations of key prefixes.
✅ Encryption (SSE-S3) protects data at rest automatically.

Amazon S3 Security Overview

S3 security ensures that only authorized users or systems can access your data.
There are two main types of security mechanisms:

  1. User-based security (IAM Policies)
  2. Resource-based security (Bucket Policies and ACLs)

1. User-Based Security

  • Managed through IAM Policies.
  • Defines which API calls a user, group, or role can perform.
  • Attached to IAM principals (users, roles).
  • Example actions: s3:GetObject, s3:ListBucket, etc.

Use case:
Grant S3 access to users or EC2 instances in your own account via IAM policies.
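
As a minimal sketch (the bucket name is a placeholder), an IAM policy granting read-only access to one bucket could look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadOnlyBucketAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}

Note that s3:ListBucket applies to the bucket ARN itself, while s3:GetObject applies to the objects inside it (/*).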


2. Resource-Based Security

Resource-based policies are attached directly to S3 resources (buckets or objects).

a) S3 Bucket Policies

  • Written in JSON format.
  • Control access at the bucket level.
  • Can:

    • Allow or deny access to specific users or accounts.
    • Allow cross-account access.
    • Make a bucket public.
    • Enforce encryption for all uploads.

Structure of a Bucket Policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
  • Effect – Allow or Deny
  • Principal – Who (user, account, or service)
  • Action – Which API calls (e.g., s3:GetObject)
  • Resource – Which bucket or object (arn:aws:s3:::bucket-name/*)

This example makes all objects publicly readable.


3. Access Control Lists (ACLs)

  • Object ACLs: fine-grained permissions for individual objects.
  • Bucket ACLs: permissions for the bucket itself (rarely used).
  • ACLs can be disabled (recommended) to simplify management.
  • Use IAM Policies or Bucket Policies instead.

4. Access Decision Logic

An IAM principal can access an S3 object only if:

  1. The IAM policy or bucket policy explicitly allows it, and
  2. There is no explicit deny blocking the action.

5. S3 Encryption (Optional Layer)

Objects can be encrypted for data protection at rest:

  • SSE-S3 (default) – Managed by AWS.
  • SSE-KMS – Uses AWS KMS for more control and auditability.
  • SSE-C – Customer provides the encryption key.
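
The encryption type is chosen per object at upload time; with the CLI, the --sse flag covers the first two options (bucket, file, and KMS key alias are placeholders):

# SSE-S3 (AES-256, keys managed by S3)
aws s3 cp report.pdf s3://example-bucket/report.pdf --sse AES256

# SSE-KMS (keys managed in AWS KMS)
aws s3 cp report.pdf s3://example-bucket/report.pdf --sse aws:kms --sse-kms-key-id alias/my-key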

6. Common Access Scenarios

| Scenario | Mechanism Used | Explanation |
| --- | --- | --- |
| IAM user in same account | IAM Policy | Grants direct access to S3 |
| EC2 instance needs access | IAM Role | Attach a role with S3 permissions |
| User from another AWS account | Bucket Policy | Enables cross-account access |
| Website or public access | Bucket Policy | Makes objects publicly readable |

7. Block Public Access (Account and Bucket Level)

AWS introduced Block Public Access settings as a safeguard.

  • Found under Permissions → Block public access.
  • Overrides all public permissions if enabled.
  • Prevents accidental exposure of company data.
  • Can be applied:

    • Per bucket
    • Account-wide

Recommendation:
Keep it enabled unless you intentionally need public access (e.g., for a static website).
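
The same safeguard can be set per bucket from the CLI — a sketch with a placeholder bucket name:

aws s3api put-public-access-block \
  --bucket example-bucket \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true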


8. Summary of S3 Security Layers

| Security Type | Applies To | Managed By | Typical Use |
| --- | --- | --- | --- |
| IAM Policy | Users, Roles | IAM | User-based access within account |
| Bucket Policy | Buckets | S3 Console / JSON | Cross-account or public access |
| Object ACL | Objects | S3 | Fine-grained legacy control |
| Encryption | Objects | S3/KMS | Protects data at rest |
| Block Public Access | Buckets / Accounts | S3 | Prevents data leaks |

Key Takeaways

  • Use IAM Policies for users and roles.
  • Use Bucket Policies for cross-account or public access.
  • Avoid ACLs unless absolutely needed.
  • Keep Block Public Access enabled for safety.
  • Always consider encryption for sensitive data.

Amazon S3 – Static Website Hosting

Amazon S3 can host static websites, meaning websites made of fixed files such as HTML, CSS, JavaScript, images, and other static content (no backend code like PHP or Node.js).


1. How It Works

  • You create a bucket in S3.
  • Upload your website files (e.g., index.html, style.css, images/).
  • Enable Static Website Hosting in the bucket’s Properties tab.
  • Your site becomes accessible via an S3 website endpoint URL.

Example URLs:

http://bucket-name.s3-website-us-east-1.amazonaws.com
http://bucket-name.s3-website.eu-north-1.amazonaws.com

(Depending on the region, the website endpoint uses either a dash or a dot between “s3-website” and the region name.)


2. Requirements

  • The bucket must have public read access, or you’ll get a 403 Forbidden error.
  • The bucket name must match the domain (if using a custom domain). Example: If your domain is example.com, the bucket name must also be example.com.

3. Making the Website Public

To make your site accessible on the internet:

  • Disable “Block all public access” for the bucket.
  • Add a Bucket Policy that grants public read access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}

Without this, the browser will show:

403 Forbidden

4. Steps Summary

  1. Create an S3 bucket (name matches domain if using one).
  2. Upload website files (index.html, error.html, etc.).
  3. Enable Static Website Hosting:
  • Specify Index document: index.html
  • Optional Error document: error.html
  4. Make bucket public with a bucket policy.
  5. Test your website using the S3 endpoint URL.
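
These steps can also be scripted — a rough CLI sketch, assuming a local ./website/ folder and a placeholder bucket name:

# Upload all site files
aws s3 sync ./website/ s3://example.com/

# Enable static website hosting with index and error documents
aws s3 website s3://example.com/ --index-document index.html --error-document error.html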

5. Optional: Use Custom Domain

  • You can use Amazon Route 53 (or any DNS provider) to point your domain to the S3 website endpoint using a CNAME or Alias record.
  • For HTTPS support, use Amazon CloudFront in front of S3.

Key Takeaways

✅ S3 can host static websites directly, no servers needed.
✅ You must enable public access for the site to load.
✅ A Bucket Policy controls public read permissions.
✅ Website endpoints differ slightly by region format.
✅ For custom domains and HTTPS, integrate with Route 53 and CloudFront.

Hands-On: Enable an Amazon S3 Bucket for Static Website Hosting

1. Upload Website Files

  1. Open your S3 bucket in the AWS console.
  2. Click Upload → Add files → upload another image (beach.jpg).
  • Now you should have at least:

    • coffee.jpg
    • beach.jpg

2. Enable Static Website Hosting

  1. Go to the Properties tab of your bucket.
  2. Scroll all the way down to Static website hosting → click Edit.
  3. Choose Enable.
  4. Select Host a static website.
  5. Enter:
  • Index document → index.html
  • (Optional) Error document → error.html
  6. Click Save changes.

⚠️ AWS reminds you:
“Make sure all content is publicly readable.”
You already did this using a Bucket Policy in the previous lecture.


3. Upload the Website Homepage

  1. Go back to the Objects tab.
  2. Click Upload → Add files → index.html → Upload.
  • This file is your homepage (e.g., “I love coffee. Hello World!”).
  3. Once uploaded, you should now see three objects:

   index.html
   coffee.jpg
   beach.jpg

4. Test the Static Website

  1. Go to Properties → Static website hosting again.
  2. Copy the Bucket website endpoint URL (e.g.):

   http://your-bucket-name.s3-website-us-east-1.amazonaws.com

  3. Paste it into your browser → You should see:

   I love coffee. Hello world!

(This content comes from your index.html.)


5. Verify Image URLs

  • Right-click coffee.jpg → Open in new tab → the image loads.
  • Replace the filename in the URL with beach.jpg → that image is accessible too.

✅ Both images are publicly viewable because your bucket is public and static hosting is enabled.


6. Result

You have successfully:

  • Created an S3 bucket.
  • Uploaded HTML and image files.
  • Enabled static website hosting.
  • Verified that files are accessible over the internet.

7. Next Steps (Optional Enhancements)

  • Add error.html for 404 pages.
  • Configure Route 53 for a custom domain (e.g., www.mywebsite.com).
  • Add CloudFront for HTTPS and caching.
  • Apply lifecycle policies to manage old files.

Amazon S3 Versioning

Versioning in Amazon S3 allows you to keep multiple versions of an object in the same bucket — ensuring data protection, recoverability, and safe updates.


1. What Is S3 Versioning?

  • Versioning is a bucket-level feature that stores every version of an object under the same key (filename).
  • When versioning is enabled, every time you upload a new file with the same name, S3 creates a new version, rather than overwriting the existing one.

Example:

| Upload Action | Object Key | Version ID |
| --- | --- | --- |
| First upload | index.html | v1 |
| Re-upload (overwrite) | index.html | v2 |
| Re-upload again | index.html | v3 |
You can later access, restore, or permanently delete any of these versions.


2. Why Enable Versioning?

Protection Against Accidental Deletion

  • If a file is deleted, S3 adds a delete marker instead of removing the file permanently.
  • You can remove the delete marker to restore the file.

Recovery from Overwrites

  • You can roll back to an earlier version of a file if a newer upload overwrote it.

Audit and Change History

  • Keeps track of all object changes, useful for compliance and audits.

Disaster Recovery

  • Restores lost or corrupted data easily.

3. Important Notes

  • Versioning is off by default.
    You must explicitly enable it on the bucket.

  • Existing objects before enabling versioning get a special Version ID = null.
    Any future uploads receive unique version IDs.

  • Suspending versioning:

    • Does not delete previous versions.
    • New uploads after suspension will no longer create versions, but older versions remain retrievable.
  • Costs:

    • Each version counts as a separate stored object.
    • Storing multiple versions increases S3 storage costs.

4. How to Enable Versioning (Console)

  1. Open your S3 bucket → Properties tab.
  2. Scroll to Bucket Versioning → click Edit.
  3. Select Enable → Save changes.

Now every object in that bucket will maintain versions automatically.


5. How It Works (Example)

  1. Upload index.html → Version ID = v1.
  2. Upload the same index.html again → Version ID = v2.
  3. Upload once more → Version ID = v3.
  4. Delete the file → a delete marker is added.
  5. To restore, remove the delete marker → v3 reappears.

6. CLI Equivalent

Enable versioning using AWS CLI:

aws s3api put-bucket-versioning \
  --bucket my-demo-bucket \
  --versioning-configuration Status=Enabled

Check versioning status:

aws s3api get-bucket-versioning --bucket my-demo-bucket

List versions:

aws s3api list-object-versions --bucket my-demo-bucket

7. Key Takeaways

| Feature | Description |
| --- | --- |
| Enabled at bucket level | Affects all objects in that bucket |
| Protects from accidental deletion | Restorable via delete marker removal |
| Keeps full history | Rollback possible anytime |
| Can be suspended safely | Old versions stay intact |
| Increases storage use | Each version billed separately |

Hands-On: Playing with Amazon S3 Versioning

Versioning lets you safely update and restore files without losing old data.


1. Enable Versioning

  1. Go to your S3 bucket → Properties tab.
  2. Scroll to Bucket Versioning → click Edit.
  3. Select Enable → click Save changes.

Now versioning is active — future uploads will generate unique version IDs.


2. Update Your Website

  1. Go to your bucket’s Properties → Static Website Hosting.

  2. Copy the Website Endpoint URL.

  • For example:

     http://your-bucket-name.s3-website-us-east-1.amazonaws.com
    
  3. Open it — you’ll see your original page:

   I love coffee.

  4. Edit your local index.html file and change the text to:

   I REALLY love coffee.

  5. Save it and upload this updated index.html back into the same S3 bucket.

Result:

  • Page updated → Refresh the site → You now see:
  I REALLY love coffee.

3. Viewing Versions

  1. In your bucket’s Objects tab, toggle Show versions (top-right switch).
  2. Observe:
  • coffee.jpg and beach.jpg show Version ID = null (uploaded before versioning).
  • index.html now shows two versions:

    • Version ID = null → the first upload.
    • A new unique version ID → the updated upload.

S3 keeps both versions of the same file.


4. Roll Back to Previous Version

Suppose you want to restore the old text (“I love coffee.”):

  1. With Show versions enabled, select the latest version ID of index.html.
  2. Click Delete → type permanently delete → confirm.

✅ This is a permanent delete — it removes that version only.

  3. Refresh your site → you’ll see the previous version again:

I love coffee.

5. Deleting Objects (Using Delete Markers)

Now let’s see how deletes behave when versioning is enabled:

  1. Disable Show versions (normal view).
  2. Select coffee.jpg → Delete (no “permanently delete” text this time).
  3. Confirm the delete.

Result:

  • The file seems gone in the console.
  • But if you enable Show versions again:

    • You’ll see a Delete Marker (a new “version” marking it deleted).
    • The actual previous version of coffee.jpg still exists beneath it.

6. Restoring a Deleted File

  1. Select the Delete Marker for coffee.jpg.
  2. Click Delete → confirm with “permanently delete.”

✅ This removes the delete marker only — not the actual file.

  3. Refresh your browser → The image (coffee.jpg) reappears on your website.
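
The same restore can be scripted by deleting the delete marker’s version ID — a sketch with placeholder names and IDs:

# Find the delete marker's version ID
aws s3api list-object-versions --bucket example-bucket --prefix coffee.jpg

# Deleting the delete marker (not an object version) restores the file
aws s3api delete-object --bucket example-bucket --key coffee.jpg --version-id EXAMPLE-DELETE-MARKER-ID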

7. Key Concepts Illustrated

| Action | Result |
| --- | --- |
| Upload same file twice | Creates new version |
| Delete file (normal delete) | Adds delete marker |
| Delete specific version | Permanently removes that version |
| Delete delete marker | Restores the file |
| Disable versioning | Stops creating new versions but keeps existing ones |

8. Summary

  • Versioning protects data from accidental overwrites and deletes.
  • Every upload = new version.
  • Deletion adds a delete marker, not a real delete.
  • You can restore, roll back, or permanently remove versions anytime.
  • Old, pre-versioned files have Version ID = null.

Amazon S3 Replication Overview

Amazon S3 supports automatic, asynchronous replication of objects between buckets — ensuring your data is copied and synchronized for redundancy, compliance, or testing purposes.

There are two types of replication:

  1. CRR (Cross-Region Replication) → Between different AWS regions
  2. SRR (Same-Region Replication) → Between buckets in the same region

1. How S3 Replication Works

  • Replication is asynchronous — it happens in the background.
  • When you upload or update an object in the source bucket, S3 automatically copies it to the destination bucket.
  • Both the source and destination buckets must have Versioning enabled.
  • S3 uses IAM permissions to allow the replication process to read from the source and write to the destination.

2. Replication Types

| Type | Full Name | Source & Destination | Common Use Cases |
| --- | --- | --- | --- |
| CRR | Cross-Region Replication | Different AWS Regions | Compliance (data redundancy across geographies), disaster recovery, reduced latency for global users, cross-account data sharing |
| SRR | Same-Region Replication | Same AWS Region | Log aggregation, real-time data sync between environments (Prod → Test), backup within the same region |

3. Key Requirements

  1. Versioning must be enabled on both source and destination buckets.
  2. The replication role (IAM Role) must grant S3 permission to:
  • Read objects from the source bucket.
  • Write objects into the destination bucket.
  3. The replication configuration is defined in the source bucket.
  4. Replication happens only for new objects — existing objects are not copied automatically unless explicitly requested (via S3 Batch Replication).

4. Important Details

  • Replication is one-way only (source → destination).
  • It’s asynchronous, meaning replication might take a short delay.
  • Replication also applies to:

    • Metadata and tags
    • Object ACLs (if you enable ACL replication)
    • Encryption settings (depends on SSE type)
  • Deletes are not replicated by default, but you can enable delete marker replication if needed.


5. Example Scenarios

Cross-Region Replication (CRR):

  • Region A: my-source-bucket (US-East-1)
  • Region B: my-backup-bucket (EU-West-1) → Every new upload in Region A is automatically copied to Region B.

Same-Region Replication (SRR):

  • Region: us-east-1
  • Source: prod-logs-bucket
  • Destination: analytics-logs-bucket → Automatically replicates logs for analysis or backup in the same region.

6. Benefits of Replication

  • Data protection & compliance (multiple region backups)
  • Business continuity (DR-ready setup)
  • Faster local access for distributed teams
  • Data segregation for testing, auditing, or analysis
  • Automatic background operation (no manual scripts needed)


7. CLI Example (for reference)

Enable replication (simplified example):

aws s3api put-bucket-replication \
  --bucket my-source-bucket \
  --replication-configuration '{
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [{
      "Status": "Enabled",
      "Prefix": "",
      "Destination": {
        "Bucket": "arn:aws:s3:::my-destination-bucket"
      }
    }]
  }'
Enter fullscreen mode Exit fullscreen mode

Key Takeaways

| Feature | CRR | SRR |
| --- | --- | --- |
| Regions | Different | Same |
| Versioning | Required | Required |
| IAM Role | Required | Required |
| Asynchronous | Yes | Yes |
| Common Use Case | Disaster recovery, compliance | Log aggregation, testing |
| Cross-account | Supported | Supported |

Amazon S3 Replication – Important Notes

After enabling S3 Replication (CRR or SRR), keep the following behavior and limitations in mind:


1. Replication Applies Only to New Objects

  • Only new uploads after enabling replication are automatically replicated.
  • Existing objects in the source bucket are not replicated automatically.

Solution:
Use S3 Batch Replication to:

  • Replicate existing objects that were uploaded before replication was enabled.
  • Retry failed replication events.

2. Handling Deletes

Replication treats delete actions carefully to avoid unintended data loss.

| Type of Delete | Replicated? | Explanation |
| --- | --- | --- |
| Delete marker (soft delete) | ✅ Optional | Can be replicated if you enable this setting. |
| Permanent delete (specific version ID) | ❌ Not replicated | Prevents malicious or accidental deletes from propagating. |

In other words:
If someone permanently deletes a version in the source bucket, that delete is not replicated to the destination.


3. No Replication Chaining

Replication does not cascade automatically:

  • If Bucket A → Bucket B replication is enabled, and Bucket B → Bucket C replication is also enabled, then objects uploaded to Bucket A are not replicated to Bucket C.

Replication works only between the directly linked source and destination buckets.


4. Summary Table

| Feature / Setting | Behavior |
| --- | --- |
| Default replication | Only new objects |
| Existing object sync | Use Batch Replication |
| Delete markers | Optional replication |
| Permanent deletes | Not replicated |
| Replication chaining | Not supported |
| Failed replications | Can be retried using Batch Replication |

5. Key Takeaways

  • Batch Replication fills the gap for historical and failed objects.
  • Delete marker replication is optional — handle with caution.
  • Permanent deletions are never propagated (security feature).
  • Replication paths are one-to-one, not transitive.

Hands-On: Practicing Amazon S3 Replication

Objective

Set up Cross-Region Replication (CRR) or Same-Region Replication (SRR) between two S3 buckets and verify that new objects and delete markers replicate automatically.


1. Create the Source (Origin) Bucket

  1. Open the Amazon S3 Console → click Create bucket.
  2. Name it, for example:
   s3-stephane-bucket-origin-v2
  3. Choose a region (e.g., eu-west-1).
  4. Scroll down → Enable bucket versioning → click Create bucket.
  • Versioning is required for replication.

2. Create the Destination (Replica) Bucket

  1. Create another bucket named:
   s3-stephane-bucket-replica-v2
  2. Choose:
  • Same region → for SRR
  • Different region (e.g., us-east-1) → for CRR
  3. Scroll down → Enable versioning → click Create bucket.

Result:
You now have two versioned buckets:

  • Source bucket (Origin): eu-west-1
  • Target bucket (Replica): us-east-1 (for CRR example)

3. Upload an Initial File (Before Replication)

  • In the origin bucket, upload beach.jpg.

    • This file will not replicate yet, because replication rules are not configured.
    • Replication only affects new objects after setup.

4. Create a Replication Rule

  1. Go to the origin bucket → Management tab.

  2. Scroll to Replication rules → click Create replication rule.

  3. Configure:

  • Name: DemoReplicationRule
  • Status: Enabled
  • Scope: All objects in the bucket
  • Destination:

    • Select “A bucket in this account”
    • Paste your replica bucket name (s3-stephane-bucket-replica-v2)
    • AWS automatically detects the destination region (e.g., us-east-1)
  4. IAM Role: Choose “Create a new role” (S3 will generate one for replication).

  5. When prompted:

  • “Do you want to replicate existing objects?” → No (existing files will not be copied automatically).
  6. Click Save.

Replication rule created and ready.


5. Test Replication

  1. In the origin bucket, upload a new file, e.g., coffee.jpg.
  2. Wait ~10 seconds.
  3. Open the replica bucket → refresh → you’ll see coffee.jpg appear automatically.
  4. Enable Show versions in both buckets:
  • Version IDs match exactly between origin and replica.

6. Verify Cross-Region Replication Works

  • Upload another version of an existing file, e.g., beach.jpg.
  • The new version (e.g., version ID DK2) appears in the origin bucket.
  • Within a few seconds, that same version appears in the replica bucket.

    • This confirms version-level replication.

7. Test Delete Marker Replication

  1. In the origin bucket, go to Management → Edit replication rule.
  2. Scroll down → Enable Delete marker replication → Save changes.

Now test it:

  1. In the origin bucket, delete coffee.jpg.
  • A delete marker is added (since the bucket is versioned).

  2. Wait a few seconds → refresh the replica bucket.
  • The delete marker is now replicated.

  • When “Show versions” is off, coffee.jpg disappears in both buckets.

  • When “Show versions” is on, you can still see all versions and the delete marker.


8. Test Permanent Delete Behavior

  1. In the origin bucket, delete a specific version ID of beach.jpg.
  • This is a permanent delete.

  2. Check the replica bucket.
  • The delete does not propagate — the file remains intact.

Reason:
Permanent deletes are never replicated to prevent data loss from accidental or malicious deletions.


9. Summary of Replication Behavior

| Action | Replicated? | Notes |
| --- | --- | --- |
| New uploads | ✅ Yes | Automatic background copy |
| Existing objects | ❌ No | Use Batch Replication |
| Object version updates | ✅ Yes | Version IDs preserved |
| Delete marker | ✅ Optional | Must enable this feature |
| Permanent delete | ❌ No | Prevents malicious deletions |
| Metadata, tags | ✅ Yes | Included if enabled |
| Chained replication (A→B→C) | ❌ No | One-to-one only |

10. Key Takeaways

  • Replication = Versioned + IAM Role + Asynchronous copy.
  • Use CRR for disaster recovery and compliance.
  • Use SRR for logs, testing, and intra-region duplication.
  • Batch Replication handles pre-existing or failed copies.
  • Delete marker replication is optional; permanent deletes are never replicated.

Amazon S3 Storage Classes Overview

Amazon S3 provides different storage classes to balance cost, availability, and retrieval speed depending on data access frequency and business needs.

You can:

  • Choose a storage class when uploading an object.
  • Change it manually later.
  • Automate transitions with S3 Lifecycle policies or S3 Intelligent-Tiering.

1. Key Concepts

Durability

  • S3 durability is “11 nines” (99.999999999%) across all classes. → If you store 10 million objects, statistically you might lose one every 10,000 years.

Availability

  • Defines how often data can be accessed when needed.
  • Varies by storage class (e.g., 99.99% for Standard).

2. Amazon S3 Storage Classes

| Storage Class | Availability | Minimum Storage Duration | Use Case |
| --- | --- | --- | --- |
| S3 Standard (General Purpose) | 99.99% | None | Frequently accessed data, big data analytics, mobile & gaming apps, content distribution |
| S3 Standard-IA (Infrequent Access) | 99.9% | 30 days | Infrequently accessed data that needs rapid access when required — backups, DR |
| S3 One Zone-IA | 99.5% | 30 days | Infrequent-access data stored in a single AZ — recreatable data, secondary backups |
| S3 Glacier Instant Retrieval | 99.9% | 90 days | Archival data needing instant (milliseconds) retrieval |
| S3 Glacier Flexible Retrieval | 99.9% | 90 days | Archival data that can tolerate 1–12 hours retrieval (formerly “S3 Glacier”) |
| S3 Glacier Deep Archive | 99.9% | 180 days | Long-term cold storage — lowest cost, retrieval in 12–48 hours |
| S3 Intelligent-Tiering | 99.9% | None | Automatically moves data between access tiers based on usage patterns |

3. Glacier Tiers Explained

| Class | Retrieval Time | Typical Use |
| --- | --- | --- |
| Glacier Instant Retrieval | Milliseconds | Quarterly-accessed backups |
| Glacier Flexible Retrieval | Expedited: 1–5 min, Standard: 3–5 hrs, Bulk: 5–12 hrs | Archival data with flexible retrieval times |
| Glacier Deep Archive | Standard: 12 hrs, Bulk: 48 hrs | Long-term archives (compliance, legal, historic data) |

4. S3 Intelligent-Tiering (Smart Automation)

Automatically moves objects across tiers based on access frequency.
Requires a small monthly monitoring and automation fee but no retrieval cost.

Tiers

  • Frequent Access Tier: Default when object is uploaded.
  • Infrequent Access Tier: After 30 days of no access.
  • Archive Instant Access: After 90 days of no access (automatic).
  • Archive Access (optional): 90–700+ days.
  • Deep Archive Access (optional): 180–700+ days.

✅ Ideal for unpredictable access patterns — “Set and forget” storage optimization.
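
The optional Archive and Deep Archive access tiers are opted into per bucket. A hedged CLI sketch (bucket name and configuration ID are placeholders; check the current API reference for the exact JSON shape):

aws s3api put-bucket-intelligent-tiering-configuration \
  --bucket example-bucket \
  --id ArchiveConfig \
  --intelligent-tiering-configuration '{
    "Id": "ArchiveConfig",
    "Status": "Enabled",
    "Tierings": [
      { "Days": 90,  "AccessTier": "ARCHIVE_ACCESS" },
      { "Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS" }
    ]
  }'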


5. Cost vs Performance Summary

| Storage Class | Cost ($) | Access Speed | Durability | Availability |
| --- | --- | --- | --- | --- |
| S3 Standard | High | Instant | 11 nines | 99.99% |
| Standard-IA | Medium | Instant | 11 nines | 99.9% |
| One Zone-IA | Lower | Instant | 11 nines | 99.5% |
| Glacier Instant | Very Low | Milliseconds | 11 nines | 99.9% |
| Glacier Flexible | Very Low | Minutes–Hours | 11 nines | 99.9% |
| Glacier Deep Archive | Lowest | Hours–Days | 11 nines | 99.9% |
| Intelligent-Tiering | Variable | Instant | 11 nines | 99.9% |

6. Lifecycle Transitions

You can define S3 Lifecycle Rules to automatically move objects to cheaper classes over time.
Example:

  • Day 0 → S3 Standard
  • Day 30 → S3 Standard-IA
  • Day 90 → S3 Glacier Instant Retrieval
  • Day 180 → S3 Glacier Deep Archive

7. Summary for Exam

  • Durability (11 nines) is same across all storage classes.
  • Availability and retrieval speed decrease as cost decreases.
  • Intelligent-Tiering = automatic cost optimization.
  • Glacier family = archival tiers (cold → colder → coldest).
  • IA tiers = lower cost for less-accessed data with retrieval fees.

1. Create a Bucket

  1. Open the S3 console → Create bucket.
  2. Name it for example s3-storage-classes-demos-2022.
  3. Choose any region → Create bucket.

2. Upload an Object

  1. Inside your new bucket, click Upload → Add files.
  2. Select coffee.jpg.
  3. Expand Properties → Storage class to view all available classes.

3. Review Available Storage Classes

| Class | Description / Use Case |
| --- | --- |
| S3 Standard | Default tier for frequently accessed data. |
| S3 Intelligent-Tiering | Auto-moves objects between tiers based on access patterns. |
| S3 Standard-IA | For infrequently accessed data that still needs low-latency access. |
| S3 One Zone-IA | Stored in a single AZ (cheaper, less resilient). Use for re-creatable data. |
| S3 Glacier Instant Retrieval | Millisecond retrieval for cold data. |
| S3 Glacier Flexible Retrieval | Retrieval in minutes to hours; archival storage. |
| S3 Glacier Deep Archive | Lowest cost, retrieval in 12–48 hours. |
| Reduced Redundancy Storage (RRS) | Deprecated; no longer recommended. |

(Each class shows number of AZs, minimum storage duration, and billing details.)


4. Choose a Storage Class and Upload

  • For example, choose Standard-IA and upload coffee.jpg.
  • After uploading, confirm in the Objects tab → the storage class column shows STANDARD_IA.

5. Change an Object’s Storage Class

  1. Select the object → Properties → Edit Storage class.
  2. Change to One Zone-IA → Save changes.
  3. Object is moved to One Zone-IA.
  4. Repeat to switch to Glacier Instant Retrieval or Intelligent-Tiering as desired.

✅ You can re-classify any object manually after upload.
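
From the CLI, the class is chosen at upload time with --storage-class, and an existing object can be re-classified by copying it over itself — a sketch using the demo bucket name above:

# Upload directly into Standard-IA
aws s3 cp coffee.jpg s3://s3-storage-classes-demos-2022/coffee.jpg --storage-class STANDARD_IA

# Change the class of an existing object by copying it in place
aws s3 cp s3://s3-storage-classes-demos-2022/coffee.jpg \
  s3://s3-storage-classes-demos-2022/coffee.jpg --storage-class ONEZONE_IA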


6. Automate Class Transitions (Lifecycle Rules)

  1. Go back to the bucket → Management tab → Create lifecycle rule.
  2. Name it DemoRule.
  3. Apply to All objects in the bucket.
  4. Configure transitions for the current version of objects, for example:
| Days after creation | Transition to Storage Class |
| --- | --- |
| 30 days | Standard-IA |
| 60 days | Intelligent-Tiering |
| 180 days | Glacier Flexible Retrieval |

  5. Review and Save rule.

✅ Lifecycle rules automatically move objects between classes to reduce cost without manual intervention.


7. Key Takeaways

  • You can manually assign or edit a storage class for any object.
  • Lifecycle rules enable automatic transitions based on object age.
  • Intelligent-Tiering is best for unknown access patterns.
  • Glacier tiers are for archival storage with retrieval delays.
  • Reduced Redundancy Storage is deprecated — avoid using it.



Amazon S3 Express One Zone (High-Performance Storage Class)

1. Overview

S3 Express One Zone is a high-performance, low-latency storage class designed for data-intensive workloads that need extremely fast access speeds.
Unlike standard S3 buckets, this storage class uses directory buckets located in a single Availability Zone (AZ).


2. Architecture

  • Stored in one AZ only → not replicated across multiple zones.
  • Uses directory buckets instead of traditional buckets.
  • You explicitly choose the Availability Zone when creating it.
  • Delivers up to 10× faster performance than S3 Standard.
  • Costs about 50 % less than S3 Standard due to single-AZ design.

3. Performance & Availability

| Metric | S3 Standard | S3 Express One Zone |
| --- | --- | --- |
| Latency | Milliseconds | Single-digit milliseconds |
| Throughput | High | Ultra-high (hundreds of thousands of requests per second) |
| Availability | 99.99% | Lower (single-AZ) |
| Durability | 11 nines | High, but within one AZ |

4. Benefits

✅ Up to 10× performance improvement for read/write operations.
✅ Lower cost (≈ 50 % cheaper) than S3 Standard.
✅ Reduced network latency when co-located with compute.
✅ Ideal for short-term, high-speed workloads.


5. Limitations

⚠️ Single AZ exposure: if that AZ fails, the data becomes unavailable or lost.
⚠️ Requires creating directory buckets (not the same as regular S3 buckets).
⚠️ Designed for specific workloads, not general storage.


6. Common Use Cases

  • AI / ML training pipelines (e.g., SageMaker, EMR, Glue, Athena).
  • Financial modeling and simulation.
  • Media transcoding & rendering.
  • High-Performance Computing (HPC).
  • Low-latency analytics and real-time data processing.

7. Summary

| Feature | S3 Express One Zone |
| --- | --- |
| Scope | Single Availability Zone |
| Bucket Type | Directory Bucket |
| Performance | ~10× S3 Standard |
| Cost | ~50 % lower than S3 Standard |
| Latency | Single-digit ms |
| Availability | Lower (Single AZ) |
| Durability | High within AZ |
| Use Cases | AI/ML, HPC, Analytics, Media Processing |

Key takeaway:
S3 Express One Zone is the fastest and lowest-latency S3 option, built for high-performance, AZ-specific workloads, where speed and locality matter more than multi-AZ resilience.

Amazon S3 Object Transitions and Lifecycle Rules

Lifecycle rules let you automate how objects move between storage classes and how long they’re retained before deletion.


1. Object Transitions Between Storage Classes

Objects can move manually or automatically using lifecycle rules.

Common transition paths include:

Standard → Standard-IA → Intelligent-Tiering → One Zone-IA → Glacier Flexible Retrieval → Glacier Deep Archive

Example choices:

  • Infrequently accessed data → move to Standard-IA.
  • Archival data → move to Glacier tiers or Deep Archive.

2. Lifecycle Rule Components

Each rule can contain:

a. Transition Actions

  • Move objects to another storage class after a set time.
  • Example:

    • Move to Standard-IA after 60 days.
    • Move to Glacier after 180 days.

b. Expiration Actions

  • Permanently delete objects after a defined period.
  • Examples:

    • Delete access logs after 365 days.
    • Delete old versions if versioning is enabled.
    • Delete incomplete multipart uploads older than 14 days.

c. Scope

  • Apply to the entire bucket or only to objects with:

    • A prefix (e.g., images/, logs/).
    • Specific tags (e.g., Department=Finance).

3. Example Scenarios

Scenario 1 – Website Images

  • Source images:

    • Stored in S3 Standard for 60 days.
    • Transition to Glacier after 60 days.
  • Thumbnails:

    • Stored in One Zone-IA (immediate but cheap access).
    • Expire after 60 days (since they can be re-created).

Scenario 2 – Deleted Objects Retention Policy

Requirement:

  • Deleted objects recoverable instantly for 30 days,
  • Then recoverable within 48 hours for 1 year.

✅ Design:

  1. Enable versioning → deleted objects get a delete marker.
  2. Lifecycle rule:
  • Transition non-current versions to Standard-IA after 30 days.
  • Transition those versions to Glacier Deep Archive after 365 days.

4. Determining Optimal Transition Times

Use Amazon S3 Storage Class Analytics:

  • Provides data on access patterns between S3 Standard and Standard-IA.
  • Generates a daily CSV report with recommendations.
  • Does not analyze One Zone-IA or Glacier tiers.
  • Results appear within 24–48 hours of activation.

5. Key Takeaways

| Feature | Purpose |
| --- | --- |
| Lifecycle Rules | Automate object transitions and deletions |
| Transition Actions | Move objects to cheaper tiers |
| Expiration Actions | Delete data or old versions automatically |
| Prefixes & Tags | Target specific object groups |
| S3 Analytics | Recommend cost-efficient transition timings |

6. Exam Tip

You don’t need to memorize exact durations, but you must:

  • Know which storage classes support transitions,
  • Understand when to use prefixes, tags, and versioning, and
  • Recognize that S3 Analytics helps optimize rules between Standard and IA classes.

Hands-On Lab: Creating and Configuring S3 Lifecycle Rules

Lifecycle rules automate object transitions, deletions, and cleanup operations inside your S3 buckets.


1. Navigate to Lifecycle Rules

  1. Open your S3 bucket in the AWS console.
  2. Go to the Management tab.
  3. Scroll to Lifecycle rules → click Create lifecycle rule.
  4. Name your rule (e.g., demo-rule).
  5. Apply the rule to all objects in the bucket and acknowledge the warning.

2. Available Lifecycle Rule Actions

AWS gives you five possible rule actions, covering both versioned and non-versioned objects:

| Action Type | Purpose |
| --- | --- |
| 1️⃣ Move current versions between storage classes | Transition the latest version of each object. |
| 2️⃣ Move non-current versions between storage classes | Transition older versions (if versioning is enabled). |
| 3️⃣ Expire current versions | Automatically delete live (current) objects after a set time. |
| 4️⃣ Permanently delete non-current versions | Fully remove old versions after a retention period. |
| 5️⃣ Delete expired objects / delete markers / incomplete uploads | Clean up unnecessary data (e.g., abandoned uploads or empty delete markers). |

3. Configuring Transitions

Current Versions

  • Example configuration:

    • Move to Standard-IA → after 30 days
    • Move to Intelligent-Tiering → after 60 days
    • Move to Glacier Instant Retrieval → after 90 days
    • Move to Glacier Flexible Retrieval → after 180 days
    • Move to Glacier Deep Archive → after 365 days

Result: Current objects will automatically move down the storage hierarchy over time — saving cost while preserving data.


Non-Current Versions

  • Example configuration:

    • Move to Glacier Flexible Retrieval after 90 days
    • Move to Deep Archive after 365 days

Use Case: Keep older object versions cheaply for audits or rollback.


4. Configuring Expiration and Deletion

| Setting | Example Action |
| --- | --- |
| Expire current versions | Delete after 700 days |
| Permanently delete non-current versions | Delete after 700 days |
| Delete expired objects / incomplete multipart uploads | Automatically clean up failed uploads older than 14 days |

Use Case:
Keep your bucket tidy and prevent unused data from consuming storage.


5. Review Timeline Visualization

After configuring your actions:

  • AWS shows a timeline (a visual sequence) showing when transitions and deletions will occur for:

    • Current object versions
    • Non-current (old) versions
  • Review this to confirm the correct order and timing.

If everything looks good → click Create rule.


6. Rule Execution

  • The lifecycle rule operates in the background.
  • AWS automatically transitions or expires objects based on your configuration — no manual action required.

7. Summary

| Lifecycle Function | Purpose |
| --- | --- |
| Transition actions | Move objects to cheaper storage classes automatically |
| Expiration actions | Delete objects after a retention period |
| Non-current version management | Control storage cost of versioned data |
| Cleanup | Remove incomplete uploads and unused delete markers |
| Automation | Reduces manual maintenance and optimizes cost over time |

Key Takeaways

  • Lifecycle rules = cost management + data retention automation.
  • Works seamlessly with versioning and storage classes.
  • Use prefixes or tags for selective rule targeting.
  • Always verify the timeline view before saving changes.

Hands-On Extension: Configure Lifecycle Rules via AWS CLI

1. Prerequisites

  • AWS CLI installed and configured (aws configure)
  • Existing S3 bucket (e.g., s3-demo-lifecycle-lab)
  • IAM user or role with s3:PutLifecycleConfiguration permission

2. Create a Lifecycle Configuration JSON

Save the following as lifecycle.json:

{
  "Rules": [
    {
      "ID": "DemoRule",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 60, "StorageClass": "INTELLIGENT_TIERING" },
        { "Days": 90, "StorageClass": "GLACIER_IR" },
        { "Days": 180, "StorageClass": "GLACIER" },
        { "Days": 365, "StorageClass": "DEEP_ARCHIVE" }
      ],
      "Expiration": { "Days": 700 },
      "NoncurrentVersionTransitions": [
        { "NoncurrentDays": 90, "StorageClass": "GLACIER" },
        { "NoncurrentDays": 365, "StorageClass": "DEEP_ARCHIVE" }
      ],
      "NoncurrentVersionExpiration": { "NoncurrentDays": 700 },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 14 }
    }
  ]
}

3. Apply the Lifecycle Configuration

aws s3api put-bucket-lifecycle-configuration \
  --bucket s3-demo-lifecycle-lab \
  --lifecycle-configuration file://lifecycle.json

✅ This command attaches your lifecycle rules to the specified bucket.


4. Verify Configuration

aws s3api get-bucket-lifecycle-configuration \
  --bucket s3-demo-lifecycle-lab

You should see the JSON output reflecting your rule.


5. Delete Lifecycle Configuration (Optional)

aws s3api delete-bucket-lifecycle \
  --bucket s3-demo-lifecycle-lab

6. Key CLI Learning Points

| Command | Purpose |
| --- | --- |
| put-bucket-lifecycle-configuration | Create or update lifecycle rules |
| get-bucket-lifecycle-configuration | View existing rules |
| delete-bucket-lifecycle | Remove rules from a bucket |

Amazon S3 Event Notifications

Amazon S3 can automatically trigger events when certain actions occur in a bucket — allowing you to build event-driven architectures and automate workflows.


1. What Are S3 Events?

Events are actions that happen within a bucket, such as:

| Event Type | Example |
| --- | --- |
| Object created | PutObject, PostObject, CopyObject, CompleteMultipartUpload |
| Object removed | DeleteObject, DeleteMarkerCreated |
| Object restored | From Glacier or Deep Archive |
| Replication | When replication of an object completes or fails |

2. Event Filtering

You can limit which events trigger notifications using filters:

  • Prefix filters – e.g., only trigger for images/ folder.
  • Suffix filters – e.g., only trigger for .jpg files.

✅ Example:
Trigger an event only for objects ending with .jpeg uploaded to a specific folder.


3. Event Notification Destinations

S3 Event Notifications can send event data to:

| Destination | Purpose / Behavior |
| --- | --- |
| SNS (Simple Notification Service) | Publish events to multiple subscribers (fan-out pattern). |
| SQS (Simple Queue Service) | Store events in a queue for reliable processing. |
| Lambda | Automatically trigger a function for custom processing (e.g., image resizing). |
| Amazon EventBridge | Route events to 18+ AWS services (e.g., Step Functions, Kinesis, Glue). |

4. Example Use Case

Automatically generate image thumbnails:

  1. A user uploads a photo (.jpg) to your S3 bucket.
  2. The S3 event triggers a Lambda function.
  3. Lambda generates a thumbnail and saves it back to the same or another bucket.

5. Permissions: Resource Access Policies

For S3 to send events to another service, that target service must explicitly grant permission:

| Destination | Policy Required | Purpose |
| --- | --- | --- |
| SNS Topic | SNS resource access policy | Allows S3 to publish messages. |
| SQS Queue | SQS resource access policy | Allows S3 to send messages to the queue. |
| Lambda Function | Lambda resource policy | Allows S3 to invoke the function. |

S3 does not use IAM roles for these notifications — instead, the target service defines a resource policy granting S3 permission.


6. Event Delivery Timing

  • Events are typically delivered within seconds, but may occasionally take up to a minute or more.
  • Each notification includes:

    • Bucket name
    • Event type
    • Object key
    • Time of event

7. EventBridge Integration

Once enabled on the bucket, all S3 events can be sent to Amazon EventBridge, where you can:

  • Create EventBridge rules to forward events to multiple AWS services.
  • Use advanced filtering (metadata, size, key name, tags, etc.).
  • Send events to destinations like:

    • AWS Step Functions
    • Kinesis Data Streams / Firehose
    • Glue, Athena, or analytics tools
  • Archive and replay events.

  • Get more reliable delivery than standard S3 event notifications.


8. Key Differences: S3 Notifications vs. EventBridge

| Feature | S3 Event Notifications | Amazon EventBridge |
| --- | --- | --- |
| Destinations | SNS, SQS, Lambda | 18+ AWS services |
| Filtering | Prefix/suffix only | Metadata, tags, object size, etc. |
| Reliability | Basic (best effort) | High reliability with retries |
| Event history | No archiving | Supports event archiving & replay |
| Complexity | Simple | Advanced, multi-destination orchestration |

9. Key Takeaways

  • S3 can react to events in near real-time.
  • Destinations: SNS, SQS, Lambda, and EventBridge.
  • Use resource access policies to authorize S3 → target communication.
  • EventBridge is more powerful for complex workflows and analytics.
  • Perfect for serverless automation, data pipelines, and real-time processing.



Hands-On Lab: S3 Event Notifications with SQS

Event Notifications let you automatically react when objects are created, deleted, or restored in an S3 bucket — triggering downstream processing like thumbnail generation, logging, or message queuing.


1. Create an S3 Bucket

  1. Go to Amazon S3 → Create bucket.
  2. Name it (e.g., stephane-v3-events-notifications).
  3. Choose a region (e.g., Ireland).
  4. Leave defaults → Create bucket.

2. Set Up Event Notifications

  1. Open the bucket → Properties tab.
  2. Scroll to Event notifications.
  3. Two options appear:
  • Enable EventBridge → sends all S3 events to Amazon EventBridge.
  • Create event notification → configure a specific event (simpler).

We’ll use the Create event notification option.


3. Configure a New Event Notification

  1. Click Create event notification.
  2. Name it DemoEventNotification.
  3. (Optional) Add Prefix/Suffix filters (e.g., only .jpg files).
  4. Under Event types, select:
  • All object create events (s3:ObjectCreated:*).
  • (You could also include deletions or restores.)

4. Choose Destination: SQS Queue

S3 Event Notifications can target SNS, SQS, or Lambda.
Here we’ll connect to an SQS queue.


5. Create an SQS Queue

  1. Open Amazon SQS → Create queue.
  2. Name it DemoS3Notification.
  3. Keep default settings → Create queue.

6. Fix the Access Policy

S3 needs permission to send messages to this queue.

Test the problem first

If you attach the queue to the event without updating the policy, you’ll see:

“Unable to validate destination configuration”

This means S3 lacks permission to send to the queue.

Update the queue policy

  1. In the SQS queue → Permissions → Access policy → Edit.
  2. Use the AWS Policy Generator or paste a minimal example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3SendMessage",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:eu-west-1:123456789012:DemoS3Notification"
    }
  ]
}

(Replace the ARN with your own queue’s ARN.)

  3. Save the policy.

✅ In production, scope this down to the specific S3 bucket ARN instead of "*".


7. Attach the Queue to the Event

  1. Go back to your S3 bucket → Properties → Event notifications.
  2. Choose the queue (DemoS3Notification).
  3. Save.
  4. You should see a success message — S3 sends a test event to verify connectivity.
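
The same notification can also be written with the CLI — a sketch reusing the queue ARN from the example policy (account ID and region are placeholders):

aws s3api put-bucket-notification-configuration \
  --bucket stephane-v3-events-notifications \
  --notification-configuration '{
    "QueueConfigurations": [
      {
        "QueueArn": "arn:aws:sqs:eu-west-1:123456789012:DemoS3Notification",
        "Events": ["s3:ObjectCreated:*"]
      }
    ]
  }'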

8. Verify in SQS

  1. Go to your SQS queue → Send and receive messages → Poll for messages.
  2. You’ll first see a test event.
  • Delete it to keep things clean.
  3. Upload an object to S3 (e.g., coffee.jpg).
  4. Poll again — you’ll see a new message.

9. Inspect the Message

Each message contains JSON event data.
Example excerpt:

{
  "Records": [
    {
      "eventName": "ObjectCreated:Put",
      "s3": {
        "bucket": { "name": "stephane-v3-events-notifications" },
        "object": { "key": "coffee.jpg" }
      }
    }
  ]
}

✅ Confirms S3 successfully sent an event to SQS.


10. Clean Up

  • Delete test messages from the queue.
  • Optionally disable or remove the event rule if no longer needed.

11. Key Takeaways

| Concept | Summary |
| --- | --- |
| Triggers | React to object events (create, delete, restore, replicate). |
| Destinations | SNS topics, SQS queues, Lambda functions, or EventBridge. |
| Permissions | Target service must grant SQS:SendMessage, SNS:Publish, or lambda:InvokeFunction to S3. |
| EventBridge | Optional advanced integration — filtering, multiple targets, event replay. |
| Latency | Events usually delivered within seconds (can take up to ~1 min). |

Final Summary

You can use S3 Event Notifications to build event-driven workflows:

  • Send messages to SQS for decoupled processing.
  • Push updates to SNS for fan-out distribution.
  • Trigger Lambda for real-time compute tasks.
  • Or forward everything to EventBridge for complex multi-service orchestration.

Amazon S3 Baseline Performance and Optimization

Amazon S3 is designed for massive scalability and high throughput, capable of handling thousands of requests per second with low latency.


1. Baseline Performance

Default Behavior

  • Automatically scales to very high request rates — no manual tuning required.
  • Typical latency: 100–200 milliseconds to retrieve the first byte (low latency for an object store).

Throughput Limits (Per Prefix)

| Operation Type | Default Performance Limit |
| --- | --- |
| PUT, COPY, POST, DELETE | 3,500 requests per second per prefix |
| GET, HEAD | 5,500 requests per second per prefix |

Prefixes are independent — each prefix scales separately.


2. Understanding “Per Prefix”

A prefix is the part of the object key before the object name in the path.

Example:

bucket-name/folder1/sub1/file.txt
bucket-name/folder1/sub2/file.txt
  • Prefix 1: folder1/sub1/
  • Prefix 2: folder1/sub2/

Each prefix supports 3,500 write ops/sec and 5,500 read ops/sec.
So by distributing files across multiple prefixes, you can scale linearly.

Example Calculation:
4 prefixes × 5,500 GETs/sec = 22,000 GET requests/sec total.


3. Upload Optimization — Multi-Part Upload

Use multi-part upload for large files:

| When to Use | Why |
| --- | --- |
| Recommended for > 100 MB | Speeds up transfer through parallelism |
| Required for > 5 GB | Mandatory per S3 API limit |

How It Works

  1. Large file is split into smaller parts (e.g., 5–100 MB chunks).
  2. Each part is uploaded in parallel to S3.
  3. Once all parts are uploaded, S3 reassembles them into the full object.

Benefits:

  • Maximizes available bandwidth.
  • Increases reliability (retry only failed parts).
  • Compatible with S3 Transfer Acceleration.

4. Upload/Download Speed — S3 Transfer Acceleration

Purpose: Speed up long-distance data transfers to an S3 bucket.

How It Works

  1. Data is first uploaded to the nearest AWS edge location (e.g., in the user’s region).
  2. From there, AWS forwards it to the destination bucket over the AWS global private network (faster and more reliable than public internet).

Benefits

  • Up to 10× faster for cross-continent uploads.
  • Uses >200 global edge locations (CloudFront network).
  • Works with multi-part uploads.
  • Ideal for global teams or applications uploading large files.

Example:
Upload from USA → S3 in Australia:

  • US client → nearby edge location (fast)
  • Edge → S3 bucket in Australia via AWS backbone (accelerated)
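
Acceleration is turned on per bucket and then used through the accelerate endpoint — a sketch with placeholder names:

# Enable Transfer Acceleration on the bucket
aws s3api put-bucket-accelerate-configuration \
  --bucket example-bucket \
  --accelerate-configuration Status=Enabled

# Upload through the accelerate endpoint
aws s3 cp big-file.zip s3://example-bucket/ --endpoint-url https://s3-accelerate.amazonaws.com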

5. Download Optimization — S3 Byte-Range Fetches

Byte-Range Fetch allows clients to download specific portions of an object.

Use Cases

  1. Parallel downloads: Retrieve different byte ranges simultaneously → speeds up large file downloads.
  2. Partial retrieval: Fetch only required parts (e.g., first 50 bytes for headers or metadata).
  3. Failure recovery: Retry failed byte ranges individually (improves reliability).

Example:

Range: bytes=0-4999      → first 5 KB
Range: bytes=5000-9999   → next 5 KB

All can be downloaded in parallel, reassembled locally.
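
With the CLI, a byte-range fetch maps to the --range parameter of get-object — a sketch with placeholder names:

# Fetch only the first 5 KB of the object into part-0.bin
aws s3api get-object --bucket example-bucket --key big-file.bin --range bytes=0-4999 part-0.bin

# Fetch the next 5 KB (can run in parallel with the first)
aws s3api get-object --bucket example-bucket --key big-file.bin --range bytes=5000-9999 part-1.bin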


6. Best Practices for High Performance

| Optimization | Purpose / Benefit |
| --- | --- |
| Distribute keys across multiple prefixes | Scales read/write operations |
| Use multi-part uploads for large files | Speeds up uploads and increases reliability |
| Use S3 Transfer Acceleration | Reduces latency for global transfers |
| Use byte-range fetches | Speeds up downloads and improves error recovery |
| Leverage prefixes strategically | Avoids performance bottlenecks on a single prefix |

7. Key Numbers to Remember (for Exam)

| Metric | Value |
| --- | --- |
| PUT/COPY/POST/DELETE per prefix | 3,500 req/sec |
| GET/HEAD per prefix | 5,500 req/sec |
| Latency | 100–200 ms (first byte) |
| Recommended multi-part threshold | > 100 MB |
| Required multi-part threshold | > 5 GB |

Summary

  • S3 scales automatically for high-performance workloads.
  • Prefixes define concurrency — scale horizontally by distributing keys.
  • Multi-part upload & byte-range fetches maximize transfer efficiency.
  • Transfer Acceleration improves global performance using AWS’s edge network.

Together, these techniques allow S3 to deliver massive throughput, low latency, and reliable data transfer for any workload — from backups to analytics pipelines.

Amazon S3 Batch Operations

1. Overview

Amazon S3 Batch Operations allow you to perform bulk actions on large sets of existing objects — all through a single request.

It’s a managed service that handles retry logic, progress tracking, notifications, and report generation automatically — saving you from writing and maintaining custom scripts.


2. Key Concept

Each Batch Job consists of:

  1. A list of objects to process.
  2. The action to perform (e.g., copy, tag, encrypt).
  3. Optional parameters (like metadata or encryption settings).

3. Common Use Cases

| Use Case | Description |
| --- | --- |
| Modify metadata | Update object metadata or properties in bulk. |
| Copy objects | Bulk copy objects between buckets or accounts. |
| Encrypt existing objects | Apply encryption to all previously unencrypted files. (Common AWS exam scenario!) |
| Change ACLs or tags | Update object permissions or tagging structure. |
| Restore Glacier objects | Initiate restore requests for many archived objects at once. |
| Invoke Lambda function | Perform a custom operation on every object — e.g., data transformation, virus scanning, format conversion. |

4. Why Use S3 Batch Instead of Scripting?

  • Built-in retry management — no need to handle failed objects manually.
  • Scales automatically to millions or billions of objects.
  • Tracks progress and completion status.
  • Generates detailed reports of processed and failed objects.
  • Can send notifications when jobs complete.
  • Integrates with IAM and CloudTrail for access control and auditing.


5. How to Create an S3 Batch Operation Job

Step 1 – Prepare the Object List

Use S3 Inventory to generate a report of all your bucket objects (with metadata, encryption status, tags, etc.).

Step 2 – Filter (Optional)

Query the S3 Inventory report using Amazon Athena to filter only the objects you want to target.

Example: find all unencrypted objects for a mass encryption job.

Step 3 – Define the Batch Job

In the S3 console or CLI:

  • Provide the manifest (object list file from Inventory).
  • Select an operation (e.g., Copy, Replace Tags, Invoke Lambda).
  • Specify optional parameters.
  • Choose an IAM role with the correct permissions.
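
For reference, a rough CLI sketch of the same job definition (the account ID, ARNs, manifest ETag, and role name are all placeholders, and the exact JSON varies by operation):

aws s3control create-job \
  --account-id 111122223333 \
  --operation '{"S3PutObjectCopy":{"TargetResource":"arn:aws:s3:::destination-bucket"}}' \
  --manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key"]},"Location":{"ObjectArn":"arn:aws:s3:::manifest-bucket/manifest.csv","ETag":"<manifest-etag>"}}' \
  --report '{"Bucket":"arn:aws:s3:::report-bucket","Prefix":"batch-reports","Format":"Report_CSV_20180820","Enabled":true,"ReportScope":"AllTasks"}' \
  --priority 10 \
  --role-arn arn:aws:iam::111122223333:role/s3-batch-operations-role \
  --no-confirmation-required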

Step 4 – Execute and Monitor

Once started:

  • The job runs asynchronously.
  • You can monitor status (Pending → Running → Complete).
  • AWS handles parallelization and retries automatically.

Step 5 – Review Reports

After completion:

  • Review success/failure reports in your target bucket.
  • Receive optional SNS notifications for job completion.

6. Architecture Overview

Workflow Summary:

S3 Inventory → (Query with Athena) → Filtered object list → S3 Batch Operations
                                  ↳ Perform: Encrypt, Copy, Tag, Restore, Invoke Lambda, etc.

7. Example Exam Scenario

You discover that many existing S3 objects are not encrypted.
What’s the simplest way to encrypt them all at once?

Answer:
Use S3 Inventory to find unencrypted objects →
Create an S3 Batch Operations job →
Choose the “Copy with encryption” action to encrypt all files in bulk.


8. Key Takeaways

| Feature | Purpose |
| --- | --- |
| S3 Batch Operations | Automates bulk actions across millions of S3 objects. |
| S3 Inventory | Generates the source list of objects. |
| Athena Integration | Filters and queries object lists before batch processing. |
| Lambda Integration | Enables custom, serverless logic per object. |
| Reports & Notifications | Track completion, errors, and job metrics easily. |

Summary

S3 Batch Operations let you:

  • Run mass updates on billions of objects.
  • Handle complex tasks automatically without manual scripting.
  • Combine Inventory + Athena for precise targeting.
  • Integrate with Lambda for custom workflows.
  • Greatly simplify large-scale data management in S3.

Amazon S3 Storage Lens

1. Overview

S3 Storage Lens is a storage analytics and optimization service for Amazon S3 that provides:

  • Organization-wide visibility into your storage usage and activity.
  • Actionable insights to optimize cost, improve protection, and detect anomalies.

You can aggregate metrics at Organization, Account, Region, Bucket, or Prefix level.


2. Key Features

| Feature | Description |
| --- | --- |
| Centralized visibility | Analyze storage across multiple AWS accounts and regions. |
| Metrics history | Retains daily metrics for 14 days (free tier) or 15 months (advanced tier). |
| Dashboards | View pre-built or custom dashboards. |
| Exportable reports | Export metrics in CSV or Parquet format to an S3 bucket. |
| Data aggregation | Metrics can be aggregated by organization, account, region, bucket, or prefix. |

3. Default Dashboard

  • Automatically created and pre-configured by AWS.
  • Shows metrics across all accounts and regions.
  • Can be disabled, but not deleted.
  • Provides metrics like:

    • Total storage bytes
    • Object count
    • Average object size
    • Number of buckets
    • Cost and protection trends

4. Types of Metrics

S3 Storage Lens groups metrics into several categories:

a. Summary Metrics

  • Purpose: High-level view of usage.
  • Examples:

    • StorageBytes – total storage size.
    • ObjectCount – number of stored objects.
  • Use Case: Identify fastest-growing or inactive buckets/prefixes.


b. Cost Optimization Metrics

  • Purpose: Identify ways to lower cost.
  • Examples:

    • NonCurrentVersionStorageBytes – space used by old versions.
    • IncompleteMultipartUploadBytes – space used by failed uploads.
  • Use Case:

    • Find unused data.
    • Transition objects to cheaper storage classes.

c. Data Protection Metrics

  • Purpose: Ensure compliance and best practices.
  • Examples:

    • VersioningEnabledBucketCount
    • MFADeleteEnabledBucketCount
    • SSEKMSBucketCount
    • ReplicationRuleCount
  • Use Case: Detect buckets missing versioning or encryption.


d. Access Management Metrics

  • Purpose: Understand object and bucket ownership settings.
  • Use Case: Review ownership models for security and compliance.

e. Event Metrics

  • Purpose: Track which buckets use S3 Event Notifications.
  • Use Case: Audit automation and event-driven processes.

f. Performance Metrics

  • Purpose: Track S3 Transfer Acceleration usage.
  • Use Case: Identify which buckets benefit from acceleration.

g. Activity Metrics

  • Purpose: Track S3 request activity and HTTP responses.
  • Examples:

    • GET, PUT, DELETE request counts
    • Bytes uploaded/downloaded
    • HTTP status codes (200 OK, 403 Forbidden, etc.)
  • Use Case: Monitor access patterns and detect abnormal behavior.


5. Free vs. Advanced Metrics

| Feature | Free Tier | Advanced Tier (Paid) |
| --- | --- | --- |
| Metrics count | ~28 usage metrics | Adds cost optimization, activity, and data protection metrics |
| Retention | 14 days | 15 months |
| Prefix-level metrics | ❌ | ✅ |
| CloudWatch integration | ❌ | ✅ (no extra charge) |
| Recommendations | ❌ | ✅ Intelligent insights & cost-saving suggestions |

6. Integration & Output

  • Storage Lens → CloudWatch: Advanced metrics can appear in CloudWatch for alerting.
  • Storage Lens → S3: All reports can be exported in .CSV or .Parquet for analysis with Athena, QuickSight, or Excel.
  • Storage Lens → Organizations: Aggregates data across multiple accounts centrally.

7. Common Use Cases

| Goal | How Storage Lens Helps |
| --- | --- |
| Reduce costs | Identify unused data or old versions to transition or delete. |
| Improve security | Detect buckets without encryption, replication, or MFA Delete. |
| Optimize performance | Analyze access frequency and prefix-level traffic. |
| Monitor compliance | Track versioning, ACLs, and replication policies. |
| Governance | Aggregate data across the entire organization or business unit. |

8. Key Facts for the Exam

  • Default dashboard = cross-account, cross-region, cannot be deleted.
  • Free vs. Paid – free = 14 days, advanced = 15 months, with CloudWatch + recommendations.
  • Data export supported in CSV or Parquet.
  • Includes cost, protection, activity, and performance metrics.
  • Aggregation can be done by Org → Account → Region → Bucket → Prefix.

Summary

S3 Storage Lens provides a centralized, analytics-driven view of your S3 usage and activity.
It helps you:

  • Detect anomalies,
  • Enforce best practices, and
  • Optimize cost and data protection across your AWS organization.

Use free metrics for basic monitoring, and advanced metrics for deep insights, cost analysis, and CloudWatch integration.

Amazon S3 Object Encryption

Encryption in Amazon S3 protects your data at rest and in transit.

There are four methods of encryption for objects stored in S3:

  1. SSE-S3 – Server-Side Encryption with S3-managed keys
  2. SSE-KMS – Server-Side Encryption with KMS-managed keys
  3. SSE-C – Server-Side Encryption with customer-provided keys
  4. Client-Side Encryption – Client encrypts before upload

1. Server-Side Encryption (SSE)

Server-side means:

AWS encrypts your data after receiving it and before saving it to disk — and decrypts it when you download it.


a. SSE-S3 (Server-Side Encryption with S3 Managed Keys)

Description:

  • Encryption keys are fully managed and owned by AWS.
  • You do not control or view the encryption keys.
  • Uses AES-256 encryption standard.

Header required:

x-amz-server-side-encryption: AES256

Default behavior:
Enabled by default for new buckets and new objects.

Diagram:

Client → Uploads file → S3 encrypts with S3-managed key → Stores encrypted object

Use Case:

  • Default and simplest encryption method.
  • No configuration or key management required.
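
A quick CLI sketch (bucket name is a placeholder) showing the header in action and how to confirm the result:

# Upload with SSE-S3 requested explicitly (also the default for new buckets).
aws s3 cp coffee.jpg s3://my-secure-bucket/coffee.jpg --sse AES256

# Verify: the response should include "ServerSideEncryption": "AES256".
aws s3api head-object --bucket my-secure-bucket --key coffee.jpg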

b. SSE-KMS (Server-Side Encryption with KMS Managed Keys)

Description:

  • Uses AWS KMS to manage keys (Key Management Service).
  • You can create, rotate, and audit key usage through CloudTrail.
  • Allows fine-grained control over encryption/decryption access.

Header required:

x-amz-server-side-encryption: aws:kms

Workflow:

Client uploads → S3 requests data key from KMS → Encrypts object using that key → Stores encrypted object

Access Requirement:
To download or access the object:

  • The user must have S3 permissions and KMS key permissions.

Auditability:

  • All KMS operations (encrypt/decrypt) are logged in CloudTrail.

Performance Consideration (Exam Tip):

  • KMS API calls (GenerateDataKey, Decrypt) count toward KMS API quotas (5,000–30,000 req/sec by region).
  • High-throughput S3 workloads may require increased KMS quota via Service Quotas console.

Use Case:

  • Compliance and audit-focused environments.
  • When you need to control who can decrypt data.
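
A minimal CLI sketch (bucket name and key alias are placeholders); note that reading the object back also requires kms:Decrypt on the chosen key:

# Upload with SSE-KMS, pointing at a specific customer-managed key.
aws s3api put-object \
  --bucket my-secure-bucket \
  --key reports/q1.pdf \
  --body q1.pdf \
  --server-side-encryption aws:kms \
  --ssekms-key-id alias/my-app-key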

c. SSE-C (Server-Side Encryption with Customer-Provided Keys)

Description:

  • You provide your own encryption key to AWS with every upload/download request.
  • AWS uses it temporarily to encrypt/decrypt objects, then discards it (never stored).
  • You must provide the same key again to read the object.

Requirements:

  • HTTPS only (key sent securely).
  • Key is passed using custom headers.

Workflow:

Client uploads file + key → S3 encrypts server-side using provided key → Discards key
Client must re-supply same key to decrypt object

Use Case:

  • When you manage your own encryption keys outside AWS but still want server-side processing.
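
A minimal sketch using the high-level CLI's SSE-C options (bucket and file names are placeholders): the key is generated locally and must be re-sent on every read.

# Generate a 256-bit key locally; AWS never stores it.
openssl rand -out sse-c.key 32

# Upload: S3 encrypts server-side with the supplied key, then discards it.
aws s3 cp secrets.csv s3://my-secure-bucket/secrets.csv \
  --sse-c AES256 --sse-c-key fileb://sse-c.key

# Download: the same key must be supplied again or the request fails.
aws s3 cp s3://my-secure-bucket/secrets.csv secrets-copy.csv \
  --sse-c AES256 --sse-c-key fileb://sse-c.key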

Comparison Summary:

| Type | Key Managed By | Encryption Type | Header | Access Requirements |
| --- | --- | --- | --- | --- |
| SSE-S3 | AWS (S3) | AES-256 | x-amz-server-side-encryption: AES256 | S3 permissions only |
| SSE-KMS | AWS KMS (user-controlled) | KMS key | x-amz-server-side-encryption: aws:kms | S3 + KMS permissions |
| SSE-C | Customer | Custom key (HTTPS only) | Provided manually | S3 + external key |

2. Client-Side Encryption

Description:

  • The client encrypts data locally before uploading to S3.
  • AWS never sees the plaintext data or keys.
  • Decryption happens client-side after download.

Implementation:

  • Typically done with AWS SDK or Client-Side Encryption Library.

Workflow:

Client → Encrypts locally using client-managed key → Uploads encrypted file to S3

Use Case:

  • Maximum control and compliance where data must never be handled unencrypted by AWS.
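
A minimal illustration using OpenSSL rather than the AWS client-side encryption libraries (file, bucket, and key-file names are placeholders); the point is simply that S3 only ever stores ciphertext:

# Encrypt locally before upload; S3 never sees the plaintext or the key.
openssl enc -aes-256-cbc -pbkdf2 -salt -in customer-data.csv \
  -out customer-data.csv.enc -pass file:./local.key
aws s3 cp customer-data.csv.enc s3://my-secure-bucket/customer-data.csv.enc

# Download and decrypt locally with the same key.
aws s3 cp s3://my-secure-bucket/customer-data.csv.enc downloaded.enc
openssl enc -d -aes-256-cbc -pbkdf2 -in downloaded.enc \
  -out customer-data.csv -pass file:./local.key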

3. Encryption In Transit (SSL/TLS)

Definition:

  • Protects data as it travels between clients and Amazon S3.

Endpoints:

  • HTTPNot encrypted
  • HTTPSEncrypted with SSL/TLS

Recommended: Always use HTTPS.

Mandatory For: SSE-C (since key is transmitted in headers).

To Enforce HTTPS:
Use a bucket policy to deny unencrypted requests:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnEncryptedTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*",
      "Condition": {
        "Bool": { "aws:SecureTransport": "false" }
      }
    }
  ]
}

Result:
Only requests using HTTPS (SecureTransport = true) are allowed.
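
To attach the policy, save the full document above to a file (the file name below is just an example) and apply it with the CLI:

aws s3api put-bucket-policy \
  --bucket your-bucket-name \
  --policy file://enforce-https.json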


4. Summary of Encryption Types

| Type | Location | Key Ownership | Recommended Use Case |
| --- | --- | --- | --- |
| SSE-S3 | Server-side | AWS | Default, simple encryption |
| SSE-KMS | Server-side | AWS KMS (you control access) | Auditing, compliance |
| SSE-C | Server-side | Customer | BYOK, external key management |
| Client-Side | Client-side | Customer | Maximum control, no AWS involvement |
| In-Transit (TLS) | Network | AWS & Client | Always enabled / enforce via policy |

Key Takeaways for AWS Exam

  • SSE-S3 → Default, simplest, AES-256, no key management.
  • SSE-KMS → Uses KMS CMKs, CloudTrail logging, throttling limits possible.
  • SSE-C → Customer-provided keys, HTTPS required, AWS discards key.
  • Client-Side → Encrypt/decrypt fully managed by customer.
  • In-Transit Encryption → Enforced using bucket policy (aws:SecureTransport).

Hands-On Lab: Practicing Amazon S3 Encryption

This lab demonstrates how to configure and verify default and object-level encryption in Amazon S3 using SSE-S3 and SSE-KMS (including DSSE-KMS).


1. Create a Bucket

  1. Navigate to Amazon S3 Console → Create bucket.
  2. Name the bucket (e.g., demo-encryption-stephane-v2).
  3. Configure:
  • VersioningEnable (important for version tracking).
  • Default Encryption → choose one of:

    • SSE-S3 (AES-256)
    • SSE-KMS
    • DSSE-KMS (Double encryption with two KMS keys)
  4. Click Create bucket.

✅ Result: The bucket now has default server-side encryption enabled.


2. Verify SSE-S3 Encryption

  1. Upload a file → e.g., coffee.jpg.
  2. After upload, click the file → open Properties tab.
  3. Scroll to Server-side encryption → verify:
   SSE-S3 (Amazon S3 managed keys - AES-256)

✅ S3 automatically encrypted your object using its internal managed key (SSE-S3).


3. Change Encryption for an Existing Object

  1. Select the uploaded object → click Edit under Server-side encryption.

  2. Choose a new encryption option:

  • SSE-KMS
  • DSSE-KMS
  3. For SSE-KMS, you’ll be asked to specify a KMS key:
  • Choose AWS-managed key: aws/s3
  • Or select a customer-managed key if created in KMS.
  4. Click Save changes.

✅ Result:
A new version of the file is created (thanks to versioning).
The latest version is encrypted using SSE-KMS with the selected key.


4. Verify the Encryption Type

  • Go to Versions tab for your object.
  • You’ll see:

    • Old version: SSE-S3
    • New version: SSE-KMS

Under Properties →
Server-side encryption: aws:kms
KMS Key ID: arn:aws:kms:...:key/aws/s3


5. Upload an Object with Custom Encryption

  1. Upload another file (e.g., beach.jpg).
  2. Expand Properties → Encryption before uploading.
  3. Choose from:
  • SSE-S3 (Default)
  • SSE-KMS
  • DSSE-KMS

✅ You can override the bucket’s default encryption for specific objects.


6. Check Default Encryption Settings

  1. Open your bucket → Properties → Default EncryptionEdit.
  2. Choose:
  • SSE-S3
  • SSE-KMS
  • DSSE-KMS
  3. (Optional) For SSE-KMS, enable Bucket Key (reduces KMS API costs).

Bucket Key caches data keys locally to reduce calls to KMS — useful for large-scale operations.
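
A CLI sketch of the same bucket-level setting (the key alias is a placeholder):

aws s3api put-bucket-encryption \
  --bucket demo-encryption-stephane-v2 \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": "alias/my-app-key"},
      "BucketKeyEnabled": true
    }]
  }'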


7. Notes on SSE-C and Client-Side Encryption

| Type | Where to Configure | Key Management | How to Enable |
| --- | --- | --- | --- |
| SSE-C | Only via CLI/API | Customer provides key | Must use HTTPS and include key in headers |
| Client-Side | Managed externally | Client owns key & encryption process | Encrypt before upload, decrypt after download |

Console Limitation:
SSE-C and Client-side encryption cannot be configured through the AWS Management Console.


8. Summary of What You Learned

| Task | Method | Key Learning |
| --- | --- | --- |
| Enable bucket encryption | SSE-S3 / SSE-KMS | Secure all new objects automatically |
| Verify object encryption | Check object properties | Identify encryption type and KMS key |
| Override encryption | Object-level settings | Control encryption per file |
| Enable versioning | Required for encryption edits | Keeps encrypted object history |
| Cost optimization | Use Bucket Key | Reduce KMS API calls |
| CLI-only methods | SSE-C, Client-side | Offer maximum control and security |

Key Takeaways for Exam and Real Projects

  • SSE-S3 = simplest and default encryption (AES-256).
  • SSE-KMS = managed by KMS, supports audit via CloudTrail.
  • DSSE-KMS = double encryption (higher security).
  • SSE-C = customer-supplied key (CLI/API only, HTTPS required).
  • Client-side = encrypt/decrypt outside AWS (full control).
  • Versioning is crucial for managing encryption changes.
  • Bucket Key reduces KMS costs for large workloads.



Default Encryption vs. Bucket Policies in S3

1. Default Encryption

  • Enabled by default for all new buckets using SSE-S3 (AES-256).
  • Automatically encrypts all new objects stored in the bucket.
  • You can change the default to SSE-KMS or DSSE-KMS as needed.
  • Managed entirely by AWS — no need to modify client requests.

Purpose:
Ensures all new uploads are automatically encrypted, even if users forget to specify encryption settings.


2. Bucket Policy for Encryption Enforcement

  • Bucket policies can enforce stricter encryption rules.
  • They deny uploads that don’t include the required encryption headers.

Example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnEncryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}

Result:
Any upload without SSE-KMS encryption is rejected.
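
A quick way to see the policy in action from the CLI (file and bucket names are placeholders):

# Denied: no encryption header, so the bucket policy rejects the PUT.
aws s3 cp report.csv s3://your-bucket-name/report.csv

# Allowed: the request carries x-amz-server-side-encryption: aws:kms.
aws s3 cp report.csv s3://your-bucket-name/report.csv --sse aws:kms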


3. Evaluation Order

  • Bucket Policies are evaluated first, before default encryption settings.
  • If a bucket policy denies a request (e.g., missing encryption header), S3 will not apply the default encryption.

Key Takeaways

| Feature | Purpose | Who Enforces | When Applied |
| --- | --- | --- | --- |
| Default Encryption | Automatically encrypts new objects | S3 service | After upload |
| Bucket Policy | Enforces or denies uploads based on rules | IAM policy engine | Before upload |

In short:
Default encryption automates protection,
Bucket policies enforce compliance.

Amazon S3 and CORS (Cross-Origin Resource Sharing)

1. What is CORS?

CORS = Cross-Origin Resource Sharing
It’s a web browser security mechanism that controls whether one website (origin) can make requests to another website (cross-origin).


2. Understanding “Origin”

An origin = scheme + host + port

Example:
https://www.example.com:443

  • Scheme: https
  • Host: www.example.com
  • Port: 443 (default for HTTPS)

Two URLs are same-origin only if all three match.
Otherwise, it’s a cross-origin request.


3. Why CORS Exists

CORS prevents malicious websites from reading data from another site without permission.

If a browser tries to load resources (like images, JS, or APIs) from another domain:

  • It sends a pre-flight OPTIONS request
  • The other domain must reply with specific CORS headers

4. The CORS Handshake

| Step | Action |
| --- | --- |
| 1️⃣ | Browser sends an OPTIONS request to the cross-origin server. |
| 2️⃣ | It includes the Origin header → identifies where the request came from. |
| 3️⃣ | The target server replies with CORS headers, such as Access-Control-Allow-Origin: https://www.example.com and Access-Control-Allow-Methods: GET, PUT, DELETE. |
| 4️⃣ | If approved, the browser proceeds with the actual request. |

5. CORS in Amazon S3

Scenario:

  • my-website-bucket → hosts index.html
  • my-assets-bucket → hosts images or static content

When a web browser loads index.html and tries to fetch an image from my-assets-bucket, the S3 assets bucket must allow requests from the website bucket.


6. Configuring CORS in S3

In the S3 bucket (destination)
Permissions → CORS Configuration → Add rules

Example Configuration:

<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>

Wildcard Option:
To allow all origins (not secure for production):

<AllowedOrigin>*</AllowedOrigin>
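
The same rules can be applied from the CLI as JSON (bucket name is a placeholder; the console's CORS editor accepts the same shape):

aws s3api put-bucket-cors \
  --bucket my-assets-bucket \
  --cors-configuration '{
    "CORSRules": [{
      "AllowedOrigins": ["https://www.example.com"],
      "AllowedMethods": ["GET", "PUT", "DELETE"],
      "AllowedHeaders": ["*"]
    }]
  }'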

7. When to Use CORS with S3

  • When your frontend (React, Vue, Angular) hosted on one bucket or domain accesses data from another bucket or API.
  • For static websites hosted in multiple S3 buckets.
  • For web apps calling S3 directly from the browser using SDKs (like AWS SDK for JavaScript).

8. Key Takeaways

| Concept | Description |
| --- | --- |
| CORS | Browser security feature controlling cross-origin requests |
| Access-Control-Allow-Origin | Defines which origins can access resources |
| Pre-flight Request | Browser check before making a cross-origin request |
| S3 Use Case | Allow static websites or apps to fetch assets from another bucket |
| Exam Tip | If a question mentions browser + S3 + access denied (CORS) → configure CORS headers |



Hands-On Lab: Practicing CORS (Cross-Origin Resource Sharing) in S3

This lab demonstrates how CORS errors occur between two S3 buckets and how to fix them using CORS configuration.


1. Step 1 — Enable CORS Demo in index.html

In your existing website project:

  • Open index.html
  • Uncomment the CORS section:

    • Remove comment markers before <div> (around line 13)
    • Remove comment markers after the closing </script>
  • This enables a JavaScript fetch() request that will try to load another HTML file (extra-page.html).

Effect:
When working correctly, your website will show:

Hello world! I love coffee.
[Image]
This extra page has been successfully loaded.

2. Step 2 — Upload the Files

Upload both files to your primary bucket:

  • index.html
  • extra-page.html

Then open the S3 website endpoint (Properties → Static website hosting → Endpoint URL).
You should see:
✅ “Extra page successfully loaded” — same-origin request works fine.


3. Step 3 — Create a Cross-Origin Scenario

To trigger a CORS violation, you’ll host the extra page on another bucket (different region/domain).

Create a second bucket:

  • Name: demo-other-origin-stephane
  • Region: different from the first (e.g., Canada)
  • Disable “Block all public access”
  • Enable Static website hosting
  • Set index.html as the default document.

Upload only:

  • extra-page.html

✅ Open the Object URL and confirm the extra page loads directly.


4. Step 4 — Break the Fetch

In the first bucket:

  • Edit index.html so that the fetch URL now points to the other bucket’s website endpoint, for example:
  fetch("http://demo-other-origin-stephane.s3-website-ca-central-1.amazonaws.com/extra-page.html")

Upload this updated file back to the first bucket.

Now, visit your main site again.
You’ll get:
🚫 Error in Developer Console:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource.
Missing Access-Control-Allow-Origin header.

5. Step 5 — Add CORS Rules to the Other-Origin Bucket

Go to your second bucket (demo-other-origin-stephane) →
Permissions → CORS Configuration → Edit.

Paste this JSON:

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["http://<your-first-bucket-website-endpoint>"],
    "ExposeHeaders": []
  }
]

Replace <your-first-bucket-website-endpoint> with your actual HTTP URL (no trailing slash).

Save changes.


6. Step 6 — Verify Success

  • Go back to your main site and refresh.
  • You should now see:
  Hello world! I love coffee.
  [Coffee image]
  This extra page has been successfully loaded.

In Developer Tools → Network tab, open the extra-page.html request and check:

Access-Control-Allow-Origin: http://your-first-bucket-website-endpoint
Access-Control-Allow-Methods: GET

✅ The CORS policy now allows your cross-origin request.


7. Summary

| Step | Action | Result |
| --- | --- | --- |
| 1 | Enable JavaScript fetch() in HTML | Prepare CORS request |
| 2 | Upload both pages to one bucket | Works (same-origin) |
| 3 | Move one page to another bucket | Triggers CORS error |
| 4 | Observe blocked request in browser | “Missing CORS header” |
| 5 | Add CORS JSON config | Allows cross-origin access |
| 6 | Verify in Developer Tools | CORS headers visible |

Key Takeaways

  • CORS = Cross-Origin Resource Sharing
  • Used to allow browser-based requests between different domains/buckets.
  • AllowedOrigin specifies who can access your bucket.
  • CORS errors occur in browsers, not in AWS SDKs.
  • Always configure CORS in the destination bucket (the one being requested).

Amazon S3 MFA Delete

1. What is MFA Delete?

MFA (Delete) = Multi-Factor Authentication for critical delete operations in S3.

It adds an extra security layer requiring a one-time MFA code (from a phone app or hardware token) before certain destructive actions can be performed on versioned buckets.


2. When MFA is Required

MFA Delete protects against accidental or malicious data loss by enforcing MFA for:

| Operation | MFA Required? | Reason |
| --- | --- | --- |
| Permanently delete an object version | Yes | Prevents irreversible data loss |
| Suspend versioning | Yes | Prevents disabling version protection |
| Enable versioning | No | Safe operation |
| List object versions / view deleted versions | No | Read-only action |

3. Prerequisites

  • Bucket must have Versioning enabled.
  • Only the root AWS account (not an IAM user) can enable or disable MFA Delete.
  • Requires a configured MFA device (virtual or hardware).

4. How It Works

1️⃣ Root user enables MFA Delete on a versioned bucket.
2️⃣ When someone tries to permanently delete a version or suspend versioning,
they must supply both:

  • The root credentials
  • A valid MFA code

3️⃣ If MFA code is missing or invalid → the request fails.


5. Use Cases

  • Protect critical data buckets from:

    • Accidental permanent deletion
    • Malicious insider actions
    • Automation scripts deleting versions without review

6. Enable MFA Delete (High-Level CLI Steps)

Only root user can run this.

aws s3api put-bucket-versioning \
  --bucket my-secure-bucket \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::<account-id>:mfa/root-account-mfa-device <mfa-code>"

Tip: You cannot enable MFA Delete from the AWS Management Console — only through the CLI.


Key Takeaways

| Concept | Summary |
| --- | --- |
| Purpose | Prevent accidental or malicious permanent deletions |
| Requires | Bucket versioning + root MFA device |
| Enabled by | Root user only (via CLI) |
| Protects | Object versions and versioning configuration |
| Not for | Regular object uploads, reads, or soft deletes |

Hands-On Lab: Enabling and Using MFA Delete in Amazon S3

This lab demonstrates how to enable MFA Delete, understand its behavior, and confirm it protects against permanent deletions in S3.


1. Objective

Add an extra layer of protection on an S3 versioned bucket using Multi-Factor Authentication (MFA) to prevent accidental or malicious permanent deletions.


2. Step 1 — Create a Versioned S3 Bucket

  1. Go to Amazon S3 Console → Create bucket
  2. Name it something like: demo-stephane-mfa-delete-2020
  3. Choose region: eu-west-1
  4. Enable Bucket Versioning
  5. Leave encryption and permissions as default
  6. Click Create bucket

Result:
Versioning is enabled, but MFA Delete is disabled (cannot be set in the console).


3. Step 2 — Set Up MFA for the Root Account

Because only the root user can enable MFA Delete:

  1. Log in as the root account
  2. Go to My Security Credentials
  3. Under Multi-Factor Authentication (MFA):
  • Click Assign MFA device
  • Choose either:

    • Virtual MFA device (e.g., Google Authenticator)
    • Hardware token
  4. Complete setup and copy the MFA ARN, e.g.:
   arn:aws:iam::<account-id>:mfa/root-account-mfa-device

4. Step 3 — Configure AWS CLI for the Root User

⚠️ Caution: Never use or store root credentials for normal work — only for this lab.

  1. Create temporary access keys for the root user
    (download the .csv — delete them after the demo)

  2. Configure AWS CLI:

   aws configure --profile root-mfa-delete-demo
  3. Enter:
  • Access key ID
  • Secret access key
  • Default region: eu-west-1
  4. Test setup:
   aws s3 ls --profile root-mfa-delete-demo

✅ Should list your S3 buckets.


5. Step 4 — Enable MFA Delete via CLI

MFA Delete can only be enabled using the AWS CLI.

Run:

aws s3api put-bucket-versioning \
  --bucket demo-stephane-mfa-delete-2020 \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::<account-id>:mfa/root-account-mfa-device <mfa-code>" \
  --profile root-mfa-delete-demo

Replace <mfa-code> with your current 6-digit code from your authenticator.

Expected Output: Command succeeds silently.

Verify:

  • In S3 Console → Properties → Bucket Versioning → You should see Versioning: Enabled and MFA Delete: Enabled

6. Step 5 — Test MFA Delete Protection

  1. Upload an object (e.g., coffee.jpg)
  2. Delete the object
  • Creates a delete marker (since versioning is on)
  3. Try to permanently delete a specific version ID → You’ll see:
   Error: Access Denied. MFA Delete is enabled for this bucket.

Confirmed: MFA Delete is blocking destructive actions.
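
For completeness, a permanent delete only succeeds when the request carries the MFA device ARN and a current code (both placeholders below):

aws s3api delete-object \
  --bucket demo-stephane-mfa-delete-2020 \
  --key coffee.jpg \
  --version-id <version-id> \
  --mfa "arn:aws:iam::<account-id>:mfa/root-account-mfa-device <mfa-code>" \
  --profile root-mfa-delete-demo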


7. Step 6 — Disable MFA Delete

To disable:

aws s3api put-bucket-versioning \
  --bucket demo-stephane-mfa-delete-2020 \
  --versioning-configuration Status=Enabled,MFADelete=Disabled \
  --mfa "arn:aws:iam::<account-id>:mfa/root-account-mfa-device <mfa-code>" \
  --profile root-mfa-delete-demo

✅ Verify again in S3 → Properties → Bucket Versioning
Now it shows MFA Delete: Disabled


8. Step 7 — Clean Up

  • Delete test files and the bucket (optional)
  • Go to IAM → Security Credentials and deactivate/delete your root access keys. ⚠️ This is essential for account security.

Key Takeaways

| Feature | Description |
| --- | --- |
| MFA Delete | Adds extra protection requiring MFA for destructive actions |
| Enabled by | Root user only (CLI required) |
| Prerequisite | Versioning must be enabled |
| Protected actions | Permanent object deletes, suspending versioning |
| Cannot enable in console | Only via CLI |
| Best practice | Use temporarily and delete root access keys afterward |

Hands-On Lab: Configuring and Reviewing Amazon S3 Access Logs

Amazon S3 Access Logs let you record and audit all requests made to a specific S3 bucket.
These logs help track access patterns, investigate incidents, and verify security or compliance.


1. Objective

Set up server access logging for an existing bucket and review how AWS delivers those logs to a dedicated logging bucket.


2. Step 1 — Create a Logging Bucket

  1. In the S3 Console, click Create bucket.
  2. Name it something like:
   stephane-access-logs-v3
  3. Keep it in the same region (e.g., eu-west-1).
  4. Leave all other settings default and create the bucket.

✅ This bucket will store the access logs for other buckets.
💡 Best practice: Never enable logging into the same bucket you are monitoring — it would create a logging loop and rapidly grow storage costs.


3. Step 2 — Enable Logging on a Source Bucket

  1. Open the source bucket (the one you want to monitor).
  2. Go to the Properties tab → Server access loggingEdit.
  3. Choose:
  • Enable logging
  • Destination bucket: stephane-access-logs-v3
  • (Optional) Prefix: logs/ to organize log files under a folder
  • Leave the Log object key format as default
  4. Click Save changes.

✅ AWS automatically updates the destination bucket policy to let the logging.s3.amazonaws.com service write logs.
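
The same configuration can be applied from the CLI (the source bucket name is a placeholder):

aws s3api put-bucket-logging \
  --bucket my-source-bucket \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "stephane-access-logs-v3",
      "TargetPrefix": "logs/"
    }
  }'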


4. Step 3 — Verify the Updated Bucket Policy

In the logging bucket:

  • Go to Permissions → Bucket Policy
  • You’ll see a statement allowing the S3 Logging Service to PutObject into this bucket.

Example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "logging.s3.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::stephane-access-logs-v3/*"
    }
  ]
}

✅ This confirms S3 is authorized to deliver access logs.


5. Step 4 — Generate Activity

Perform a few actions on the source bucket:

  • Upload or open objects (coffee.jpg, etc.)
  • List objects in the console
  • View object properties

Each request (GET, PUT, DELETE, LIST, HEAD) creates a log entry.


6. Step 5 — View the Logs

After 1–3 hours (delivery is asynchronous):

  1. Open the logging bucketObjects.
  2. You’ll see log files such as:
   2025-10-27-12-34-56-UUID
  3. Open one log file → you’ll see lines like:
   79a1b2f3 my-bucket [27/Oct/2025:13:02:44 +0000] 192.0.2.10 requester REST.PUT.OBJECT coffee.jpg "200" -

Each line records:

  • Bucket name and timestamp
  • Requester (IP or IAM user)
  • Operation (GET, PUT, DELETE, etc.)
  • Object key
  • HTTP status code
  • Request size and response size

✅ Logs can later be analyzed using Athena, Glue, or CloudWatch Logs.


7. Step 6 — Verify Correct Setup

  • Source bucket → Properties → Server Access Logging: ✅ Enabled
  • Logging bucket → Permissions: ✅ Policy grants logging.s3.amazonaws.com access
  • Log objects arrive periodically → ✅ Confirmed

8. Cleanup (Optional)

  • Disable logging or delete log files to save storage.
  • Keep the logging bucket for future audits if needed.

Key Takeaways

| Feature | Purpose |
| --- | --- |
| S3 Access Logs | Record all requests made to a bucket |
| Destination bucket | Must be in the same region as the source |
| Automatic policy update | Grants the S3 logging service permission to write logs |
| Delivery time | Logs are delivered every few hours (asynchronously) |
| Never log to the same bucket | Avoid infinite loop and high costs |
| Analysis tools | Athena, Glue, CloudWatch, or any text parser |
