Part 1 — Create the bucket (base)
Create main bucket
- AWS Console → search S3
- Click Buckets → Create bucket
- Bucket name: jumptotech-lab-app-bucket-2026
- AWS Region: choose your region (example: us-east-2)
- Block Public Access settings: keep ON for now (safer).
- Click Create bucket
You now have your main bucket.
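If you want to show the same step from the command line, here is a sketch using the AWS CLI (assumes AWS CLI v2 is installed and configured with credentials that can manage S3):

```shell
# Create the lab bucket. Outside us-east-1 you must pass a LocationConstraint
# that matches the region.
aws s3api create-bucket \
  --bucket jumptotech-lab-app-bucket-2026 \
  --region us-east-2 \
  --create-bucket-configuration LocationConstraint=us-east-2

# Block Public Access is ON by default for new buckets; this makes it explicit.
aws s3api put-public-access-block \
  --bucket jumptotech-lab-app-bucket-2026 \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```

Bucket names are globally unique, so students should adjust the name if it is already taken.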
Part 2 — Bucket Versioning
Why DevOps uses it
- Rollback deleted/overwritten files (configs, artifacts)
- Required for replication
- Helps protect important objects (like backups)
Enable it (clicks)
- S3 → Buckets → click your bucket
- Go to Properties
- Scroll to Bucket Versioning
- Click Edit
- Select Enable
- Click Save changes
How to use / verify
- Go to Objects tab
- Click Upload → upload a file: app.zip
- Upload the same name app.zip again (with different content if possible)
- In the Objects list, enable Show versions
- You’ll see multiple versions of app.zip
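The same enable-and-verify flow can be sketched with the CLI (assumes an app.zip file exists locally):

```shell
# Turn on versioning for the bucket.
aws s3api put-bucket-versioning \
  --bucket jumptotech-lab-app-bucket-2026 \
  --versioning-configuration Status=Enabled

# Upload the same key twice, then list its versions.
aws s3 cp app.zip s3://jumptotech-lab-app-bucket-2026/app.zip
aws s3 cp app.zip s3://jumptotech-lab-app-bucket-2026/app.zip
aws s3api list-object-versions \
  --bucket jumptotech-lab-app-bucket-2026 \
  --prefix app.zip
```

The output lists one entry per version, each with its own VersionId.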
Part 3 — Default Encryption (SSE-S3 or SSE-KMS)
Why DevOps uses it
- Compliance/security: data encrypted at rest
- Many companies require KMS keys for audit controls
Enable default encryption (clicks)
- Bucket → Properties
- Scroll to Default encryption
- Click Edit
- Check Enable
- Choose one:
- SSE-S3 (simple, AWS-managed)
- SSE-KMS (stronger control; uses a KMS key)
- If SSE-KMS: choose the AWS managed key (aws/s3) for an easy start, or your own CMK later
- Click Save changes
How to verify
- Upload a new file
- Click the file → look at Server-side encryption in details (it should show SSE-S3 or SSE-KMS)
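A CLI sketch of the same steps (SSE-S3 shown; for SSE-KMS you would swap in "aws:kms" plus a KMSMasterKeyID):

```shell
# Set SSE-S3 (AES256) as the bucket's default encryption.
aws s3api put-bucket-encryption \
  --bucket jumptotech-lab-app-bucket-2026 \
  --server-side-encryption-configuration '{
    "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
  }'

# Verify: head-object reports the ServerSideEncryption used for a stored object.
aws s3api head-object \
  --bucket jumptotech-lab-app-bucket-2026 \
  --key app.zip
```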
Part 4 — Intelligent-Tiering + Archive (using Lifecycle Rules)
Intelligent-Tiering itself is a storage class. “Archive” happens through Archive Access tiers and/or lifecycle transitions (Glacier/Deep Archive).
Why DevOps uses it
- Cuts cost automatically for data that’s not used often (logs, backups)
Setup (clicks)
- Bucket → Management
- Lifecycle rules → click Create lifecycle rule
- Lifecycle rule name: int-tier-and-archive
- Choose a rule scope: select Apply to all objects in the bucket
- Scroll to Lifecycle rule actions
- Check Transition current versions of objects between storage classes
- Set transitions:
  - After 0 days → transition to Intelligent-Tiering
  - After 30 days → transition to Glacier Flexible Retrieval (or Glacier Instant Retrieval)
  - After 90 days → transition to Glacier Deep Archive
- Click Create rule
How to verify
- Lifecycle transitions don’t happen instantly. For teaching, show students that the rule exists and explain that AWS performs the transitions later, based on object age.
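The same rule can be expressed as a lifecycle JSON document and applied with the CLI (a sketch; storage-class names in the API are the upper-case constants):

```shell
# Write the lifecycle rule: tier immediately, archive at 30 and 90 days.
cat > lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "int-tier-and-archive",
    "Status": "Enabled",
    "Filter": {},
    "Transitions": [
      {"Days": 0,  "StorageClass": "INTELLIGENT_TIERING"},
      {"Days": 30, "StorageClass": "GLACIER"},
      {"Days": 90, "StorageClass": "DEEP_ARCHIVE"}
    ]
  }]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket jumptotech-lab-app-bucket-2026 \
  --lifecycle-configuration file://lifecycle.json
```

Note that this API call replaces the bucket's entire lifecycle configuration, a point that matters again in Part 14.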
Part 5 — Server Access Logging
Why DevOps uses it
- Records requests to your bucket (who/what accessed)
- Useful for security audits and troubleshooting
Step A: create a log bucket (required)
- S3 → Buckets → Create bucket
- Name: jumptotech-lab-s3-logs-2026
- Keep Block Public Access ON
- Create
Step B: enable access logging on main bucket (clicks)
- Open your main bucket
- Go to Properties
- Scroll to Server access logging
- Click Edit
- Select Enable
- Target bucket: choose jumptotech-lab-s3-logs-2026
- Target prefix: access-logs/
- Click Save changes
How to verify
- Use main bucket (upload/download a file)
- Wait a while (access-log delivery is best effort and can take an hour or more)
- Open the log bucket → you should see log files under access-logs/
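A CLI sketch of the same setting. One caveat: the console grants the S3 logging service permission to write into the target bucket for you; from the CLI you may also need a bucket policy on the log bucket allowing the logging.s3.amazonaws.com service principal.

```shell
# Point server access logs at the log bucket under the access-logs/ prefix.
aws s3api put-bucket-logging \
  --bucket jumptotech-lab-app-bucket-2026 \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "jumptotech-lab-s3-logs-2026",
      "TargetPrefix": "access-logs/"
    }
  }'

# Later, list delivered log files.
aws s3 ls s3://jumptotech-lab-s3-logs-2026/access-logs/
```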
Part 6 — AWS CloudTrail Data Events (S3 object-level logging)
Why DevOps uses it
- Tracks API actions like GetObject, PutObject, DeleteObject
- This is the audit log many security teams require
Enable Data Events (clicks)
- AWS Console → search CloudTrail
- Click Trails
- If you have a trail already, click it. If not:
- Click Create trail
- Name: org-trail (example)
- Storage: choose/create an S3 bucket for CloudTrail logs
- Click Create trail
- Open your trail → click Edit
- Find Data events → click Add data event
- Data event type: choose S3
- Choose Specific S3 buckets
- Select your main bucket jumptotech-lab-app-bucket-2026
- Choose event types: Read events and Write events
- Save
How to verify
- Upload/delete/download an object in the bucket
- Note: CloudTrail Event history shows management events only; object-level data events are delivered to the trail’s S3 log bucket
- In the trail’s log files, look for PutObject, GetObject, etc.
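The same data-event selector can be attached to an existing trail from the CLI (a sketch, using the org-trail name from this part):

```shell
# Log read and write object-level events for the lab bucket.
# The trailing "/" on the ARN scopes the selector to objects in that bucket.
aws cloudtrail put-event-selectors \
  --trail-name org-trail \
  --event-selectors '[{
    "ReadWriteType": "All",
    "IncludeManagementEvents": true,
    "DataResources": [{
      "Type": "AWS::S3::Object",
      "Values": ["arn:aws:s3:::jumptotech-lab-app-bucket-2026/"]
    }]
  }]'
```

Data events are billed separately from management events, which is worth mentioning to students.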
Part 7 — Event Notifications (S3 → SNS/SQS/Lambda)
Why DevOps uses it
- Trigger automation when a file arrives:
- CI artifacts uploaded → trigger pipeline
- Image uploaded → trigger Lambda processing
Example beginner setup: S3 → SNS Topic
Step A: Create SNS topic
- AWS Console → search SNS
- Click Topics → Create topic
- Type: Standard
- Name: s3-upload-topic
- Click Create topic
Step B: Add S3 event notification
- Go to S3 bucket → Properties
- Scroll to Event notifications
- Click Create event notification
- Name: on-upload
- Event types: select All object create events
- Destination: choose SNS topic
- Pick s3-upload-topic
- Save changes (if saving fails, the topic’s access policy must allow S3 to publish to it)
How to verify
- In SNS, create a subscription (email) to see messages:
- SNS → topic → Create subscription
- Protocol: Email
- Enter your email → Create
- Confirm email subscription
- Upload a file to S3 → you should get an email notification.
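The whole S3 → SNS wiring can be sketched with the CLI. The topic policy step is required here because, unlike the console, the API does not set it up for you:

```shell
# Create the topic and capture its ARN.
TOPIC_ARN=$(aws sns create-topic --name s3-upload-topic \
  --query TopicArn --output text)

# Allow S3 (from this specific bucket) to publish to the topic.
aws sns set-topic-attributes --topic-arn "$TOPIC_ARN" \
  --attribute-name Policy \
  --attribute-value '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "s3.amazonaws.com"},
      "Action": "SNS:Publish",
      "Resource": "'"$TOPIC_ARN"'",
      "Condition": {"ArnLike": {"aws:SourceArn": "arn:aws:s3:::jumptotech-lab-app-bucket-2026"}}
    }]
  }'

# Point the bucket's object-create events at the topic.
# Caution: this call replaces the bucket's whole notification configuration.
aws s3api put-bucket-notification-configuration \
  --bucket jumptotech-lab-app-bucket-2026 \
  --notification-configuration '{
    "TopicConfigurations": [{
      "Id": "on-upload",
      "TopicArn": "'"$TOPIC_ARN"'",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'
```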
Part 8 — Amazon EventBridge integration
Why DevOps uses it
- Central event bus routing to many targets (Step Functions, Lambda, SQS)
- Better than many direct S3 notifications when you scale
Enable (clicks)
- S3 bucket → Properties
- Scroll to Amazon EventBridge
- Click Edit
- Enable Send events to EventBridge
- Save
Create a rule (clicks)
- AWS Console → search EventBridge
- Click Rules → Create rule
- Name: s3-object-created-rule
- Event bus: default
- Rule type: Rule with an event pattern
- Event source: AWS events
- AWS service: Simple Storage Service (S3)
- Event type: Object Created
- (Optional) filter by bucket name (if the UI allows)
- Target: choose SNS or Lambda
- Create rule
Verify
- Upload file → check target receives event (SNS email or Lambda logs)
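A CLI sketch of both steps. The event pattern below filters on the lab bucket's name, which is what the optional console filter does:

```shell
# Turn on EventBridge delivery for the bucket. As in Part 7, this call
# replaces the whole notification configuration, so merge it with any
# existing SNS/SQS/Lambda configuration you want to keep.
aws s3api put-bucket-notification-configuration \
  --bucket jumptotech-lab-app-bucket-2026 \
  --notification-configuration '{"EventBridgeConfiguration": {}}'

# Match "Object Created" events from this bucket on the default bus.
aws events put-rule \
  --name s3-object-created-rule \
  --event-pattern '{
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": ["jumptotech-lab-app-bucket-2026"]}}
  }'

# Then attach a target, e.g. the SNS topic from Part 7 (ARN is a placeholder):
# aws events put-targets --rule s3-object-created-rule \
#   --targets Id=sns1,Arn=arn:aws:sns:us-east-2:<account-id>:s3-upload-topic
```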
Part 9 — Transfer Acceleration
Why DevOps uses it
- Faster global uploads (teams in other countries, large files)
Enable (clicks)
- S3 bucket → Properties
- Scroll to Transfer acceleration
- Click Edit
- Choose Enable
- Save
How to use
- Upload using the accelerate endpoint: https://<bucketname>.s3-accelerate.amazonaws.com
- In the CLI you can enable accelerate usage in certain tooling (advanced); for beginners, show the concept and the endpoint.
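For instructors who do want to demo the CLI side, a sketch (the configure setting makes s3 commands use the accelerate endpoint):

```shell
# Enable Transfer Acceleration on the bucket.
aws s3api put-bucket-accelerate-configuration \
  --bucket jumptotech-lab-app-bucket-2026 \
  --accelerate-configuration Status=Enabled

# Tell the CLI to use the accelerate endpoint for s3 transfers.
aws configure set default.s3.use_accelerate_endpoint true

# Uploads now travel via the accelerate endpoint.
aws s3 cp app.zip s3://jumptotech-lab-app-bucket-2026/app.zip
```

Acceleration incurs an extra per-GB charge, so remind students to disable it after the lab.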
Part 10 — Static Website Hosting
Why DevOps uses it
- Host simple frontend (HTML/JS) cheaply
- Used for training demos and static sites
Enable (clicks)
- S3 bucket → Properties
- Scroll to Static website hosting
- Click Edit
- Select Enable
- Hosting type: Host a static website
- Index document: index.html
- Error document: error.html
- Save changes
Upload website files
- Go to Objects tab
- Click Upload
- Upload index.html and error.html
Make it accessible (IMPORTANT)
A static website needs public read access. That is fine for a beginner lab, but explain that it’s not recommended for sensitive data.
Option A (simple lab, public bucket policy):
- Bucket → Permissions
- Block public access → Edit
- Uncheck Block all public access (lab only)
- Save (type confirm)
- Still in Permissions → Bucket policy → Edit
- Paste (replace bucket name):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadForWebsite",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::jumptotech-lab-app-bucket-2026/*"
}
]
}
- Save
How to open it
- Bucket → Properties
- Static website hosting section shows Website endpoint
- Open that URL in browser
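The same setup from the CLI, as a sketch (policy.json holds the bucket policy shown in Option A; Block Public Access must be disabled first):

```shell
# Enable website hosting with the index/error documents from this part.
aws s3 website s3://jumptotech-lab-app-bucket-2026/ \
  --index-document index.html --error-document error.html

# Apply the public-read bucket policy.
aws s3api put-bucket-policy \
  --bucket jumptotech-lab-app-bucket-2026 \
  --policy file://policy.json

# Fetch the website endpoint shown in the console.
aws s3api get-bucket-website --bucket jumptotech-lab-app-bucket-2026
```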
Part 11 — Access Control List (ACL)
Why DevOps cares
- ACL is legacy. Many companies disable it to avoid confusion and security issues.
Where to see it / set it
- Bucket → Permissions
- Scroll to Access control list (ACL)
You will often see it disabled/limited when Object Ownership is “Bucket owner enforced”.
Part 12 — CORS (Cross-Origin Resource Sharing)
Why DevOps uses it
- Frontend hosted on one domain needs to call files from S3 (browser security)
- Common for web apps pulling images/files from S3
Configure (clicks)
- Bucket → Permissions
- Scroll to Cross-origin resource sharing (CORS)
- Click Edit
- Paste (beginner example):
[
{
"AllowedOrigins": ["*"],
"AllowedMethods": ["GET", "HEAD"],
"AllowedHeaders": ["*"],
"ExposeHeaders": [],
"MaxAgeSeconds": 3000
}
]
- Save
How to verify
- In real verification you test from a browser app. For beginners: explain “without CORS, browser blocks cross-domain requests.”
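The CORS document above can also be applied with the CLI; note the API wraps the rules in a CORSRules key:

```shell
# Allow cross-origin GET/HEAD from any origin (beginner example only;
# production configs should list specific origins).
aws s3api put-bucket-cors \
  --bucket jumptotech-lab-app-bucket-2026 \
  --cors-configuration '{
    "CORSRules": [{
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }]
  }'
```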
Part 13 — Object Ownership (recommended setting)
Why DevOps uses it
- Prevents “uploaded object is owned by someone else” problems
- Lets you disable ACLs and manage permissions with policies only
Set it (clicks)
- Bucket → Permissions
- Scroll to Object Ownership
- Click Edit
- Choose Bucket owner enforced (ACLs disabled)
- Save
This is the best practice for most modern setups.
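The equivalent CLI call, as a sketch:

```shell
# Enforce bucket-owner ownership and disable ACLs.
aws s3api put-bucket-ownership-controls \
  --bucket jumptotech-lab-app-bucket-2026 \
  --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]'
```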
Part 14 — Lifecycle rules (separate example: delete old logs)
You already created one lifecycle rule for tiering. Now add a second rule for cleanup.
Why DevOps uses it
- Automatically delete junk/old logs to control costs
Create rule (clicks)
- Bucket → Management
- Lifecycle rules → Create lifecycle rule
- Name: delete-old-logs
- Scope: apply to the prefix logs/ (optional) or all objects
- Actions:
- Check Expire current versions of objects
- Set 365 days
- Create rule
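From the CLI this is a good place to show a gotcha: put-bucket-lifecycle-configuration replaces the entire configuration, so both rules from this lab must be sent in one document (a sketch):

```shell
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "int-tier-and-archive",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        {"Days": 0,  "StorageClass": "INTELLIGENT_TIERING"},
        {"Days": 30, "StorageClass": "GLACIER"},
        {"Days": 90, "StorageClass": "DEEP_ARCHIVE"}
      ]
    },
    {
      "ID": "delete-old-logs",
      "Status": "Enabled",
      "Filter": {"Prefix": "logs/"},
      "Expiration": {"Days": 365}
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket jumptotech-lab-app-bucket-2026 \
  --lifecycle-configuration file://lifecycle.json
```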
Part 15 — Replication rules (Cross-Region Replication)
Why DevOps uses it
- Disaster recovery
- Compliance (copy data to another region)
Requirements
- Versioning must be enabled (you already did)
- Need a destination bucket in another region
Step A: Create destination bucket
- S3 → Create bucket
- Name: jumptotech-lab-app-bucket-2026-dr
- Region: pick another region (example: us-east-1)
- Create bucket
- Enable Versioning on destination bucket too:
- Destination bucket → Properties → Versioning → Enable
Step B: Create replication rule
- Open source (main) bucket
- Go to Management
- Scroll to Replication rules
- Click Create replication rule
- Rule name: replicate-to-dr
- Choose Entire bucket (or prefix-based)
- Destination:
  - Choose Bucket in this account (or another account)
  - Select the destination bucket ...-dr
- IAM role: choose Create new role (recommended)
- Encryption: if using SSE-KMS you must allow replication for the KMS key (advanced); for a beginner lab SSE-S3 is easiest
- Create rule
Verify
- Upload a new object to source bucket
- Check destination bucket after a few minutes → object should appear
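For reference, the same rule via CLI. This is a sketch only: the account ID and role name are placeholders, and a real replication role needs S3 read permissions on the source plus replicate permissions on the destination (the console's "Create new role" option builds one for you).

```shell
# V2 replication schema: Priority, Filter and DeleteMarkerReplication
# are required alongside the Destination.
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::<account-id>:role/s3-replication-role",
  "Rules": [{
    "ID": "replicate-to-dr",
    "Status": "Enabled",
    "Priority": 1,
    "Filter": {},
    "DeleteMarkerReplication": {"Status": "Disabled"},
    "Destination": {"Bucket": "arn:aws:s3:::jumptotech-lab-app-bucket-2026-dr"}
  }]
}
EOF

aws s3api put-bucket-replication \
  --bucket jumptotech-lab-app-bucket-2026 \
  --replication-configuration file://replication.json
```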
Part 16 — Inventory configurations
Why DevOps uses it
- Daily/weekly report of objects (CSV/Parquet)
- Helps audit, cost review, security checks
Setup (clicks)
- Bucket → Management
- Scroll to Inventory configurations
- Click Create inventory configuration
- Name: daily-inventory
- Scope: Current version only (or include versions)
- Destination bucket: choose your log bucket (or another inventory bucket)
- Destination prefix: inventory/
- Frequency: Daily
- Output format: CSV
- Additional fields: select things like Size, Last modified, Storage class (helpful)
- Create
Verify
- Inventory file appears later (not immediate). Show students where it will land.
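The matching CLI call, as a sketch (delivery to the log bucket from Part 5; the destination bucket also needs a policy allowing S3 inventory delivery, which the console configures for you):

```shell
aws s3api put-bucket-inventory-configuration \
  --bucket jumptotech-lab-app-bucket-2026 \
  --id daily-inventory \
  --inventory-configuration '{
    "Id": "daily-inventory",
    "IsEnabled": true,
    "IncludedObjectVersions": "Current",
    "Schedule": {"Frequency": "Daily"},
    "Destination": {"S3BucketDestination": {
      "Bucket": "arn:aws:s3:::jumptotech-lab-s3-logs-2026",
      "Prefix": "inventory/",
      "Format": "CSV"
    }},
    "OptionalFields": ["Size", "LastModifiedDate", "StorageClass"]
  }'
```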
Part 17 — Create an Access Point
Why DevOps uses it
- Microservices can have different endpoints + policies for the same bucket
- Avoids giving broad bucket access
Create (clicks)
- S3 Console (left menu) → Access Points
- Click Create access point
- Access point name: app-uploads-ap
- Choose your bucket jumptotech-lab-app-bucket-2026
- Network origin: Internet (for the lab; VPC-only is typical in production)
- (Optional) Add an access point policy (example: only allow uploads to uploads/)
- Click Create
How to use
- Applications can use the access point ARN/alias rather than the bucket name.
- In IAM, you grant permissions to the access point instead of bucket-wide access (cleaner).
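A CLI sketch (access points live under the s3control API and need your 12-digit account ID, shown here as a placeholder):

```shell
# Create an internet-facing access point on the lab bucket.
aws s3control create-access-point \
  --account-id <account-id> \
  --name app-uploads-ap \
  --bucket jumptotech-lab-app-bucket-2026

# List access points to see the ARN/alias applications will use.
aws s3control list-access-points --account-id <account-id>
```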
Summary: what DevOps should remember
- Versioning = rollback + required for replication
- Encryption = compliance
- Lifecycle + Intelligent-Tiering + Archive = cost control
- Access logs + CloudTrail data events = audit & security
- Event notifications + EventBridge = automation
- Static website hosting + CORS = frontend hosting & browser access
- Object ownership (bucket owner enforced) = best practice, disable ACL confusion
- Replication = DR
- Inventory = reporting and governance
- Access points = microservice-friendly permissions