- Basics of S3
- S3 is a universal namespace; bucket names must be unique globally
- When you upload a file to S3, you receive an HTTP 200 status code if the upload was successful
- S3 is object-based, i.e. it allows you to upload files
- Objects consist of:
- Key (name)
- Value (data)
- Version ID
- Metadata
- Subresources (Access Control Lists, Torrent)
- Files can be from 0 bytes to 5 TB; total storage is unlimited
- Files are stored in Buckets (equivalent to folders)
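The object components above can be sketched as a plain Python dict; every value here is an invented example, not real data.

```python
# The parts of an S3 object sketched as a dict; all values are made up.
s3_object = {
    "Key": "photos/cat.jpg",                # name (full path within the bucket)
    "Value": b"...binary data...",          # the file contents
    "VersionId": "example-version-id",      # placeholder, not a real version ID
    "Metadata": {"Content-Type": "image/jpeg"},
    "Subresources": ["acl", "torrent"],     # Access Control List, Torrent
}
```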
- Data consistency in S3
- Read-after-write consistency for PUTs of new objects (you can read a file immediately after writing it)
- Eventual consistency for overwrite PUTs and DELETEs (a read immediately after an overwrite or delete may briefly return the previous version)
- S3 guarantees:
- Built for 99.99% availability of the S3 platform
- Amazon's SLA guarantees 99.9% availability
- 99.999999999% durability for S3 information (11 x 9s)
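To get a feel for what eleven 9s of durability means, here is some illustrative arithmetic (the object count is an arbitrary example):

```python
# Illustrative arithmetic only: what 99.999999999% (eleven 9s) annual
# durability implies on average for a large number of objects.
annual_loss_probability = 1 - 0.99999999999  # ~1e-11 per object per year

objects_stored = 10_000_000
expected_losses_per_year = objects_stored * annual_loss_probability
print(expected_losses_per_year)  # ~0.0001, i.e. one object every ~10,000 years
```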
- S3 features:
- Tiered Storage available
- Lifecycle Management
- Versioning
- Encryption
- MFA Delete
- S3 Storage Classes (Tiers)
- S3 Standard
- 99.99% availability, 99.999999999% durability, stored redundantly across multiple devices in multiple facilities, designed to sustain the loss of 2 facilities concurrently
- S3 - IA (Infrequently Accessed)
- For data that is accessed infrequently, but requires rapid access when needed
- S3 One Zone - IA
- A lower-cost option for IA data when you do not require the multi-AZ data resilience
- S3 - Intelligent Tiering
- Automatically moves data to the most cost-effective access tier by monitoring access patterns
- S3 Glacier
- Secure, durable, and low-cost storage for data archiving; retrieval times configurable from minutes to hours
- S3 Glacier Deep Archive
- Lowest-cost storage class where a retrieval time of 12 hours is acceptable
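In the API, the tiers above go by constant names, set via the `StorageClass` parameter (or the `x-amz-storage-class` header) when uploading:

```python
# API-level names for the storage classes listed above.
STORAGE_CLASSES = {
    "S3 Standard": "STANDARD",
    "S3 - IA": "STANDARD_IA",
    "S3 One Zone - IA": "ONEZONE_IA",
    "S3 - Intelligent Tiering": "INTELLIGENT_TIERING",
    "S3 Glacier": "GLACIER",
    "S3 Glacier Deep Archive": "DEEP_ARCHIVE",
}

# Used with boto3 like (not executed here; bucket/key are made up):
# s3.put_object(Bucket="my-bucket", Key="report.pdf",
#               Body=data, StorageClass=STORAGE_CLASSES["S3 - IA"])
```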
- S3 is charged based on:
- Storage
- Requests
- Storage Management Pricing (different tiers)
- Data Transfer Pricing
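The billing dimensions above can be combined into a toy cost model. The per-GB and per-request prices below are placeholders, not real AWS rates — always check the S3 pricing page:

```python
# Toy monthly cost model for the dimensions above. Prices are invented
# placeholders, not actual AWS rates.
STORAGE_PRICE_PER_GB = {"STANDARD": 0.023, "STANDARD_IA": 0.0125, "GLACIER": 0.004}
PRICE_PER_1000_REQUESTS = 0.005
DATA_TRANSFER_OUT_PER_GB = 0.09

def monthly_cost(gb_stored, storage_class, requests, gb_transferred_out):
    return (gb_stored * STORAGE_PRICE_PER_GB[storage_class]
            + requests / 1000 * PRICE_PER_1000_REQUESTS
            + gb_transferred_out * DATA_TRANSFER_OUT_PER_GB)

print(round(monthly_cost(100, "STANDARD", 10_000, 5), 2))  # ~2.8
```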
- Transfer Acceleration
- Enables fast, easy, and secure transfer of files over long distances between your end users and an S3 bucket. It takes advantage of Amazon CloudFront's globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path
- If users want to upload a large file to a bucket in London, they can upload it to a nearby edge location, and that edge location will use Amazon's backbone network to bring it into London
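Once acceleration is enabled on a bucket, clients simply swap the regular endpoint for the accelerate endpoint. A small sketch (the bucket name and key are made up):

```python
# Build the Transfer Acceleration URL for an object; clients use this in
# place of the regular regional endpoint. Bucket/key names are made up.
def accelerate_url(bucket: str, key: str) -> str:
    return f"https://{bucket}.s3-accelerate.amazonaws.com/{key}"

print(accelerate_url("my-london-bucket", "videos/big-file.mp4"))
# -> https://my-london-bucket.s3-accelerate.amazonaws.com/videos/big-file.mp4
```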
- Cross Region Replication
- Automatically replicates objects in a bucket to a bucket in a different region (versioning must be enabled on both the source and destination buckets)
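A replication setup can be sketched as the configuration boto3's `put_bucket_replication` expects; the IAM role and bucket ARNs below are made-up examples:

```python
# Replicate every object in the source bucket to a bucket in another
# region. The role ARN and bucket names are invented placeholders.
replication_configuration = {
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [
        {
            "ID": "replicate-everything",
            "Prefix": "",  # empty prefix = all objects
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::my-backup-bucket-eu-west-2"},
        }
    ],
}

# Applied with boto3 (not executed here):
# s3.put_bucket_replication(
#     Bucket="my-source-bucket",
#     ReplicationConfiguration=replication_configuration)
```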