Amazon S3 Deep Dive (part 2-buckets)

Supratip Banerjee for AWS Community Builders

Object storage in Amazon S3

In the last lesson, you learned that object storage is a flat storage structure in which objects are stored in buckets. An object is any piece of data stored inside a bucket. You also learned that you can create a pseudo-folder structure using prefixes. In Amazon S3 object storage, you can organize objects to imitate a hierarchy by using key name prefixes and delimiters. Prefixes and delimiters let you group similar items to help visually organize and easily retrieve your data. In the user interface, these prefixes give the appearance of a folder/subfolder structure, but in reality the storage is still flat.

In the image below, you have a bucket called getting-started-with-s3. Inside the bucket there is an object called dolphins.jpg. To organize and group the oceanography data for the external vendor you created a logical hierarchy using the prefix ocean. Ocean looks like a subfolder but this is only to help make the structure readable.

In reality, the key name of the dolphins.jpg object is a little longer, ocean/dolphins.jpg, which is how you locate the object. It still sits in one single flat storage structure.

[Image: the getting-started-with-s3 bucket with the object dolphins.jpg stored under the ocean/ prefix]

You can use prefixes to organize the data that you store in Amazon S3 buckets. A prefix value is similar to a directory name that enables you to group similar objects together in a bucket. When you programmatically upload objects, you can use prefixes to organize your data.
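For instance, a rough boto3 sketch of such an upload (assuming the getting-started-with-s3 bucket from the example above exists, a local file named dolphins.jpg, and AWS credentials are already configured) could look like this:

import boto3

s3 = boto3.client("s3")

# The "ocean/" prefix is simply part of the key name; S3 does not create a real folder.
s3.upload_file(
    Filename="dolphins.jpg",              # local file, assumed to exist
    Bucket="getting-started-with-s3",     # bucket from the example above
    Key="ocean/dolphins.jpg",             # prefix + object name form the full key
)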

The prefix limits the results to only those keys that begin with the specified prefix. The delimiter causes a list operation to roll up all the keys that share a common prefix into a single summary list result.

The purpose of the prefix and delimiter parameters is to help you organize and then browse your keys hierarchically. To do this, first pick a delimiter for your bucket, such as slash (/), that doesn't occur in any of your anticipated key names. Next, construct your key names by concatenating all containing levels of the hierarchy, separating each level with the delimiter.

For example, if you were storing information about cities, you might naturally organize them by continent, then by country, then by province or state. Because these names don't usually contain punctuation, you might use slash (/) as the delimiter. The following examples use a slash (/) delimiter.

• Europe/France/Nouvelle-Aquitaine/Bordeaux
• North America/Canada/Quebec/Montreal
• North America/USA/Washington/Bellevue
• North America/USA/Washington/Seattle

If you stored data for every city in the world in this manner, it would become awkward to manage a flat key namespace. By using Prefix and Delimiter with the list operation, you can use the hierarchy you've created to list your data. For example, to list all the states in USA, set Delimiter='/' and Prefix='North America/USA/'. To list all the provinces in Canada for which you have data, set Delimiter='/' and Prefix='North America/Canada/'.
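As a sketch of that list operation with boto3 (the bucket name example-cities-bucket is a placeholder, and credentials are assumed to be configured):

import boto3

s3 = boto3.client("s3")

# Roll up everything below North America/USA/ into one entry per state.
response = s3.list_objects_v2(
    Bucket="example-cities-bucket",       # placeholder bucket holding the city keys
    Prefix="North America/USA/",
    Delimiter="/",
)

for common_prefix in response.get("CommonPrefixes", []):
    print(common_prefix["Prefix"])        # e.g. North America/USA/Washington/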

Listing objects using prefixes and delimiters

A list request with a delimiter lets you browse your hierarchy at just one level, skipping over and summarizing the (possibly millions of) keys nested at deeper levels. For example, assume that you have a bucket (ExampleBucket) with the following keys.

sample.jpg
photos/2006/January/sample.jpg
photos/2006/February/sample2.jpg
photos/2006/February/sample3.jpg
photos/2006/February/sample4.jpg

The sample bucket has only the sample.jpg object at the root level. To list only the root-level objects in the bucket, you send a GET request on the bucket with the "/" delimiter character. In response, Amazon S3 returns the sample.jpg object key because it does not contain the "/" delimiter character. All the other keys contain the delimiter character. Amazon S3 groups these keys and returns a single CommonPrefixes element with the prefix value photos/, which is a substring from the beginning of these keys to the first occurrence of the specified delimiter.
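A minimal boto3 sketch of that root-level listing (using a lowercase placeholder bucket name, since real bucket names cannot contain uppercase letters):

import boto3

s3 = boto3.client("s3")

# Delimiter="/" returns root-level keys in Contents and rolls everything
# under photos/ into a single CommonPrefixes entry.
response = s3.list_objects_v2(Bucket="examplebucket", Delimiter="/")

for obj in response.get("Contents", []):
    print("Object:", obj["Key"])          # sample.jpg

for prefix in response.get("CommonPrefixes", []):
    print("Prefix:", prefix["Prefix"])    # photos/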

Bucket overview

Buckets are permanent containers that hold objects. You can create between 1 and 100 buckets in each AWS account, and you can raise that limit to a maximum of 1,000 buckets by submitting a service limit increase. Bucket sizes are virtually unlimited, so you don't have to allocate a predetermined bucket size the way you would when creating a storage volume or partition.

An Amazon S3 bucket is a versatile storage option with the ability to host a static website, hold version information on objects, and use lifecycle management policies to balance version retention against bucket size and cost.
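As a hedged sketch of those capabilities (the bucket name and Region are placeholders; buckets in us-east-1 are created without a LocationConstraint), creating a bucket and enabling versioning with boto3 might look like this:

import boto3

s3 = boto3.client("s3")

# Create a bucket in a specific Region (the name must be globally unique).
s3.create_bucket(
    Bucket="my-example-oceanography-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Keep version information on objects, one of the capabilities mentioned above.
s3.put_bucket_versioning(
    Bucket="my-example-oceanography-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)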


Bucket limitations

Before creating an Amazon S3 bucket, there are some important restrictions and limitations that you should know. The following sections describe them.

Bucket owner

Amazon S3 buckets are owned by the account that creates them and cannot be transferred to other accounts.

Bucket names

Bucket names are globally unique. There can be no duplicate names within the entire S3 infrastructure.

Bucket renaming

Once created, you cannot change a bucket name.

Permanent entities

Buckets are permanent storage entities and can only be removed when they are empty. After you delete a bucket, the name typically becomes available for reuse after 24 hours, as long as another account has not claimed it first.
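Because the bucket has to be empty first, a teardown sketch with boto3 (placeholder bucket name; this ignores versioned objects, which would also need to be removed) might look like this:

import boto3

s3 = boto3.client("s3")
bucket = "my-example-oceanography-bucket"     # placeholder name

# Delete every object first; delete_bucket fails while the bucket still holds objects.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        s3.delete_object(Bucket=bucket, Key=obj["Key"])

s3.delete_bucket(Bucket=bucket)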

Object storage limits

There’s no limit to the number of objects you can store in a bucket. You can store all of your objects in a single bucket, or organize them across several buckets. However, you can't create a bucket from within another bucket, also known as nesting buckets.

Bucket creation limits

By default, you can create up to 100 buckets in each of your AWS accounts. If you need additional buckets, you can increase your account bucket limit to a maximum of 1,000 buckets by submitting a service limit increase.

Naming buckets

When naming buckets, carefully determine how you want to structure your bucket names and how they will function. Will you use them only for data storage or hosting a static website? Your bucket names matter to S3, and based on how you use the bucket, your bucket names and characters will vary. Bucket names are globally viewable and need to be DNS-compliant.

Here are the rules to follow when naming your buckets. Bucket names must:

• Be unique across all of Amazon S3
• Be between 3 and 63 characters long
• Consist only of lowercase letters, numbers, dots (.), and hyphens (-)
• Start with a lowercase letter or number
• Not begin with the prefix xn-- (for buckets created after February 2020)
• Not be formatted as an IP address (for example, 198.68.10.2)
• Use a dot (.) in the name only if the bucket's intended purpose is to host an Amazon S3 static website; otherwise, do not use dots in the bucket name
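A minimal sketch of these rules as a Python validation helper (the function name and checks are my own approximation of the list above, not an official AWS validator):

import re

def looks_like_valid_bucket_name(name: str, static_website: bool = False) -> bool:
    # 3-63 characters; lowercase letters, numbers, dots, and hyphens; starts with a letter or number
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]{2,62}", name):
        return False
    # Must not begin with the reserved xn-- prefix
    if name.startswith("xn--"):
        return False
    # Must not be formatted as an IP address
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", name):
        return False
    # Use dots only when the bucket is meant to host a static website
    if "." in name and not static_website:
        return False
    return True

print(looks_like_valid_bucket_name("getting-started-with-s3"))   # True
print(looks_like_valid_bucket_name("198.68.10.2"))               # False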

[Image: Amazon S3 bucket URL]

This identifies the bucket URL, which is formed from the bucket name and the Region endpoint.
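For illustration only (the Region is assumed here), a virtual-hosted-style URL puts those parts together like this:

# Virtual-hosted-style URL: https://<bucket-name>.s3.<region>.amazonaws.com/<key>
bucket = "getting-started-with-s3"
region = "us-east-1"                          # assumed Region for this example
key = "ocean/dolphins.jpg"
print(f"https://{bucket}.s3.{region}.amazonaws.com/{key}")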
