Amazon S3 Multi-Region Access Points provide a global endpoint for routing Amazon S3 request traffic between AWS Regions. Each global endpoint routes Amazon S3 data request traffic from multiple sources, including traffic originating in Amazon Virtual Private Clouds (VPCs), from on-premises data centers over AWS PrivateLink, and from the public internet, without building complex networking configurations with separate per-Region endpoints. Establishing an AWS PrivateLink connection to an S3 Multi-Region Access Point lets you route S3 requests into AWS, or across multiple AWS Regions and accounts, over a private connection with a simple network architecture and configuration, and without needing to configure a VPC peering connection.
More Amazon documentation can be found here: https://aws.amazon.com/s3/features/multi-region-access-points/
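For orientation, every Multi-Region Access Point you create is assigned an alias, and requests can be addressed either by the access point's ARN or by its alias-based global endpoint. The values below are illustrative placeholders only (your own account ID and generated alias will differ), but they roughly show the shape of both:
arn:aws:s3::111122223333:accesspoint/mfzwi23gnjvgw.mrap
mfzwi23gnjvgw.mrap.accesspoint.s3-global.amazonaws.com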
AWS Multi-Region Access Point Mini-Project
(thanks to Adrian Cantrill for this mini-project)
We will create buckets in two different regions
Step One: Set Up Buckets
- Open S3 Console under your profile
- Create 2 buckets, one in each of your two chosen regions (each must have a globally unique name with no upper-case letters; appending random numbers is a good idea)
- Enable bucket versioning in each bucket
- In the left menu, select 'Multi-Region Access Points', then click 'Create Multi-Region Access Point'
- Give it a name that is unique within your AWS account
- Add the Buckets
- Click 'Create Multi-Region Access Point' at the bottom and wait for completion. Note: creation can take up to 24 hours to complete, but typically takes 10–30 minutes. (If you prefer the CLI, a rough equivalent of this step is sketched below.)
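The console steps above map roughly to the CLI commands below. The bucket names, regions, and account ID are placeholders for this sketch, and note that Multi-Region Access Point control-plane commands are documented as being served out of us-west-2:
# Create one bucket in each of two regions (names and regions are placeholders)
aws s3api create-bucket --bucket my-mrap-bucket-1111 --region us-east-1
aws s3api create-bucket --bucket my-mrap-bucket-2222 --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1
# Enable versioning on both buckets
aws s3api put-bucket-versioning --bucket my-mrap-bucket-1111 --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket my-mrap-bucket-2222 --versioning-configuration Status=Enabled
# Create the Multi-Region Access Point over both buckets (creation is asynchronous)
aws s3control create-multi-region-access-point --account-id 111122223333 --region us-west-2 --details '{"Name":"my-mrap","Regions":[{"Bucket":"my-mrap-bucket-1111"},{"Bucket":"my-mrap-bucket-2222"}]}'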
Step Two: Set Up Replication
Next, we configure replication between the buckets
- Note the ARN (Amazon Resource Name) and Alias and copy them for possible later use
- Click the 'Replication and Failover' tab
- Click the Replication button and note that no replication is configured yet
- Click the Failover Button and note that the two buckets are in 'Active/Active' Failover
- Scroll down and click "Create Replication Rules"
- Since we are 'Active/Active', we will use the 'Replicate Objects Among All Specified Buckets' template
- Click to select both Buckets
- The Scope can be limited by filters (beyond the scope of this project—experiment on your own), or applied to all objects in the bucket. Click 'Apply to all objects in the bucket' for this project.
- Accept the default checkboxes for 'Additional Replication Options', and click 'Create Replication Rules'
- We will see that replication is now in place (a rough CLI equivalent is sketched below)
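For reference, the console template above effectively creates a replication rule on each bucket pointing at the other. A rough one-direction CLI equivalent would look like the following; the IAM role ARN and bucket names are placeholders, the role must already exist with S3 replication permissions, and a mirror-image rule on the second bucket is needed for two-way replication:
# Minimal replication rule from bucket 1 to bucket 2 (placeholders throughout)
aws s3api put-bucket-replication --bucket my-mrap-bucket-1111 --replication-configuration '{
  "Role": "arn:aws:iam::111122223333:role/my-s3-replication-role",
  "Rules": [{
    "ID": "replicate-all",
    "Priority": 1,
    "Status": "Enabled",
    "Filter": {},
    "DeleteMarkerReplication": {"Status": "Disabled"},
    "Destination": {"Bucket": "arn:aws:s3:::my-mrap-bucket-2222"}
  }]
}'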
Step Three: Testing Multi-Region Access
To Test Multi-Region Access:
- Go back to the main console page in AWS
- Select another region, not one of the two you configured
- Click CloudShell to pull up a command line interface (Note: CloudShell is not available in every region; see here for a list of available regions you can use: https://docs.aws.amazon.com/general/latest/gr/cloudshell.html)
- Create a 10 MB test file using the command:
dd if=/dev/urandom of=test1.file bs=1M count=10
- Upload it to the ARN you copied earlier using the command:
aws s3 cp test1.file s3://{insertyourarnhere}/
- Check your buckets: you will see the file appear in one and ultimately replicated to the second bucket. (Note: there is no set time for S3 replication to complete; it can take up to a couple of hours according to the AWS documentation. You can enable S3 Replication Time Control, which is designed to replicate 99.99% of objects within 15 minutes, but there is a cost associated with it.)
- Let's do another test: switch to another region that has CloudShell and create another file, naming it test2.file:
dd if=/dev/urandom of=test2.file bs=1M count=10
- And upload it to the ARN:
aws s3 cp test2.file s3://{insertyourarnhere}/
- Open the two buckets in separate windows and see which bucket receives the object first
- For the third test, keep the two bucket regions open and pick a region that is roughly central between your two regions
- Create and upload a 3rd file (name it test3.file)
- See which bucket receives the object first (to check from the command line instead of the console, see the head-object sketch after this list)
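If you would rather check replication from the command line than the console, head-object on each bucket reports a ReplicationStatus for the object (for example PENDING on the source until replication finishes and REPLICA on the destination copy). The bucket names below are the same placeholders used in the earlier sketches:
# Check replication status of the uploaded object in each bucket (bucket names are placeholders)
aws s3api head-object --bucket my-mrap-bucket-1111 --key test3.file --query ReplicationStatus
aws s3api head-object --bucket my-mrap-bucket-2222 --key test3.file --query ReplicationStatus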
For our 4th and final test, we're going to try to GET an object via our Multi-Region Access Point: the object has been created in one bucket, but our GET request is routed to another bucket to which the file has not yet been replicated.
- Open two CloudShells, one in each bucket's region
- In one region, create a new file as above, naming it test4.file
- Enter the command to copy the file to the bucket, but do not execute it yet
- Go to the other Cloudshell and enter this command:
aws s3 cp s3://{insertyourarnhere}/test4.file . (that's a space and a period after the command)
- Go back to the first Cloudshell and run the command to copy the file to the bucket.
- Go to the other region and run the command you typed in. You should get a failure like so: fatal error: An error occurred (404) when calling the HeadObject operation: Key "test4.file" does not exist. That’s because the file hasn't replicated to that region yet.
This shows what can happen if you have replication enabled and an application requests an object from a region it has not replicated to yet. If your application requires all objects to be available immediately, Multi-Region Access Points may not be the best solution; or at least the application should be able to handle 404 errors, as in the retry sketch below.
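As a rough sketch of how an application might tolerate that lag, a simple retry loop around the GET is often enough. The ARN placeholder below is the same one used throughout this project:
# Retry the download a few times to ride out replication lag
for attempt in 1 2 3 4 5; do
  if aws s3 cp s3://{insertyourarnhere}/test4.file .; then
    echo "Downloaded on attempt $attempt"
    break
  fi
  echo "Not replicated yet, retrying in 30 seconds..."
  sleep 30
done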
Step Four: Clean Up AWS
- Head to the Multi-Region Access Points page in the S3 console and delete the Multi-Region Access Point. You will need to wait for this to complete before you can delete your buckets
- Empty each S3 bucket and delete them (rough CLI equivalents are sketched below)
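If you would rather clean up from the CLI, the rough equivalents look like this. The account ID, access point name, and bucket names are placeholders, and because versioning is enabled you will need to remove all object versions before the buckets can be deleted (the console's 'Empty' button handles this for you):
# Delete the Multi-Region Access Point (asynchronous, like creation)
aws s3control delete-multi-region-access-point --account-id 111122223333 --region us-west-2 --details Name=my-mrap
# Once the buckets are empty (including all object versions), delete them
aws s3 rb s3://my-mrap-bucket-1111
aws s3 rb s3://my-mrap-bucket-2222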
This concludes our mini-project on AWS Multi-Region Access Points.