
Joshua Gilless

Originally published at joshuagilless.com

How To Secure Your AWS S3 Bucket With Cloudflare

I've noticed a lot of people are hosting their website on AWS S3, with Cloudflare in front of it. It's a great option because it's cheap and easy to maintain for a single developer. A static site is fairly secure by default due to the small attack surface.

S3 lets you host websites from the bucket, with the condition that your bucket name matches the domain of the site you're hosting. For example, my website is https://www.joshuagilless.com, so to host a website at that location, the S3 bucket would be www.joshuagilless.com. This means that everything you put in the bucket has a predictable URL pattern. AWS has a fantastic guide for setting up static website hosting.

In that guide, you're instructed to give your bucket public read access. With public read access and a predictable URL pattern, any old joker off the street can bypass your website and go straight to your S3 bucket. The thing is, if you go on to the next guide in the series, setting your bucket up with a custom domain, it still instructs you to apply a public-read bucket policy.

Next in the Amazon guide series, you have instructions to add Amazon CloudFront. CloudFront is a CDN, and an excellent one at that; all of the tips in this post apply to CloudFront as well as Cloudflare. The thing about the instructions is that they don't mention removing the public read access, so even the official guide is missing this piece of the security puzzle.

Security

Locking down the ACL (Access Control List) and the bucket policy so that only Cloudflare can read from the bucket forces every request through Cloudflare, which is what actually gives us the protections Cloudflare promises.

Imagine a scenario where someone decides they want to cause you a large hosting bill and starts requesting objects directly from your S3 bucket. They could find the largest file you host and request it over and over and over until you start getting a hefty bandwidth bill from AWS. Cloudflare has DDoS protection that would mitigate this, but only if requests actually go through Cloudflare.

So how do you do it?

  1. Upload things without allowing public read access.
  2. Remove the overall bucket rule allowing everything public read access.
  3. Create a bucket policy that only allows Cloudflare's IP addresses.

By default, uploads don't get the public-read ACL, but a lot of guides mention overriding it so that you can access the files from a handy dandy URL. It's important not to override this; that way, when someone tries to access a file directly, they get a 403 Forbidden error. If you're making a new bucket, just go with the defaults.
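
If you upload with an SDK and simply don't pass an ACL, the object stays private. Here's a minimal sketch of what that looks like with boto3, assuming a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# No ACL argument means the object keeps the default (private) ACL,
# so requesting it directly from S3 returns 403 Forbidden.
s3.upload_file(
    Filename="index.html",
    Bucket="www.example.com",  # replace with your bucket name
    Key="index.html",
    ExtraArgs={"ContentType": "text/html"},
)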

To create a bucket policy, AWS gives you a walkthrough on adding bucket policies.

So what we need to do is give it a statement for PublicReadGetObject:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<your-domain>/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "<list of allowed IP addresses>"
                    ]
                }
            }
        }
    ]
}

There are two things you need to replace here:

  1. What's between the "aws:SourceIp" brackets.
  2. <your-domain> with your actual domain name. For example, I would replace <your-domain> with www.joshuagilless.com

To get the current list of Cloudflare IP ranges, you can visit https://www.cloudflare.com/ips-v4 and https://www.cloudflare.com/ips-v6 and add both lists.

I learned that we can mix IPv4 and IPv6 addresses from the Amazon docs on bucket policies, in the section titled "Allowing IPv4 and IPv6 Addresses". It doesn't matter which order you put them in, as long as they're formatted correctly.
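
If you'd rather not copy the ranges by hand, a small script can fetch both lists and apply the policy for you. This is just a sketch, assuming the requests and boto3 libraries and a placeholder bucket name:

import json

import boto3
import requests

BUCKET = "www.example.com"  # replace with your bucket / domain name

# Fetch the current Cloudflare ranges; IPv4 and IPv6 can be mixed freely.
ips = []
for url in ("https://www.cloudflare.com/ips-v4", "https://www.cloudflare.com/ips-v6"):
    ips += requests.get(url, timeout=10).text.split()

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {"IpAddress": {"aws:SourceIp": ips}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))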

Just Give Me Something to Copy-Paste

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<your-domain>/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "173.245.48.0/20",
                        "103.21.244.0/22",
                        "103.22.200.0/22",
                        "103.31.4.0/22",
                        "141.101.64.0/18",
                        "108.162.192.0/18",
                        "190.93.240.0/20",
                        "188.114.96.0/20",
                        "197.234.240.0/22",
                        "198.41.128.0/17",
                        "162.158.0.0/15",
                        "104.16.0.0/12",
                        "172.64.0.0/13",
                        "131.0.72.0/22",
                        "2400:cb00::/32",
                        "2606:4700::/32",
                        "2803:f800::/32",
                        "2405:b500::/32",
                        "2405:8100::/32",
                        "2a06:98c0::/29",
                        "2c0f:f248::/32"
                    ]
                }
            }
        }
    ]
}

And again, just replace <your-domain> with your actual bucket name and website URL.
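
To confirm the policy is working, you can request the same object both through Cloudflare and directly from the S3 website endpoint; the direct request should now return 403. Here's a quick check, assuming placeholder URLs (the S3 website endpoint format varies by region):

import urllib.request
from urllib.error import HTTPError

def status(url):
    """Return the HTTP status code for a GET request."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except HTTPError as err:
        return err.code

# Through Cloudflare: expect 200.
print(status("https://www.example.com/index.html"))
# Directly against the S3 website endpoint: expect 403.
print(status("http://www.example.com.s3-website-us-east-1.amazonaws.com/index.html"))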

Any time you're using a CDN in production, you can do something similar to limit access to origins you don't want exposed. The same concept of only allowing a CDN's IP addresses applies to any resource, not just the ones in an S3 bucket.

I hope you find this useful. It took me a while to realize I should be doing this, so I left my bucket unsecured for way too long. Nobody likes an unsecured bucket.
