Need for static content
Static content remains the fastest way to deliver information to browsers. As a result, many vendors now focus on optimizing it through geographic distribution, caching, CDNs, and more. Amazon is a key player in this space, offering robust solutions for efficient static content delivery.
S3 bucket, brute force approach
At the core of AWS’s storage technology is the S3 bucket, a natural choice for hosting static content and delivering it directly to the browser.
My personal preference is to create and manage S3 buckets for static content manually via the AWS console, without scripting it in CDK -- it gives me a bit more peace of mind about the bucket lifecycle (when it's created and deleted).
To get started, sign into your AWS console, and go to the S3 bucket section.
- Create a bucket with the name of your host. In my case it will be wisaw.com.
- In the Properties tab, enable Static website hosting.
- In the Permissions tab, edit Block public access and make sure all public access is allowed by unchecking all the check boxes.
- In the Route 53 section of the console, create or edit a record of type A for your domain (wisaw.com for me), and configure it as an alias to the S3 bucket (it should be available in the dropdown list).
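If you do prefer to script these console steps after all, a minimal CDK sketch could look like the following. This is only an illustration under assumptions: the stack and construct names (StaticSiteStack, SiteBucket, Zone, AliasRecord) are made up here, and HostedZone.fromLookup requires an explicit account/region env on the stack.

```typescript
import * as cdk from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as route53 from "aws-cdk-lib/aws-route53";
import * as targets from "aws-cdk-lib/aws-route53-targets";

// Hypothetical stack; the env (account/region) must be set on the stack
// props for HostedZone.fromLookup to work.
class StaticSiteStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Public website bucket, mirroring the console steps above.
    // BLOCK_ACLS still blocks ACL-based public access, while allowing the
    // public-read bucket policy that publicReadAccess adds.
    const bucket = new s3.Bucket(this, "SiteBucket", {
      bucketName: "wisaw.com",
      websiteIndexDocument: "index.html",
      publicReadAccess: true,
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ACLS,
    });

    // Alias A record pointing the domain at the bucket website endpoint.
    const zone = route53.HostedZone.fromLookup(this, "Zone", {
      domainName: "wisaw.com",
    });
    new route53.ARecord(this, "AliasRecord", {
      zone,
      target: route53.RecordTarget.fromAlias(
        new targets.BucketWebsiteTarget(bucket)
      ),
    });
  }
}
```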
That was easy. Your site should be working now, with the content delivered straight from S3 to your browser. However, I'm really not comfortable leaving the S3 bucket wide open for public access -- there has to be a better, more secure way.
Shooting for perfection -- closing all public access to the S3 bucket
Go ahead and disable all public access to your bucket in the console.
Before we create the CloudFront distribution in our CDK script, we need to get references to a few additional resources.
Reference the S3 bucket:
const webAppBucket = s3.Bucket.fromBucketName(
  this,
  `wisaw.com`,
  `wisaw.com`,
);
A few more things are needed for CloudFront:
// Use the ACM certificate
const cert = acm.Certificate.fromCertificateArn(
  this,
  "my_cert",
  "arn:aws:acm:us-east-1:963958500685:certificate/cf8703c9-9c1b-4405-bc10-a0c3287ebb7e"
);

// Create a cache policy
const basicCachePolicy = new cloudfront.CachePolicy(this, 'BasicCachePolicy', {
  defaultTtl: cdk.Duration.days(10),
  minTtl: cdk.Duration.days(10),
  maxTtl: cdk.Duration.days(10),
  enableAcceptEncodingGzip: true,
  enableAcceptEncodingBrotli: true,
  queryStringBehavior: cloudfront.CacheQueryStringBehavior.all(),
  cookieBehavior: cloudfront.CacheCookieBehavior.all(),
});

// Create an origin request policy that forwards all cookies and query strings
const allForwardPolicy = new cloudfront.OriginRequestPolicy(this, 'AllForwardPolicy', {
  cookieBehavior: cloudfront.OriginRequestCookieBehavior.all(),
  queryStringBehavior: cloudfront.OriginRequestQueryStringBehavior.all(),
  headerBehavior: cloudfront.OriginRequestHeaderBehavior.none(),
});
And now, create a new CloudFront distribution in your CDK script:
const distribution = new cloudfront.Distribution(this, "wisaw-distro", {
  priceClass: cloudfront.PriceClass.PRICE_CLASS_100,
  defaultBehavior: {
    origin: cloudfront_origins.S3BucketOrigin.withOriginAccessControl(webAppBucket),
    compress: true,
    cachePolicy: basicCachePolicy,
    originRequestPolicy: allForwardPolicy,
    viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
    edgeLambdas: [
      {
        eventType: cloudfront.LambdaEdgeEventType.VIEWER_REQUEST,
        functionVersion: redirectLambdaEdgeFunction.currentVersion,
        includeBody: true,
      },
    ],
  },
  certificate: cert,
  domainNames: ["www.wisaw.com", "wisaw.com"],
  minimumProtocolVersion: cloudfront.SecurityPolicyProtocol.TLS_V1_2_2021,
  errorResponses: [
    {
      httpStatus: 403,
      responseHttpStatus: 200,
      ttl: cdk.Duration.days(365),
      responsePagePath: "/index.html",
    },
    {
      httpStatus: 404,
      responseHttpStatus: 200,
      ttl: cdk.Duration.days(365),
      responsePagePath: "/index.html",
    },
  ],
});
// Output the Distribution ID to use in the OAC bucket policy
new cdk.CfnOutput(this, "CloudFrontDistributionId", {
  value: distribution.distributionId,
  description: "Use this Distribution ID in the OAC bucket policy for wisaw.com",
});
Final steps
Add an alias A record to your Route 53 hosted zone referencing the CloudFront distro:
Add a bucket policy in the Permissions tab for your S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudfront.amazonaws.com"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::wisaw.com/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::963958500685:distribution/E37CLMKVO7KBVG"
        }
      }
    }
  ]
}
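If you ever recreate the distribution and the ID changes, it can be handy to generate this policy document instead of hand-editing JSON. Here is a small sketch; oacBucketPolicy is an illustrative helper name, not an AWS API:

```typescript
// Hypothetical helper: builds the OAC bucket policy shown above,
// given a bucket name and a CloudFront distribution ARN.
function oacBucketPolicy(bucketName: string, distributionArn: string) {
  return {
    Version: "2012-10-17",
    Statement: [
      {
        Sid: "AllowCloudFrontServicePrincipal",
        Effect: "Allow",
        Principal: { Service: "cloudfront.amazonaws.com" },
        Action: "s3:GetObject",
        Resource: `arn:aws:s3:::${bucketName}/*`,
        Condition: {
          StringEquals: { "AWS:SourceArn": distributionArn },
        },
      },
    ],
  };
}

// Print the policy, ready to paste into the Permissions tab.
const policy = oacBucketPolicy(
  "wisaw.com",
  "arn:aws:cloudfront::963958500685:distribution/E37CLMKVO7KBVG"
);
console.log(JSON.stringify(policy, null, 2));
```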
You can get the CloudFront distribution ARN from the AWS console.
Sometimes there is still a need to open public access... briefly.
The static application I host is a React.js single-page app, and the static content of the site doesn't change frequently. But when I need to redeploy the app (synchronize the bucket), I go to the console and enable public access for a few minutes. Just don't forget to disable it again once you are done with your updates. I'm sure there is a better way -- feel free to add a comment and share some cool techniques for doing it.
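One option, if you'd rather not click through the console each deploy, is to script the toggle. A minimal sketch: publicAccessBlockConfig is a hypothetical helper name, but the payload shape is what S3's PutPublicAccessBlock API expects.

```typescript
// Hypothetical helper: builds the PublicAccessBlockConfiguration payload
// you would pass to S3's PutPublicAccessBlock API (e.g. via the AWS SDK
// or `aws s3api put-public-access-block`) around a deploy.
function publicAccessBlockConfig(blocked: boolean) {
  return {
    BlockPublicAcls: blocked,
    IgnorePublicAcls: blocked,
    BlockPublicPolicy: blocked,
    RestrictPublicBuckets: blocked,
  };
}

// Before syncing the bucket: open it up.
const openConfig = publicAccessBlockConfig(false);
// After the deploy finishes: lock it back down.
const closedConfig = publicAccessBlockConfig(true);
console.log(openConfig, closedConfig);
```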
What if things don't work?
The example described here is a basic use case which should get you going. In real life your CDK script is likely to be a lot more complex, depending on your application. For instance, my app has a few Lambda@Edge functions which write to the same bucket. These functions do not need public access, since they access the bucket from within your own AWS account.
Usually you simply have to add the following line to your CDK script:
// Grant the Lambda function permissions to read and write to the S3 bucket
webAppBucket.grantReadWrite(generateSiteMap_LambdaFunction);
While closing public access to my S3 bucket, I ran into a situation where my generateSiteMap_LambdaFunction was not able to update the sitemap.xml stored in the bucket and kept getting access denied. I researched tons of documentation on the web and spent a few days trying different suggestions, like updating bucket and function policies, configuring IAM access control, etc. It turned out to be a silly bug that was driving me towards these complex solutions. The function writing to the S3 bucket was passing one of the parameters:
ACL: "public-read"
Since all public access was disabled, the function was failing. Removing that parameter fixed the problem. I was able to solve this fairly easily while vibe coding with GitHub Copilot. So, if you feel stuck and find yourself having to implement complex solutions for simple use cases -- there is probably a simpler answer.
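To make the fix concrete, here is a hedged sketch of what the corrected write parameters might look like; putObjectParams is a hypothetical helper, not part of my actual Lambda code:

```typescript
// Illustrative sketch of the fix: build S3 PutObject parameters without
// the ACL field, so the write succeeds even with Block Public Access on.
function putObjectParams(bucket: string, key: string, body: string) {
  return {
    Bucket: bucket,
    Key: key,
    Body: body,
    ContentType: "application/xml",
    // ACL: "public-read"  -- removed: with all public access blocked,
    // this parameter makes the PutObject call fail with Access Denied.
  };
}

const params = putObjectParams("wisaw.com", "sitemap.xml", "<urlset></urlset>");
console.log(params);
```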
Better safe than sorry.
Even if you think you have closed all public access to all your buckets, it's a good idea to periodically check. AWS offers an out-of-the-box tool for this, called IAM Access Analyzer for S3. Running it every once in a while is probably not a bad idea.
The complete code for this post --> https://github.com/echowaves/WiSaw.cdk
The web app hosted on S3 bucked --> https://wisaw.com
Another post in this series talking about redirects in AWS --> https://dev.to/dmitryame/redirect-www-to-root-in-aws-1ee4
Happy Hacking.