Dibendu Saha

From CLF-C02 to Real-World AWS Solutions: Hosting my Website on AWS with S3, CloudFront and Beyond

A few months ago, if you had asked me about hosting a website on AWS, I would probably have given you a vague, half-baked answer. Little did I know, I was still far from mastering cloud deployment (and, in some ways, I still am). But to truly crack the code, your knowledge must be demonstrable. At the very least!

This realization led me to start putting my knowledge into practice by building real-world AWS solutions. And what better way to begin than by migrating my own portfolio website from Azure to AWS?

Choosing the Hosting Approach

AWS provides multiple ways to host an application, depending on your use case and budget. You could use EC2 instances, Elastic Beanstalk, ECS, or even AWS Amplify for frontend applications. However, for a simple React-based portfolio website, I wanted a serverless and highly available solution.

That’s where Amazon S3 shines. It’s an object storage service that lets us host static websites with minimal effort. But hosting the files isn’t enough — we also need CloudFront for content delivery, Lambda for serverless code execution, and Certificate Manager for SSL/TLS certificate management.

Prerequisites

In order to proceed, you need –

  • An AWS root user account
  • An AWS IAM user account

Think of the AWS root user account as a master account that has full administrative access to all AWS services and resources. AWS recommends creating an IAM user and granting it the appropriate privileges to perform all day-to-day activities.

So, make sure you're logged in as the IAM user to proceed with the tasks below.

Let’s walk through it step by step, in detail.

S3 Static Website

This approach, at its bare bones, is pretty straightforward —

  1. Create the S3 bucket.
    • Define permissions.
    • Enable Static Website Hosting.
  2. Upload the build files.
  3. Access the application.

Let's go over it -

1. Create the bucket

  • From the AWS console, I searched for 'S3' and clicked on Create Bucket.
  • Entered the bucket name; this had to be globally unique, not just within my selected region.
  • Block Public Access — Turned this off. In other words, enabled public access, for the obvious reason that my website should be accessible from the internet.
  • Left other options as is and created the bucket.
  • Once created, I went to the newly created bucket and -

    1. Properties tab.
      • Enabled S3 static website hosting.
      • Hosting type – Host a static website.
      • As for the Index document, I set it to index.html (or whatever your application entry page is).
      • Error document is optional, but I set it to index.html as well. This serves the purpose of URL redirection, since the application routes are managed internally by the React application. Saved the changes.
    2. Next, I headed over to the Permissions tab.

      • This is where I set the Bucket Policy to allow read access to my S3 objects.
      • Added a bucket policy along the lines of the sketch below.
      • Make sure to append the wild-card path (/*) at the end of your resource. This allows access to all objects inside your bucket.
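For reference, a public-read bucket policy typically looks like this (the bucket name is a placeholder; swap in your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
```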

        But you already turned off the Block Public Access settings. Why do you require an extra policy to allow access?

Well, turning off the Block Public Access settings just enables us to define the actual policy (or the actual permissions) without which we would end up with the following error –
Bucket policy error

2. Upload the build files
Once the policy was added, all the necessary configuration for my bucket was in place, and all that remained was to upload the build files. I eventually automated the deployments with CI/CD using GitHub Actions (sketched below).
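Here's roughly what such a workflow could look like. This is a hedged sketch rather than my exact pipeline; the bucket name, region, secret names, and build output folder (build/ for Create React App, dist/ for Vite) are placeholders to adjust:

```yaml
# Hypothetical GitHub Actions workflow; bucket name, region and secrets are placeholders.
name: Deploy to S3
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Build the React app
        run: |
          npm ci
          npm run build

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Sync build output to the bucket
        run: aws s3 sync build/ s3://your-bucket-name --delete
```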

3. Access the application
Like I mentioned at the beginning, at its bare bones, my website was now ready. When I hit the URL generated and displayed at the very end of the Properties tab, my website was up and running –

Bucket website endpoint

And voila! Or is it?

CloudFront

Do you notice something odd with the URL?
S3 website endpoint URL
Yep, that's a strange-looking one. Of course, I did not want that. Plus, the connection isn't secure — there's no valid TLS certificate assigned.

That's where CloudFront makes its grand entry. Using CloudFront as a proxy in front of my S3 bucket helped me achieve two things –

  1. It mitigated the issue of an insecure connection with a free TLS certificate.
  2. CloudFront is Amazon's CDN service, so it deploys and caches my website at edge locations, optimizing load times for end users.

Let's get started –

  • From the AWS console, I navigated to 'CloudFront' > Clicked Create distribution.
  • For the Origin domain, I selected the bucket endpoint that I created.
  • Since the bucket had been configured for static website hosting, I received a prompt to use the S3 website endpoint instead of the bucket endpoint.

Use website endpoint

  • I switched to the website endpoint.

Website endpoint

  • Under Default cache behavior, I set the Viewer protocol policy to Redirect HTTP to HTTPS.
  • You can enable or disable Web Application Firewall (WAF) based on your requirements but for my website, I did not need it — plus enabling it incurs additional cost.
  • Under Settings, I set the Default root object to index.html.
  • Left all other configurations at their defaults.
  • Created the distribution.

Once the CloudFront distribution was up and ready, I was able to use the distribution URL to open my website. The URL looks something like this – https://xxxxxxxxx.cloudfront.net

Notice how the URL is HTTPS-enabled by default. Much better, but still not ideal. Don't worry, I'll circle back to this later.

Tracking Visitor Counts with DynamoDB and Lambda

Next, I wanted to track the visitor count for my website and display it on the site. For this, I used –

  • Amazon DynamoDB - Database to store the visitor count data.
  • AWS Lambda - Serverless code that updates and retrieves this data.
  • Amazon API Gateway - Route requests to the Lambda function.

DynamoDB

DynamoDB is Amazon's fully managed NoSQL database, which stores data as key-value pairs. For my purpose, I store the current visitor count and the created date, i.e., the date and time of the visit.

I could just make an initial entry and keep updating the count as and when there are new visits, but instead, I insert a new record each time a user visits my website, so that the table acts as a ledger/transaction log (see the example items below).
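For illustration, two such items might look like this (the values are made up):

```json
[
  { "TotalCount": 101, "CreatedDate": "2025-01-14T09:21:05Z" },
  { "TotalCount": 102, "CreatedDate": "2025-01-14T11:47:32Z" }
]
```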

  • From the AWS console, I navigated to 'DynamoDB'.
  • On the left pane, Tables > Create table.
  • Entered the Table name and the Partition key. Consider this as the primary key.
  • I'm going to name it –
    • Table name – PortfolioVisitorCount
    • Partition key – TotalCount of type Number.
  • I skipped the Sort key, but I wanted a second attribute (column) — CreatedDate. DynamoDB doesn't require non-key attributes to be defined up front; it gets created automatically on the first write that includes it.
  • Left the rest of the settings at their defaults and clicked Create table.

Next up,

Lambda

AWS Lambda is a serverless computing service from AWS that runs code in response to events or triggers. To insert and retrieve records in DynamoDB, I needed a Lambda function that would talk to my database.

Here's how I created the Lambda function –

  • Navigated to 'Lambda' in the console.
  • Create function.
  • You can select any of the three options for the source code, based on what suits you –
    • Author from scratch
    • Use a blueprint
    • Container image
  • I chose the second option and selected the blueprint below – Lambda blueprint
  • Provided the Function name.
  • For the Execution role, I chose — Create a new role from AWS policy templates.
  • Entered the Role name and selected the Policy template – Simple microservice permissions. Policy template
  • We're provided with default source code for a basic CRUD operation.
  • You can edit the source code to fit your requirements. For the purpose of this blog, the actual source code is out of scope.
  • I removed the API Gateway trigger configuration for now — I'll come back to it at a later stage.
  • And Create function.

Once the Lambda function was created, I navigated to it. Under the Code tab, I could see the function's source code, modify it as per my requirements, and test it. A rough sketch of what such a handler might look like follows below.
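To be clear, this is only a minimal sketch, not my actual code. It assumes a Node.js runtime with AWS SDK v3 (bundled with recent Lambda runtimes), the PortfolioVisitorCount table from earlier, and the proxy-style response shape discussed in the API Gateway section below:

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import {
  DynamoDBDocumentClient,
  PutCommand,
  ScanCommand,
} from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = "PortfolioVisitorCount";

// Counts items in the table; fine at this scale, though a single
// counter item would avoid a full Scan.
async function currentCount(): Promise<number> {
  const res = await client.send(
    new ScanCommand({ TableName: TABLE, Select: "COUNT" })
  );
  return res.Count ?? 0;
}

// Lambda proxy integration expects this response shape.
const respond = (statusCode: number, body: unknown) => ({
  statusCode,
  headers: {
    "Content-Type": "application/json",
    "Access-Control-Allow-Origin": "*",
  },
  body: JSON.stringify(body),
});

export const handler = async (event: any) => {
  if (event.httpMethod === "POST") {
    // Insert a new item per visit, so the table acts as a ledger.
    const count = (await currentCount()) + 1;
    await client.send(
      new PutCommand({
        TableName: TABLE,
        Item: { TotalCount: count, CreatedDate: new Date().toISOString() },
      })
    );
    return respond(200, { count });
  }
  // Any other method: just return the current count.
  return respond(200, { count: await currentCount() });
};
```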

Alternatively, you can develop the code as a function on your local machine and upload it from the top-right corner –

Upload code

API Gateway

Now that I had my Lambda function in place, I needed a way to trigger it. While I could create a Function URL for it and expose that over the internet, it's generally not a recommended approach, since it lacks basic features like rate limiting, request validation, and fine-grained authentication and authorization.

I used Amazon API Gateway for this purpose. It acts as a "front door" to my Lambda function. This way, if I add more Lambda functions in the near future, this API Gateway can front them all.

Let's get started...

  • Navigated to API Gateway from the console > Create API.
  • I used REST API > Build. REST API
  • Selected New API, provided the API name (Description is optional).
  • Chose Regional as the API endpoint type.
  • Create API.
  • Once created > Create resource on the left-pane. Create resource
  • In the Create resource page –
    • Set the Resource path and Resource name.
    • Enabled CORS. Create resource page
    • Created the resource.
  • Once the resource was created, I went to the newly-created resource on the left-pane, then Create method on the right-pane.

Create method

  • In the Create method page –

    • Chose the Method type as ANY.
    • Chose the Integration type as Lambda function.
    • Enabled Lambda proxy integration.

      But why the proxy integration?

    In layman's terms — proxy integration makes API Gateway act as a pass-through, forwarding the entire HTTP request to the Lambda function as-is (method, headers, query parameters, body, etc.). That gives my Lambda function more flexibility and control over the request and response headers, and it becomes the Lambda function's responsibility to process the request and return a properly formatted response. A trimmed sample of what the function receives follows this list.

    • Provided the Lambda function.
    • Left other settings to its defaults.
    • Create method. Create method page
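For a concrete picture, a (heavily trimmed) Lambda proxy event for a REST API looks roughly like this; the /visits resource path is a hypothetical example:

```json
{
  "resource": "/visits",
  "path": "/visits",
  "httpMethod": "POST",
  "headers": { "content-type": "application/json" },
  "queryStringParameters": null,
  "body": null,
  "isBase64Encoded": false
}
```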

That completed my API Gateway setup. But not just yet!

It's just the setup — I still had to deploy it.

But why the extra step of deploying? Where was this step with all the previous AWS resources?

Amazon API Gateway separates design from deployment. Since an API has a lot of moving parts, it supports the concept of stages (dev, test, prod, etc.). This way, live systems are not affected while I test my current API changes, before deploying them.

Let's deploy the API Gateway –

  • Clicked Deploy API on the top-right corner.
    Deploy API

  • Selected New stage for Stage.

  • Provided prod for the Stage name.

  • Create stage.

Now that I had deployed my API to the newly created stage, I could see it under Stages on the left pane. The Invoke URL (the API endpoint to be consumed) is shown there –
Invoke URL

To make sure that my API Gateway was connected to my Lambda function, I tested the Lambda integration –

  • Navigated to Resources.
  • Selected the method.
  • On the right-pane, clicked the Test tab.
  • Filled in the details > Test. Test Lambda

As an additional step, I tested the API Gateway itself by calling its Invoke URL from Postman.

Next, I consumed this endpoint from my React codebase. Like I said, the client-side code is out of scope for this blog (a small sketch follows below), but by this point my website was integrated with the API Gateway.
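Purely as an illustration, calling the endpoint from React could look something like this; the /visits path and the response shape are assumptions, not my actual code:

```typescript
// Hypothetical Invoke URL + resource path; replace with your own.
const API_URL =
  "https://xxxxxxxxx.execute-api.us-east-1.amazonaws.com/prod/visits";

// Records a visit and returns the updated count for display.
export async function recordVisit(): Promise<number> {
  const res = await fetch(API_URL, { method: "POST" });
  if (!res.ok) throw new Error(`Visitor count request failed: ${res.status}`);
  const data: { count: number } = await res.json();
  return data.count;
}
```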

Right — now that I had all the pieces assembled and a perfectly running website, it was time for me to share it online with friends & family.

So, I shared my website URL — https://d45uiv9deg2fhr.cloudfront.net

Only to realize that it was gibberish, hardly professional — not one person would bother remembering this URL.

And so, I wanted a custom domain for my website. You can purchase one from Amazon Route 53 or any other third-party registrar, but I had purchased mine from namecheap.com.

Why not from Amazon Route53?

Remember when I mentioned in the introduction that I was migrating my portfolio website from Azure to AWS? Yeah, I had purchased my domain back when my website was hosted on Azure, and hence I did not bother transferring it to Route 53.

Now that I already had a domain, the next step was to request a public certificate from AWS Certificate Manager (ACM), which issues the publicly trusted SSL/TLS certificates needed to secure my website.

In case you're wondering —

But you already have an SSL certificate from CloudFront.

Yes, but only if you use the CloudFront-generated URL. If you plan to use a custom domain, you need to request an SSL/TLS certificate separately.

Certificate Manager

Now that I had to request a certificate, the first important step was to switch my region to us-east-1 (N. Virginia).

Why?

Since CloudFront is a global service, it isn't tied to a particular region, unlike most AWS resources. Instead, it requires certificates to be issued in the us-east-1 region and uses them to distribute content securely across all edge locations worldwide. Below is a resource from AWS that can help –
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-and-https-requirements.html

Once set, I requested a public certificate –

  • AWS console > Certificate Manager > Request.
  • Selected the Request a public certificate option.
  • In the Request public certificate page –
    • Entered my domain and sub-domain names.
    • Set Validation method as DNS validation.
    • Left the rest of the settings at their defaults. Request certificate
    • Clicked on Request.
  • The certificate was now requested; however, it was not yet validated. I followed the steps for my domain registrar (namecheap.com) on how to validate the certificate.
  • In essence, I added two records in the Advanced DNS section of my domain registrar, using the CNAME name and CNAME value generated in AWS Certificate Manager –

    CNAME data

  • Once added, ACM triggered a validation, with a successful validation looking like this –

    ACM successful validation

Once validated, I went....

Back to CloudFront...

  • And to my distribution.
  • Under the General tab > Settings card > Edit. Update CloudFront domain
  • Under the Edit settings page –
    • Added the Alternate domain name.
    • Chose the Custom SSL certificate which I just created.
    • Selected TLSv1.2_2021 as the Security policy. CloudFront edit fields
    • Save changes.

And there you have it — a secure website with my own custom domain.

Implementing File Downloads

Since the website serves as my portfolio, I also implemented the AWS infrastructure to support resume downloads.

The resources used for this are –

  1. S3 bucket - Holds the resume file.
  2. AWS Lambda - Serverless code that retrieves the file and serves it for download.
  3. Existing Amazon API Gateway - Route requests to my Lambda function.

I'll be concise here, since the approach for this S3 bucket was pretty much the same as for the previous one –

  • Except that I kept the Static website hosting disabled.
  • For the bucket policy, I granted read access, but only to the Lambda function (a sketch follows this list).
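As an illustration, a bucket policy restricted to the function's execution role could look like this; the account ID, role name, and bucket name are all placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLambdaRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/resume-download-lambda-role"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-resume-bucket/*"
    }
  ]
}
```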

As for the Lambda function, the source code fetches the file from the S3 bucket and passes it back through the existing API Gateway, which, in turn, sends the response to the client. A rough sketch follows below.
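Again, this is a minimal sketch under a few assumptions (hypothetical bucket and object names, a Node.js runtime, and binary media types enabled on the API so the base64 body is decoded on the way out):

```typescript
// Hypothetical bucket/object names; not my actual code.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

export const handler = async () => {
  const res = await s3.send(
    new GetObjectCommand({ Bucket: "my-resume-bucket", Key: "resume.pdf" })
  );
  const bytes = await res.Body!.transformToByteArray();

  // With Lambda proxy integration, binary payloads must be returned
  // base64-encoded with isBase64Encoded set to true.
  return {
    statusCode: 200,
    headers: {
      "Content-Type": "application/pdf",
      "Content-Disposition": 'attachment; filename="resume.pdf"',
    },
    isBase64Encoded: true,
    body: Buffer.from(bytes).toString("base64"),
  };
};
```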

AWS Architecture

A picture speaks louder than words.

Let's sum up what I did so far with an architectural diagram –

Architecture diagram

Terraform: Automating AWS Infrastructure

When I started building out this AWS ecosystem, there were numerous occasions where I had to delete and recreate AWS resources owing to various factors — sometimes it was just to try out a different approach; other times, it was to rectify a configuration mistake by recreating the resource.

Manually deleting and recreating those resources is quite a cumbersome process, and more often than not, people simply give up at the thought of the time they would waste in the effort.

A better approach to creating and managing this infrastructure is Infrastructure-as-Code (IaC), where we write (or configure) the AWS resources as code. That way, we have the code ready for all the AWS resources, and resource creation (or deletion) is just a script away.

While AWS provides its own IaC tool — AWS CloudFormation — it is limited to AWS.

A popular option in this space is Terraform, which can be used across multiple cloud providers. I used Terraform's HCL (HashiCorp Configuration Language) to configure my AWS resources.

This way, I was able to test the AWS resources and their integration multiple times, with various configurations, without having to hand-craft the resources from the console each time.

Let's reserve the details for this in another blog.

Final Thoughts

Hosting a website on AWS might seem daunting at first, but breaking it down into smaller steps makes it manageable. By leveraging S3, CloudFront, DynamoDB, Lambda, and API Gateway, you can build a scalable, secure, and performant web application.

If you’re just starting with AWS, I highly recommend getting hands-on experience by building projects like this. It’s the best way to solidify your knowledge and stand out in the job market.

Drop me a 'Hi'

Have questions about my approach to AWS solutions?
Let’s connect on LinkedIn — I’d love to hear from you!

Explore My Coding Space...
Check out my projects and contributions on GitHub.

Visit Me...
Feel free to explore more about me on my website.

Happy Learning. Cheers!
