Thakur Rishabh Singh

Portfolio/Resume Serverless Website (Cloud Resume Challenge)

This post is about building a serverless website for your resume and hosting it on AWS.

Disclaimer: if you want to finish the challenge on your own, DO NOT read this post. It'll spoil the fun ;)

About the challenge

This website was built as part of the Cloud Resume Challenge by @forrestbrazeal .

Certification

A certification is a requirement of the challenge, and it was easy for me as I had already obtained the AWS Certified Solutions Architect Associate certification in December 2020; I pursued this challenge to gain hands-on experience with the AWS cloud.

[Badge: AWS Certified Solutions Architect Associate]

Contents:

  1. Why serverless?
  2. Website Architecture
  3. Choosing a source control
  4. Choosing a CI/CD mechanism
  5. Infrastructure as code with Terraform
  6. CI/CD for Terraform
  7. Implementing backend with Python
  8. CI/CD for backend
  9. Building the frontend
  10. CI/CD for frontend
  11. Conclusion

1. Why Serverless?

The primary reasons are:
  1. Cost
  2. Scalability
  3. Availability
  4. Performance

The serverless paradigm offers a way to pay only for what you use. Moreover, the developer need not provision anything beforehand; AWS takes care of that. Despite the low cost, it still scales to varying workloads on demand. This makes it very suitable for experimenting with cloud technologies outside an enterprise organization while still meeting real-world challenges. I literally pay $0 a month for the website itself; my only costs are a custom domain ($1.50/year) and a Route 53 hosted zone ($0.50/month), roughly $7.50 a year in total.

The main downside is that the infrastructure becomes increasingly complex and difficult to manage as it grows. However, the paradigm is still maturing, and the future may hold something promising. There is also a possibility that it may succumb to the containerization paradigm (highly debatable).

2. Website Architecture

[Architecture diagram]

The above architecture works as follows:

  1. A user requests the webpage through the browser.
  2. The browser sends the request to Route 53, which resolves the DNS name to the nearest CloudFront edge location.
  3. CloudFront forwards the request to the S3 bucket that contains the website files and retrieves its contents.
  4. The S3 bucket with the frontend code is protected by an Origin Access Identity (OAI), which prevents direct access to the bucket.
  5. The JavaScript code in the frontend sends GET and POST requests to API Gateway to retrieve the number of visitors stored in the database (see the sketch after this list).
  6. API Gateway forwards the request to Lambda as JSON.
  7. Lambda identifies the type of request and performs a get/put operation on DynamoDB to store/retrieve the number of visitors.
  8. The visitor count is then displayed on the website.
  9. A Git repository on GitHub provides version control and CI/CD through GitHub Actions.
  10. Terraform deploys the AWS infrastructure.
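To make the API contract in steps 5-7 concrete, here is a minimal sketch of exercising the visitor counter from a script. The endpoint URL and the response shape are assumptions for illustration; the real values come from the API Gateway deployment.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint; the real URL comes from the API Gateway deployment.
API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/visitors"

# POST asks Lambda to update the stored count (steps 6-7)...
resp = requests.post(API_URL, timeout=5)
resp.raise_for_status()

# ...and GET reads the current count back for display (step 8).
count = requests.get(API_URL, timeout=5).json()  # response shape is assumed
print("Visitor count:", count)
```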

3. Choosing a source control

The version control system used for this project is GitHub. The development workflow consists of two branches: master and dev.

4. Choosing a CI/CD mechanism

Continuous integration and continuous delivery (CI/CD) is achieved using GitHub Actions: YAML workflow files automate the build, test, and deploy phases of the development process.

5. Infrastructure as code with Terraform

The infrastructure required to host the website on AWS is built using Terraform.

Before getting started, it is important to take care of the following aspects:

  1. Security: Terraform needs permission to deploy infrastructure on AWS. Therefore, a user is created who has access only to STS (Security Token Service) and nothing else, and a role is created with the IAM permissions to perform actions on the required AWS services; Terraform assumes this role to deploy (see the sketch after this list).
  2. Terraform state: a remote backend is configured using an S3 bucket to store the Terraform state file, and a DynamoDB table holds the state lock. This ensures that the infrastructure declared in the .tf files and the infrastructure actually deployed always stay in sync.
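For illustration, here is a boto3 sketch of that STS pattern; the role ARN is a placeholder, and Terraform's AWS provider achieves the same thing declaratively with an assume_role block.

```python
import boto3

# The deploy user's only permission is sts:AssumeRole; the role carries the
# actual IAM permissions. The role ARN below is a placeholder.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/terraform-deploy",
    RoleSessionName="terraform-plan",
)["Credentials"]

# A session built from the temporary credentials acts with the role's
# permissions until they expire.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("sts").get_caller_identity()["Arn"])
```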

The following infrastructure components are deployed using Terraform:

  1. A private S3 bucket that hosts the frontend code of the website.
  2. A DynamoDB table with on-demand capacity and a primary key ID. A default item is created to store the number of visitors (see the boto3 sketch after this list).
  3. A Lambda function with a Python runtime, whose code is retrieved from an S3 bucket.
  4. An API Gateway configured as a Lambda proxy, which forwards GET and POST requests to Lambda as JSON; the Lambda function responds in JSON as well.
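The default item in point 2 amounts to a single put_item call; here is a hedged boto3 equivalent of what Terraform seeds (the table and attribute names are assumptions):

```python
import boto3

# Table name, key name, and counter attribute are assumptions for illustration.
table = boto3.resource("dynamodb").Table("VisitorCount")

# Seed the single item that the Lambda function will later read and update.
table.put_item(Item={"ID": "visitors", "visitors": 0})
print(table.get_item(Key={"ID": "visitors"})["Item"])
```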

6. CI/CD for Terraform

The Terraform code must be pushed to dev. Creating a pull request triggers a GitHub Action that generates a plan and posts it as a comment, as shown below.

[Screenshot: Terraform plan triggered by creating a pull request]

[Screenshots: Terraform plan displayed as a comment]

Merging the pull request to master applies the plan and deploys the infrastructure as shown below.

[Screenshot: Terraform apply triggered by merging the pull request]

7. Implementing backend with Python

The backend code is written in Python and deployed to Lambda through CI/CD. It uses the Boto3 SDK to communicate with DynamoDB, and the Lambda function has an IAM role that permits actions on DynamoDB.

Dealing with CORS: the Lambda response contains the header Access-Control-Allow-Origin: * to allow cross-origin GET, POST, and OPTIONS requests.
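A minimal sketch of such a handler is below. The table name (VisitorCount), key (ID), and attribute (visitors) are assumptions, and I use an atomic update_item for the counter rather than a plain put so concurrent visitors don't race; the actual code may differ.

```python
import json

import boto3

# Table, key, and attribute names are assumptions for illustration.
table = boto3.resource("dynamodb").Table("VisitorCount")

CORS_HEADERS = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET,POST,OPTIONS",
}

def handler(event, context):
    """Lambda proxy handler: POST increments the counter, GET reads it."""
    if event.get("httpMethod") == "POST":
        # Atomic counter update; returns only the attributes that changed.
        item = table.update_item(
            Key={"ID": "visitors"},
            UpdateExpression="ADD visitors :one",
            ExpressionAttributeValues={":one": 1},
            ReturnValues="UPDATED_NEW",
        )["Attributes"]
    else:  # GET (and OPTIONS preflight) just reads the current count
        item = table.get_item(Key={"ID": "visitors"})["Item"]
    return {
        "statusCode": 200,
        "headers": CORS_HEADERS,  # allows cross-origin requests from the site
        "body": json.dumps({"visitors": int(item["visitors"])}),
    }
```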

8. CI/CD for backend

A GitHub Action is configured to do the following when code is pushed to /backend on the master branch (see the boto3 sketch after this list):

  1. Run tests on the Python code to catch bugs.
  2. Zip the Python code.
  3. Upload the zip file to an S3 bucket.
  4. Update the Lambda function code by pointing it at the zip in the S3 bucket.
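Conceptually, steps 2-4 reduce to a few AWS calls. Here is a boto3 sketch of what the workflow does; the bucket, key, and function names are placeholders, and the real action may use the AWS CLI instead.

```python
import shutil

import boto3

# Placeholder names for illustration only.
BUCKET, KEY, FUNCTION = "my-lambda-artifacts", "backend.zip", "visitor-counter"

# Step 2: zip the contents of the backend/ directory into backend.zip.
shutil.make_archive("backend", "zip", "backend/")

# Step 3: upload the archive to the artifacts bucket.
boto3.client("s3").upload_file("backend.zip", BUCKET, KEY)

# Step 4: point the Lambda function at the freshly uploaded archive.
boto3.client("lambda").update_function_code(
    FunctionName=FUNCTION, S3Bucket=BUCKET, S3Key=KEY
)
```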

The workflow run is shown below:

[Screenshot: CI/CD workflow run for the backend]

9. Building the frontend

The frontend is built using HTML, Bootstrap, and JavaScript. The code resides in a private S3 bucket that acts as the origin for CloudFront.

10. CI/CD for frontend

A GitHub Action is configured to do the following when code is pushed to /frontend on the master branch:

  1. Upload the code to the S3 bucket.
  2. Invalidate the CloudFront cache so the latest contents are fetched from the bucket (see the sketch after this list).
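Both steps map onto single AWS calls. A boto3 sketch of what the workflow does is below; the bucket name and distribution ID are placeholders, and the real action likely uses aws s3 sync to copy the whole site.

```python
import time

import boto3

# Placeholder identifiers for illustration only.
BUCKET, DISTRIBUTION_ID = "my-resume-site", "E123EXAMPLE"

# Step 1: upload the site to the bucket (one file shown for brevity).
boto3.client("s3").upload_file(
    "index.html", BUCKET, "index.html", ExtraArgs={"ContentType": "text/html"}
)

# Step 2: invalidate cached paths so CloudFront re-fetches from the origin.
boto3.client("cloudfront").create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
```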

The workflow run is shown below:

[Screenshot: CI/CD workflow run for the frontend]

11. Conclusion

Completing this challenge has been an amazing experience. I have learnt a ton, from Terraform to CI/CD to AWS. It is definitely worth it for anyone serious about a career in the cloud. Thanks to @forrestbrazeal for this amazing challenge. As for me, I'm on to the next one. Stay tuned for another blog post on a serverless project to automate an ETL pipeline on AWS using Python.

Top comments (2)

Lou (πŸš€ Open Up The Cloud ☁️):

Ah, this is awesome Thakur! It's also a really nice write-up, with lots of detail. That's also a really nice architecture diagram you put together, too.

How did you find setting up Terraform for the challenge? It can be a bit fiddly to use TF for lambda I think. Any reason you didn't go with AWS SAM? Also, how long did it take you in the end?

Great job nonetheless, congrats!! πŸ₯³

Thakur Rishabh Singh:

Hi Lou!
Thanks for the appreciation. :)
1) Setting up Terraform was messy at first. It required me to think through many things, such as STS auth for plan and deploy, a remote backend for the Terraform state, problems with their docs, etc.
2) Lambda was actually easy. The API Gateway was challenging, as the docs are incorrect for an API Gateway configured as a proxy.
3) SAM is IaC for serverless only, but my aim was to use a generic approach. Terraform works across multiple clouds, so I thought I would learn a lot from it.
4) It took me 30 hours to finish, averaging 6 hours a week for 5 weeks.