Luka Ma

Deploying Serverless Architecture: A Step-by-Step Experience

Cloud Resume Infrastructure (architecture diagram)

Introduction: Embracing the Cloud Resume Challenge

After moving to a DevOps position in my company, I wanted to learn more about cloud technologies and how to automate our infrastructure. While looking for a practical project that would give me real hands-on experience, I discovered the Cloud Resume Challenge by Forrest Brazeal. The challenge seemed like the perfect chance to explore AWS services in depth, learn about Infrastructure as Code (IaC) with tools like Terraform, and apply DevOps principles in a real project.

Part 0: Certification and Initial Setup

The challenge officially starts with an AWS certification, which I chose to pursue at the end of the project instead. For anyone preparing for AWS certifications, I recommend Stephane Maarek's courses on Udemy.

Before tackling the practical tasks, I set up the organization's root (management) account and used the AWS Organization Formation tool, which is designed to manage AWS Organizations with IaC. I structured the setup into separate development and production environments and implemented single sign-on (SSO) using aws-sso-util.

Infrastructure as Code for AWS Organizations

Part 1: Frontend Deployment

The first leg of the challenge involved creating a static web page and making it accessible online. I decided on Next.js to develop a simple resume page, which I then hosted on an S3 bucket configured for static website hosting. The domain, purchased via Cloudflare, was linked to a CloudFront distribution. Despite some initial configuration challenges, the site was soon live and operational.

Part 2: Backend - API for a Visitor Counter

The challenge's second phase required implementing a visitor counter. I created a Lambda function that stores a hash of each visitor's IP address for the day in a DynamoDB table, ensuring that only unique visits are counted. An API Gateway endpoint was set up to trigger the Lambda function on each site visit.

When a user visits the resume page hosted on the S3 bucket, API Gateway triggers the AWS Lambda function. The function is responsible for distinguishing unique visits, and it does this by hashing each visitor's IP address to preserve user privacy.

To make sure that only unique daily visits are recorded, the Lambda function checks against a DynamoDB table named UniqueVisitors. If the visitor's hashed IP address isn't found, or if it's been over a day since their last visit, the function then increments a count in a second DynamoDB table, VisitorCounter. This table keeps a record of the total unique visits.

If the same user comes back on the same day, the count doesn't change.
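To make that flow concrete, here is a minimal sketch of what such a handler could look like in Python with boto3. The table names UniqueVisitors and VisitorCounter come from this project; the attribute names, hashing details, and response shape are assumptions for illustration, not the actual code.

```python
# Hypothetical sketch of the visitor-counter Lambda (not the actual code).
# Table names come from the article; attribute names, the salt, and the
# response shape are assumptions for illustration.
import hashlib
import os
import time

import boto3

dynamodb = boto3.resource("dynamodb")
unique_visitors = dynamodb.Table("UniqueVisitors")
visitor_counter = dynamodb.Table("VisitorCounter")

ONE_DAY = 24 * 60 * 60


def handler(event, context):
    # API Gateway (REST proxy integration) passes the caller's IP here.
    ip = event["requestContext"]["identity"]["sourceIp"]

    # Hash the IP (with a salt) so no raw addresses are ever stored.
    hashed_ip = hashlib.sha256((os.environ.get("SALT", "") + ip).encode()).hexdigest()

    now = int(time.time())
    record = unique_visitors.get_item(Key={"hashed_ip": hashed_ip}).get("Item")

    # Count the visit only if this hash is new or its last visit was over a day ago.
    if record is None or now - int(record["last_visit"]) > ONE_DAY:
        unique_visitors.put_item(Item={"hashed_ip": hashed_ip, "last_visit": now})
        visitor_counter.update_item(
            Key={"id": "total"},
            UpdateExpression="ADD visit_count :one",
            ExpressionAttributeValues={":one": 1},
        )

    total = visitor_counter.get_item(Key={"id": "total"})["Item"]["visit_count"]
    return {"statusCode": 200, "body": str(total)}
```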

Lambda function execution (diagram)

Part 3: Automation with CI/CD

The initial part of the challenge, which involved a bit of coding and some setup through the AWS console, went smoothly. It was the next part that took up most of my time and interest: I chose Terraform as my Infrastructure as Code (IaC) tool, and that is where I faced most of the challenges.

Challenge 1

The question was: how do you proceed when everything is already set up "visually" in the AWS console and needs to be turned into code? Fortunately, Terraform offers a solution for this: you can import existing resources from AWS into your Terraform state. However, this requires you to first write some "skeleton" code for the resource or module you want to import and then adjust it until it matches the actual state in AWS.

Structure

For the structure of the Terraform project, I settled on the following approach: place all the modules in a modules directory and create two main configurations, one for production and one for development. Both utilise the same modules but are configured with different variables and credentials to create their respective infrastructure.

Terraform project

Challenge 2

For state management, I decided to use a remote state. Here's how it works:

In Terraform, a remote state allows teams to share their infrastructure's state in a secure and efficient manner. When using AWS as the backend, Terraform stores the state file in an S3 bucket, facilitating centralized management and versioning of the infrastructure state. This configuration enhances collaboration since any team member can access the latest state for updates or deployments.

To ensure consistency and prevent conflicts, Terraform employs a locking mechanism using DynamoDB. When a user executes Terraform commands that might modify the state, Terraform initiates a lock in a DynamoDB table. This lock stops others from making simultaneous changes, thus reducing the risk of state corruption. Once the operation concludes, Terraform releases the lock, allowing others to safely make their updates.

This was a problem for me. I wanted to deploy the entire infrastructure from scratch, but for this process to work, Terraform needs an S3 bucket to store the state, which becomes a catch-22 because nothing is deployed yet. One workaround is to manually create an S3 bucket and a DynamoDB table just once; these two resources are not tracked by Terraform, which only uses them to store the state and acquire locks.
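For illustration, a one-time bootstrap script like the following could create those two untracked resources. The bucket and table names (and the region) below are hypothetical, not the ones used in this project; in practice you could just as well create them in the console or with the AWS CLI.

```python
# One-time bootstrap for the Terraform backend (hypothetical names).
# Creates the S3 state bucket and the DynamoDB lock table that Terraform
# itself will never manage.
import boto3

REGION = "eu-central-1"                      # assumption: pick your own region
STATE_BUCKET = "my-tf-state-bucket-example"  # hypothetical name
LOCK_TABLE = "terraform-locks"               # hypothetical name

s3 = boto3.client("s3", region_name=REGION)
dynamodb = boto3.client("dynamodb", region_name=REGION)

# S3 bucket for the state file, with versioning so old states can be recovered.
s3.create_bucket(
    Bucket=STATE_BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
s3.put_bucket_versioning(
    Bucket=STATE_BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# DynamoDB table used only for state locking; Terraform expects the
# partition key to be a string named "LockID".
dynamodb.create_table(
    TableName=LOCK_TABLE,
    AttributeDefinitions=[{"AttributeName": "LockID", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "LockID", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```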

Challenge 3

Regarding variable management, Terraform can pick up values in several ways: *.tfvars files, environment variables, command-line flags, and defaults defined directly in *.tf files. For security and flexibility, I chose *.tfvars files, which are copied into the repository dynamically during pipeline executions. This way, sensitive or environment-specific values are not hardcoded into files under version control. If a terraform.tfvars (or *.auto.tfvars) file is present in the root module directory, Terraform reads the variables from it automatically.

State Copy to Project

Although copying the *.tfvars file into the repo during pipeline execution is straightforward, it has a drawback: every time I add a new variable, I need to manually update the file in the S3 bucket (the same bucket that stores the state file). However, this shouldn't be an issue if the infrastructure doesn't change often.
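As a sketch of that copy step: the real pipeline likely uses something like `aws s3 cp`, but the boto3 equivalent below shows the idea. The bucket name and object keys here are hypothetical.

```python
# Sketch of the pipeline step that pulls the environment-specific *.tfvars
# file into the working copy before running Terraform. Bucket and key names
# are hypothetical; a real pipeline may simply run `aws s3 cp` instead.
import boto3

STATE_BUCKET = "my-tf-state-bucket-example"  # hypothetical, same bucket that holds the state
ENVIRONMENT = "production"                   # or "development"

s3 = boto3.client("s3")
s3.download_file(
    Bucket=STATE_BUCKET,
    Key=f"tfvars/{ENVIRONMENT}.tfvars",
    Filename="terraform.tfvars",  # dropped into the root module so Terraform auto-loads it
)
```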

Pipeline Strategies

Two main repositories were created: one for the frontend and one for the Terraform infrastructure. The backend (API) infrastructure is deployed through the pipelines of these repositories.

For the frontend, whenever code is pushed or merged into the main branch, GitHub Actions are triggered. These actions build the application and run tests. If all checks pass, the S3 bucket is updated with the latest build.
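That last step is usually a single `aws s3 sync` command in the workflow; purely as an illustration of what it does, here is a boto3 sketch that uploads a static build directory to the website bucket. The directory and bucket names are assumptions, not the actual setup.

```python
# Illustration of the deploy step: upload the static build output to the
# website bucket. Directory and bucket names are assumptions; a real
# GitHub Actions workflow typically uses `aws s3 sync` instead.
import mimetypes
from pathlib import Path

import boto3

BUILD_DIR = Path("out")                  # Next.js static export directory (assumed)
SITE_BUCKET = "my-resume-site-bucket"    # hypothetical bucket name

s3 = boto3.client("s3")

for path in BUILD_DIR.rglob("*"):
    if path.is_file():
        key = path.relative_to(BUILD_DIR).as_posix()
        content_type, _ = mimetypes.guess_type(path.name)
        s3.upload_file(
            Filename=str(path),
            Bucket=SITE_BUCKET,
            Key=key,
            ExtraArgs={"ContentType": content_type or "binary/octet-stream"},
        )
```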

For the AWS infrastructure, the goal was to leverage AWS Organizations to manage two separate environments: before anything is merged into the main branch, the exact same environment is recreated in development, tests are run against it, and only if everything passes does the production update proceed. So when a Pull Request (PR) is created, GitHub Actions create the development environment, run the tests, and, if they succeed, update the production environment. Once the PR is merged, a final job destroys the development environment so that resources are not wasted.

Terraform deployment

Conclusion: Lessons Learned and Resources for Further Learning

Completing the Cloud Resume Challenge was very fulfilling; it offered deep insights into AWS and Terraform and expanded my expertise in cloud computing and infrastructure as code. For those interested in diving deeper into Terraform, Yevgeniy Brikman's blog series on Gruntwork (A Comprehensive Guide to Terraform) and his book are excellent resources.
If you want to check out the final product, here it is: Cloud Resume. P.S. I am not a designer 🙃

Top comments (1)

Josip Vojak

This seems like a great summary of what a cloud DevOps engineer can expect to do if the company adopts modern practices. After doing everything from scratch and looking back at what you've done, is there anything you would change or improve in the process?