Building a Three-Tier Bookstore App on AWS from Scratch: Infrastructure, Deployment, and Every Debug Along the Way

By Vivian Chiamaka Okose
Tags: #aws #terraform #rds #nodejs #devops #threetier #mysql #nginx #cloud #iac


I recently completed a project that I would describe as the most satisfying and the most frustrating thing I have built so far in my DevOps journey -- in equal measure.

The goal was to deploy The EpicBook, a full-stack bookstore application, on AWS using Terraform. Not just spin up a VM and call it done. A proper three-tier architecture: network layer, compute layer, and a managed database layer, all defined as infrastructure as code, all connected through deliberate security boundaries.

This is the story of how I built it, what broke along the way, and what I now understand about cloud architecture that I did not before.


The Architecture

Before writing a single line of code, I mapped out what needed to exist:

Network tier: A custom VPC with a public subnet for the EC2 instance and a private subnet for the RDS database. Internet Gateway, Route Table, and Route Table Association for public internet access. Two security groups -- one for EC2 and one for RDS -- with a deliberate trust boundary between them.

Compute tier: An EC2 t3.micro instance running Ubuntu 22.04 in the public subnet. Node.js 18 via nvm, Nginx as a reverse proxy, and the EpicBook Node.js application running on port 8080.

Database tier: Amazon RDS MySQL 8.0 in the private subnet. Not publicly accessible. Reachable only from the EC2 security group on port 3306.

Eleven Terraform resources total. One configuration, three files.
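As a rough sketch, the network tier of that list looks like this in Terraform (the CIDR blocks and subnet names are illustrative assumptions, not the project's exact values; only aws_vpc.epicbook_vpc matches a name used later in this post):

```hcl
# Network tier sketch -- CIDRs and subnet names are assumed, not the real config
resource "aws_vpc" "epicbook_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.epicbook_vpc.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true # the EC2 instance lands here
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.epicbook_vpc.id
  cidr_block = "10.0.2.0/24" # RDS lands here, no route to the internet
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.epicbook_vpc.id
}
```

The route table and route table association then attach the internet gateway to the public subnet only, which is what keeps the private subnet unreachable from outside.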


Why Three Files Instead of One?

For the first three projects in this series, I kept everything in a single main.tf. That works fine for small deployments. For this project, I split the configuration into three files:

  • variables.tf for all configurable values (region, CIDR blocks, instance types, credentials)
  • main.tf for all resource definitions
  • outputs.tf for what gets printed after deployment

The reason this matters: variables.tf eliminates hardcoded values scattered across resources. If I need to change the AWS region or the database password, I change one line in variables.tf and it propagates everywhere automatically. The sensitive = true flag on the database password means Terraform redacts it in plan and apply output (it still sits in plaintext in the state file, so the state itself needs protecting too):

variable "db_password" {
  description = "Master password for RDS"
  type        = string
  default     = "EpicBook123!"
  sensitive   = true
}

This is not just organisation. It is the foundation of maintainable infrastructure code. One caveat: a default value committed in variables.tf still puts the password into version control; in a real deployment you would drop the default and supply it through a terraform.tfvars file or the TF_VAR_db_password environment variable.
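To make the propagation concrete, the RDS resource consumes the variable by reference. The resource name and most arguments below are a sketch; only var.db_password comes from the file above:

```hcl
resource "aws_db_instance" "epicbook_rds" {
  identifier          = "epicbook-rds"
  engine              = "mysql"
  engine_version      = "8.0"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  db_name             = "epicbook"
  username            = "admin"
  password            = var.db_password # one edit in variables.tf updates this everywhere
  publicly_accessible = false
  skip_final_snapshot = true
}
```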


The Security Design That Matters Most

This is the part of the project I am most proud of from an engineering perspective.

The RDS security group does not allow access from a CIDR block. It allows access specifically from the EC2 security group:

resource "aws_security_group" "rds_sg" {
  name        = "epicbook-rds-sg"
  description = "Allow MySQL from EC2 only"
  vpc_id      = aws_vpc.epicbook_vpc.id

  ingress {
    description     = "MySQL from EC2"
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.ec2_sg.id]
  }
}

The difference is significant. A CIDR-based rule allows any resource in that IP range to reach the database. A security group reference allows only resources that belong to that specific security group. Even if another instance appeared in the same subnet, with an IP from the same range, it could not reach the database unless it was explicitly attached to the EC2 security group.

This is zero-trust networking applied at the infrastructure layer. The database is unreachable from the internet, unreachable from other VPC resources, and reachable only from the specific compute layer we defined.
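For contrast, the weaker CIDR-based version of that ingress rule would look like this (the subnet range shown is an assumed example):

```hcl
# Anti-pattern: grants access by network location, not identity --
# anything that gets an IP in this range can open port 3306
ingress {
  description = "MySQL from the public subnet CIDR"
  from_port   = 3306
  to_port     = 3306
  protocol    = "tcp"
  cidr_blocks = ["10.0.1.0/24"]
}
```

Same port, same protocol; the only difference is what the rule trusts.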


Deployment: What Went Smoothly

The Terraform apply ran cleanly:

Apply complete! Resources: 11 added, 0 changed, 0 destroyed.

Outputs:
ec2_public_ip  = "16.28.105.178"
rds_endpoint   = "epicbook-rds.c9yasg0i2xal.af-south-1.rds.amazonaws.com:3306"
rds_port       = 3306

The RDS instance took about 5 minutes to provision, which is normal. AWS is provisioning a managed database with storage, parameter groups, and subnet associations. That takes time.

The RDS connection test from EC2 worked immediately:

mysql -h epicbook-rds.c9yasg0i2xal.af-south-1.rds.amazonaws.com \
  -u admin -pEpicBook123! -P 3306 epicbook

Welcome to the MySQL monitor.
Server version: 8.0.44
mysql>

That connection working on the first try confirmed the security group design was correct.


Deployment: What Did Not Go Smoothly

Here is where the real learning happened.

Problem 1: The apt Lock

Ubuntu's default apt repository ships Node.js v12, which is too old for the EpicBook application. When I tried to upgrade via NodeSource, the setup script ran apt install internally, which conflicted with a background update process (on Ubuntu, typically unattended-upgrades) that was holding the package manager's lock:

E: Could not get lock /var/lib/dpkg/lock-frontend.
It is held by process 24847 (apt)

The fix was to bypass apt entirely using nvm, the Node Version Manager:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
nvm install 18
nvm use 18
node -v  # v18.20.8

nvm downloads Node.js directly from the official distribution and installs it in the user's home directory. No package manager. No lock conflicts. No sudo required. This is the correct approach for application servers where you need a specific Node.js version and cannot wait for the package manager.

Problem 2: Hardcoded Database Names in SQL Files

The EpicBook repository includes SQL files for creating tables and seeding data. Those files were written for a database named bookstore, but the RDS database I provisioned is named epicbook. Running the files directly produced errors:

ERROR 1049 (42000): Unknown database 'bookstore'

The fix was sed, a command-line tool for stream editing text:

sed 's/bookstore/epicbook/g' db/BuyTheBook_Schema.sql > db/epicbook_schema.sql
sed 's/bookstore/epicbook/g' db/author_seed.sql > db/epicbook_author_seed.sql
sed 's/bookstore/epicbook/g' db/books_seed.sql > db/epicbook_books_seed.sql

This replaced every occurrence of bookstore with epicbook across all three files in seconds. The result: 53 authors and 54 books loaded cleanly into RDS.
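The substitution is easy to dry-run on a throwaway file before touching the real SQL. This sketch uses a made-up two-line file rather than the repo's actual schema:

```shell
# Stand-in SQL file that references the old database name twice
printf 'USE bookstore;\nINSERT INTO bookstore.books VALUES (1);\n' > /tmp/demo.sql

# Same substitution used on the EpicBook files
sed 's/bookstore/epicbook/g' /tmp/demo.sql > /tmp/demo_epicbook.sql

# Count lines that now carry the new name
grep -c 'epicbook' /tmp/demo_epicbook.sql  # prints 2
```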

Problem 3: Application Configuration

The config/config.json file shipped with the repository pointed to 127.0.0.1 (localhost). Running the app without updating this would have produced a connection refused error because there is no local MySQL server -- the database is RDS.

The updated configuration:

{
  "development": {
    "username": "admin",
    "password": "EpicBook123!",
    "database": "epicbook",
    "host": "epicbook-rds.c9yasg0i2xal.af-south-1.rds.amazonaws.com",
    "dialect": "mysql",
    "port": 3306
  }
}

Always verify application configuration against infrastructure outputs before starting a service. The outputs.tf file exists precisely for this reason.
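One way to make that verification mechanical is to derive the host from Terraform's own output instead of copying it by hand. terraform output -raw rds_endpoint prints host:port, while config.json wants only the host; a stand-in string replaces the real Terraform call here so the sketch runs anywhere:

```shell
# In the real workflow: endpoint=$(terraform output -raw rds_endpoint)
endpoint="epicbook-rds.c9yasg0i2xal.af-south-1.rds.amazonaws.com:3306"

# Strip the :port suffix -- the "host" field in config.json takes the hostname alone
host="${endpoint%:*}"
echo "$host"  # prints the endpoint without :3306
```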


The Result

going to html route
App listening on PORT 8080

Visiting http://16.28.105.178 loaded The EpicBook homepage with the full book catalogue from RDS. The cart accepted items, calculated totals, and processed orders end to end.

The Nginx reverse proxy configuration that made this work:

server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Port 80 receives public traffic. Nginx forwards it to port 8080 where Node.js is running. The application never needs to run as root or bind to a privileged port.
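Running node in a foreground SSH session works for a demo, but a process supervisor keeps it alive across crashes and reboots. A minimal systemd unit is one option; every path, filename, and user below is an assumption about this particular server, with the node path matching the v18.20.8 install from nvm:

```ini
# /etc/systemd/system/epicbook.service -- paths and names are assumptions for this sketch
[Unit]
Description=The EpicBook Node.js app
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/theepicbook
ExecStart=/home/ubuntu/.nvm/versions/node/v18.20.8/bin/node app.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

sudo systemctl enable --now epicbook then starts the app and restarts it on boot.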


What I Now Understand Differently

Security groups as identity, not location. Referencing a security group in an ingress rule is fundamentally different from specifying a CIDR block. It grants access based on what a resource is, not where it is. This is more secure and more maintainable.

nvm over apt for Node.js. On production servers, package manager locks, version staleness, and permission issues make apt a poor choice for language runtime management. nvm, pyenv, rbenv -- the language-specific version managers exist for good reasons.

sed for operational configuration fixes. When application configuration files have hardcoded values that do not match your infrastructure, sed is the fast, scriptable, auditable fix. It is a tool every DevOps engineer should be comfortable with.

outputs.tf is a first-class deliverable. The RDS endpoint, EC2 IP, and port values printed after apply are not just convenient -- they are the interface between your infrastructure layer and your application configuration layer. Design them deliberately.
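A deliberately designed output names the live resource attribute and documents what consumes it, so the printed value can never drift from what was actually provisioned (the resource names below are assumptions):

```hcl
output "rds_endpoint" {
  description = "MySQL host:port -- goes into config/config.json"
  value       = aws_db_instance.epicbook_rds.endpoint
}

output "ec2_public_ip" {
  description = "Public IP for browsing the app and for SSH"
  value       = aws_instance.epicbook_ec2.public_ip
}
```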


What Is Next

The final project in this series is the production-grade version: a six-subnet three-tier architecture across two availability zones, public and internal load balancers, RDS with multi-AZ and read replicas. Everything from this project applies -- the security group design, the variable structure, the application deployment patterns -- just at production scale.


I build and document cloud infrastructure projects in public. Follow along for Terraform, AWS, Azure, and real-world DevOps from someone who started in biochemistry and is building toward cloud engineering one deployment at a time.

GitHub: vivianokose
