Ebelechukwu Lucy Okafor
How I Deployed a Three-Tier Book Review App on AWS Using Terraform Modules and Agentic AI

A complete walkthrough from VPC design to a live Next.js + Node.js + MySQL application running on EC2 and RDS, with every error documented and fixed.

When I started this assignment, I had already deployed applications on both Azure and AWS in previous weeks. But this was different. This time, there was no step-by-step guide. The assignment said: "Design and execute this project like a professional."

So that is exactly what I did. I used Terraform modules to organise my infrastructure, Claude Code as my Agentic AI copilot to generate templates and fix errors, and deployed a real full-stack Book Review application, Next.js frontend, Node.js backend, and MySQL database on Amazon RDS.
"Agentic DevOps is not about replacing engineers with AI. It is about changing what engineers spend their time on. The agent executes. The engineer decides."
This post documents everything: the architecture decisions, every command, every error I hit, and every fix that worked. If you follow this guide, you can deploy the same application with zero surprises.
**What We Are Building**

A three-tier web application following production architecture patterns:

Internet Users
      │
      ▼ port 80
EC2 Ubuntu 22.04  ← Public Subnet 10.0.1.0/24
 ├── Nginx (port 80) → proxies to Next.js (port 3000)
 ├── Next.js 15 Frontend  ← PM2 managed, port 3000
 └── Node.js Backend      ← PM2 managed, port 3001
      │
      ▼ port 3306 (MySQL)
RDS MySQL 8.0  ← Private Subnets — no internet access
 ├── bookreview-db (primary)
 └── Security: only EC2 can connect on port 3306

Terraform Modules:
  networking/ → VPC + 3 subnets + IGW + NAT
  security/   → EC2 SG + RDS SG
  compute/    → EC2 t2.micro + Elastic IP
  database/   → RDS MySQL db.t3.micro

**Prerequisites**

Install these on your local machine before starting:

Check all tools are ready

terraform -v # Need >= 1.5
aws sts get-caller-identity # Your AWS account
git --version # Any recent version

Terraform >= 1.5 — terraform.io/downloads
AWS CLI configured with aws configure
AWS account with EC2, RDS, and VPC permissions
Git and VS Code
SSH client (built into Mac/Linux; Git Bash on Windows)

**Why Terraform Modules?**

**Key Concept: Terraform Modules**

Modules are reusable, self-contained packages of Terraform code. Instead of one giant main.tf with 300+ lines, modules separate concerns: each module has its own inputs (variables.tf), resources (main.tf), and outputs (outputs.tf), and the root main.tf calls all modules and wires their outputs together.
For this deployment, I created four modules:
modules/networking/ → VPC (10.0.0.0/16), public subnet, 2 private subnets, Internet Gateway, NAT Gateway, route tables
modules/security/ → EC2 security group (SSH + HTTP + port 3000) and RDS security group (MySQL from the EC2 SG only)
modules/compute/ → EC2 t2.micro Ubuntu 22.04, Elastic IP, key pair attachment, user_data for setup
modules/database/ → RDS MySQL 8.0 db.t3.micro in private subnets, DB subnet group, password config

Each module contains its own variables.tf, main.tf, and outputs.tf.
The key insight is how modules talk to each other through outputs. The networking module outputs vpc_id. The security module takes that as input. The compute module takes the security group ID from security. No module needs to know the internal details of another, only their outputs.
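To make this concrete, here is a minimal sketch of both ends of one connection: the output the networking module publishes, and the variable the security module declares to receive it (the descriptions are illustrative; the exact bodies in the repo may differ).

```hcl
# modules/networking/outputs.tf — what this module exposes
output "vpc_id" {
  description = "ID of the VPC, consumed by the security module"
  value       = aws_vpc.main.id
}

# modules/security/variables.tf — what this module expects as input
variable "vpc_id" {
  description = "VPC in which to create the security groups"
  type        = string
}
```

The root main.tf is the only place these two meet: it passes `module.networking.vpc_id` into the security module, so neither module ever imports the other directly.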
**The Deployment, Phase by Phase**

**PHASE 01: Project Setup and SSH Key Generation**

Create the project folder, module structure, and SSH key pair. The key pair must exist before running terraform validate. Terraform reads the public key file during validation.

Create project and module folders

mkdir terraform-book-review-CLI
cd terraform-book-review-CLI
mkdir -p modules/networking modules/security
mkdir -p modules/compute modules/database user_data

Generate SSH key pair (MUST do this before terraform validate)

ssh-keygen -t rsa -b 4096 -f bookreview-key -N ""

Creates: bookreview-key (private) + bookreview-key.pub (public)

Protect sensitive files from git

echo "bookreview-key" >> .gitignore
echo "*.tfstate" >> .gitignore
echo ".terraform/" >> .gitignore
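Before the first commit, it is worth confirming the patterns actually match. A quick local sanity check with `git check-ignore` (the throwaway directory here is illustrative):

```shell
# Create a throwaway repo and confirm the ignore patterns match
mkdir -p /tmp/gitignore-demo && cd /tmp/gitignore-demo
git init -q
printf 'bookreview-key\n*.tfstate\n.terraform/\n' > .gitignore
# check-ignore echoes every path the patterns cover (exit 0 = ignored)
git check-ignore bookreview-key terraform.tfstate
```

If either path is missing from the output, the pattern is wrong and the secret would end up in git history.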

**PHASE 02: Write All Terraform Module Files**

This is where I used Claude Code as an Agentic AI copilot. I used /init to create CLAUDE.md, then asked Claude Code to generate the Terraform templates. When validation errors appeared, I pasted them into Claude Code, and it read all module files simultaneously, diagnosed the root cause, and applied fixes automatically. What would take 30 minutes manually took 7 minutes with Agentic AI.
Here is the root main.tf that calls all modules. Notice how module outputs become inputs to the next module:
terraform {
  required_providers {
    aws = { source = "hashicorp/aws", version = "~> 5.0" }
  }
}

provider "aws" { region = var.aws_region }

# Upload SSH public key to AWS
resource "aws_key_pair" "main" {
  key_name   = "bookreview-key"
  public_key = file("${path.module}/bookreview-key.pub")
}

# Step 1: Create network foundation
module "networking" {
  source   = "./modules/networking"
  vpc_cidr = var.vpc_cidr
  azs      = var.azs
}

# Step 2: Create security groups (needs VPC ID from networking)
module "security" {
  source = "./modules/security"
  vpc_id = module.networking.vpc_id # ← output from networking
}

# Step 3: Create EC2 (needs subnet + SG from above modules)
module "compute" {
  source    = "./modules/compute"
  subnet_id = module.networking.public_subnet_id
  sg_id     = module.security.ec2_sg
  key_name  = aws_key_pair.main.key_name
}

# Step 4: Create RDS (needs private subnets + RDS SG)
module "database" {
  source             = "./modules/database"
  private_subnet_ids = [module.networking.private_subnet_id_1,
                        module.networking.private_subnet_id_2]
  rds_sg_id          = module.security.rds_sg
  db_password        = var.db_password
}
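The root main.tf references var.aws_region, var.vpc_cidr, var.azs, and var.db_password, which live in a root variables.tf alongside it. A minimal sketch (the defaults shown are illustrative, not the exact values from my repo):

```hcl
variable "aws_region" {
  type    = string
  default = "us-east-1"
}

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "azs" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b"]
}

variable "db_password" {
  type      = string
  sensitive = true # keeps the value out of plan output
}
```

Marking db_password as sensitive keeps it out of plan output; the value itself can be supplied via the TF_VAR_db_password environment variable or a .tfvars file excluded from git.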

**PHASE 03: Terraform Init, Validate, Plan and Apply**

With all module files written, the standard Terraform workflow runs. RDS takes 5–8 minutes to provision. Note your EC2 public IP and RDS endpoint from the outputs; you need them for all the SSH and database commands that follow.
terraform init # Downloads AWS provider ~5.0
terraform validate # Checks all module references
terraform plan # Preview — confirm no destroy actions
terraform apply # Type 'yes' — wait ~10 minutes

After apply completes, note these outputs:

ec2_public_ip = "54.227.244.250"
rds_endpoint = "bookreview-db.cixq88qmex1t.us-east-1.rds.amazonaws.com:3306"
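These values come from a root outputs.tf that forwards the module outputs. A sketch, assuming the compute and database modules export outputs named public_ip and endpoint (the internal names are my assumption here):

```hcl
output "ec2_public_ip" {
  value = module.compute.public_ip
}

output "rds_endpoint" {
  value = module.database.endpoint
}
```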

**PHASE 04: SSH Into EC2 and Install Node.js 18**
Ubuntu 22.04 ships with Node.js v12. The app requires v18. This is the most common trap when deploying Next.js on a fresh Ubuntu server: you must install Node.js 18 via the NodeSource repository, not the default apt package.

From local terminal

chmod 400 bookreview-key
ssh -i bookreview-key ubuntu@54.227.244.250

Inside EC2 — remove old Node conflict and install v18

sudo apt remove -y libnode-dev
sudo apt autoremove -y && sudo apt clean
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt install -y nodejs nginx git

Verify

node -v # v18.20.8 ✅
npm -v # 10.8.2 ✅

**PHASE 05: Clone App and Configure Backend**

The repository is a monorepo with backend/ (Node.js + Express + Sequelize) and frontend/ (Next.js 15). The backend .env file points to localhost by default. Update it to point to the RDS endpoint. Sequelize auto-creates all database tables on first startup.
git clone https://github.com/pravinmishraaws/book-review-app ~/book-review-app
cd ~/book-review-app/backend

Update .env with RDS credentials (use nano — NOT cat heredoc)

nano .env

Replace content with:

DB_HOST=bookreview-db.cixq88qmex1t.us-east-1.rds.amazonaws.com
DB_NAME=bookreviewdb
DB_USER=admin
DB_PASS=BookReview123
DB_DIALECT=mysql
PORT=3001
JWT_SECRET=mysecretkey

Ctrl+X then Y then Enter to save

Install PM2 globally and start backend

sudo npm install -g pm2
pm2 start src/server.js --name backend
pm2 save

Watch for this in logs:

Database 'bookreviewdb' connected successfully with SSL!
Database schema updated successfully!
Server running on port 3001

**Key Concept: Sequelize Auto-Sync**
Sequelize calls sync() on startup, which creates all database tables automatically if they don't exist. You don't need to run SQL migration files manually. When you see "Database schema updated successfully", all tables are ready.

**PHASE 06: Build and Deploy Frontend + Configure Nginx**

Next.js 15 requires a production build before starting. The build compiles all pages, optimises JavaScript, and pre-renders static pages. Nginx then acts as a reverse proxy; users hit port 80, and Nginx forwards to port 3000.
cd ~/book-review-app/frontend

Tell frontend where the backend API is

cat > .env.local << 'EOF'
NEXT_PUBLIC_API_URL=http://54.227.244.250:3001
EOF

npm install # Installs 327 packages
npm run build # Compiles production bundle

Expected: ✓ Compiled successfully — 7 pages

pm2 start npm --name frontend -- start
pm2 save
pm2 list # backend + frontend both online ✅

Configure Nginx reverse proxy

sudo tee /etc/nginx/sites-available/bookreview << 'EOF'
server {
listen 80;
server_name _;
location /api {
proxy_pass http://localhost:3001;
proxy_set_header Host $host;
}
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
}
}
EOF

sudo ln -sf /etc/nginx/sites-available/bookreview /etc/nginx/sites-enabled/default
sudo nginx -t && sudo systemctl restart nginx

Make PM2 survive reboots

pm2 startup # Run the command it outputs
pm2 save

**PHASE 07: Verify — App Is Live!**

Open your browser and navigate to your EC2 public IP. The Book Review App homepage loads. Login and register pages work. The backend is connected to RDS MySQL. The full stack is live.
http://54.227.244.250:3000 → Book Review App homepage ✅
http://54.227.244.250:3000/login → Login page with form ✅
http://54.227.244.250:3000/register → Register page ✅
http://54.227.244.250 → Same app via Nginx port 80 ✅

**Every Error I Hit and How I Fixed It**

These are all the errors I encountered during the deployment. If you follow this guide, you will hit the same ones; these fixes resolve them completely.

| # | Error | Root Cause | Fix | Status |
|---|-------|------------|-----|--------|
| 1 | Duplicate resource aws_db_subnet_group in variables.tf | Resource block accidentally placed in variables.tf instead of main.tf | Removed the resource block from variables.tf; kept only variable declarations | FIXED |
| 2 | Reference to undeclared resource aws_subnet.private_1 | Subnet resource was named private, but outputs.tf referenced it as private_1 | Renamed the resource to private_1 and added private_2 in a second AZ | FIXED |
| 3 | Unsupported attribute vpc_id on module.networking | networking/outputs.tf did not export vpc_id; the module created the VPC but never exported its ID | Added output "vpc_id" { value = aws_vpc.main.id } to networking/outputs.tf | FIXED |
| 4 | CIDR block 10.0.2.0/24 conflicts with existing subnet | A partial apply had already created a subnet; re-applying tried to create the same CIDR twice | Changed the private_1 subnet CIDR to 10.0.4.0/24 | FIXED |
| 5 | npm ERR! Missing script: "start" | backend/package.json had no start script, so npm start had nothing to run | Added "start": "node src/server.js" to scripts in package.json | FIXED |
| 6 | dpkg error overwriting /usr/include/node/common.gypi | The old libnode-dev package conflicted with the Node.js 18 installer | sudo apt remove -y libnode-dev, then re-ran the NodeSource installer | FIXED |
| 7 | Shell stuck at > prompt when using a cat heredoc | Leading spaces before the EOF marker prevented the heredoc from closing | Used nano instead of a cat heredoc for all .env file creation | FIXED |
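Error 7 is worth a runnable demonstration: a heredoc only closes when the delimiter sits at the very start of a line, so an indented EOF never matches and the shell keeps waiting at the > prompt. The file names below are illustrative:

```shell
# Works: the closing EOF starts at column 0
cat > /tmp/demo.env << 'EOF'
DB_HOST=db.internal
PORT=3001
EOF

# The same result without a heredoc (this is effectively what
# writing the file in nano avoids having to get right):
printf 'DB_HOST=db.internal\nPORT=3001\n' > /tmp/demo2.env

# Both files are byte-identical
cmp /tmp/demo.env /tmp/demo2.env && echo "identical"
```

Note that `<<-EOF` tolerates leading tabs on the delimiter, but not spaces, which is exactly the trap I hit.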

**Key Concepts Explained**
**Three-Tier Architecture**
The web tier (EC2 with Nginx + Next.js) is publicly accessible. The app tier (Node.js backend) runs on the same EC2 but is only reachable through the web tier. The database tier (RDS) is in private subnets with no internet route; only the EC2 security group can connect on port 3306. Each tier has a different trust level and a different security boundary.
**Security Group Chaining**
The RDS security group allows MySQL (port 3306) only from the EC2 security group ID, not from any IP address. This means even with valid RDS credentials, you cannot connect from outside the VPC. Only the EC2 instance can reach the database. This is least-privilege networking applied at the AWS infrastructure level.
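In Terraform, this chaining comes down to a single attribute: the RDS ingress rule names the EC2 security group as its source instead of a CIDR block. A minimal sketch (the resource and variable names here are illustrative, not the exact ones from my security module):

```hcl
resource "aws_security_group" "rds" {
  name   = "bookreview-rds-sg"
  vpc_id = var.vpc_id

  ingress {
    from_port = 3306
    to_port   = 3306
    protocol  = "tcp"
    # Source is a security group ID, not a CIDR: only instances
    # carrying the EC2 SG can open connections to MySQL.
    security_groups = [var.ec2_sg_id]
  }
}
```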
**Agentic AI in DevOps — Claude Code**
I used Claude Code throughout this deployment. For Terraform errors, I would type the error message, and Claude Code would read all module files simultaneously, trace the dependency chain, find the root cause, and apply the fix. This is the future of DevOps: AI agents that can hold the entire codebase in context and diagnose cross-file issues instantly.
**PM2 — Production Process Manager**
If you start a Node.js app with node server.js and close your SSH session, the process dies. PM2 runs apps as background daemons. pm2 startup generates a system service that starts PM2 on boot. pm2 save persists the process list. pm2 list shows CPU, memory, and uptime for all managed processes.
**What I Learned — 4 Lessons That Changed How I Think**
Modules are professional infrastructure. Flat Terraform files work for small projects; modules scale. The separation of networking, security, and compute into independent modules means any engineer on the team can open one folder and understand exactly one thing. That clarity is worth the extra files.

Agentic AI changes the debugging loop entirely. With Claude Code reading all files simultaneously, what took 30 minutes of manual cross-referencing took 7 minutes. The AI does not replace the engineer; it removes the tedious parts so the engineer can focus on architecture decisions.

Read the files before trying to fix things. Almost every error I hit was answered by reading the relevant config file. The .env pointed to localhost. The package.json had no start script. The outputs.tf was missing vpc_id. The answer was always in the files; read them first.

Infrastructure and deployment are separate concerns. Terraform provisions where the app runs. npm install, npm run build, and pm2 deploy the app itself. Nginx makes it production-ready. Understanding which layer handles which concern prevents debugging in the wrong place.

**Final Thoughts**

This was the most complex assignment in the DevOps micro internship — and the most rewarding. I designed a real three-tier architecture, wrote production-grade Terraform modules, used Agentic AI as a genuine copilot, and got a full-stack application live on AWS.
The Book Review App is live. The infrastructure is reproducible. Every error is documented. That is what production DevOps looks like.
If you have questions, drop them in the comments. If this helped you, share it with someone learning DevOps.

#terraform #aws #devops #nextjs #nodejs #mysql #claudecode #agenticai #cloudengineering #womenintech #learninginpublic
