DEV Community

Naomi Ansah
Deploying a Web Server on AWS Using Terraform (Beginner-Friendly Guide)

As I started preparing for cloud support roles, I decided to learn Terraform, and that's where things got interesting. I thought deploying a web server would be straightforward, but once I actually started, I realized I didn't fully understand what I was doing. Things kept breaking: my SSH key wasn't working, and my user data script failed because of a small file name issue.
At some point, I was just staring at errors wondering what I had missed. Instead of jumping around looking for quick fixes, I slowed down and decided to understand each part properly. This project became my way of learning Terraform, not just using it.

I wasn't trying to build anything complex. I just wanted to launch a simple EC2 instance and have a web server running on it. The goal was simple:

- Install Nginx automatically
- Open port 80 so I can access the server from my browser
- Restrict SSH (port 22) to only my IP

Architecture of the web server I deployed using Terraform (EC2 + Nginx)

I opened VS Code and created a new folder for the project in the terminal. Then I created my Terraform files, starting with main.tf. Instead of copying code blindly, I went to the Terraform Registry to look for the correct resource blocks and understand how they work. That was when I realized something: Terraform documentation is powerful, but only if you understand how to read it.

Project structure in VS Code showing Terraform configuration files

This was where I got stuck first. I kept seeing "resource" and "data source" in the documentation, but I didn't fully understand the difference. After working through it, here's how I now understand it: a resource is used to create something, like an EC2 instance. A data source is used to fetch something that already exists, like the latest Amazon Linux AMI. That small difference made everything easier.
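A minimal sketch of that difference (the block labels `web` and `latest` are just illustrative names, and the AMI ID is a placeholder):

```hcl
# A resource CREATES infrastructure — here, a new EC2 instance
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder ID
  instance_type = "t2.micro"
}

# A data source only READS something that already exists — here, an AMI lookup
data "aws_ami" "latest" {
  most_recent = true
  owners      = ["amazon"]
}
```

Running `terraform apply` will create the resource, but the data source never creates anything; it just fetches information for other blocks to reference.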

I started writing my configuration by defining my AWS provider and making sure the region matched what I was using in AWS. Then I used a data source to fetch the latest Amazon Linux 2023 AMI instead of hardcoding it. This helped me avoid issues related to region mismatches and outdated AMIs.

Using a Terraform data source to fetch the latest Amazon Linux 2023 AMI
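The provider and AMI lookup described above might look like this (the region and the name filter pattern are assumptions; adjust them to your setup):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1" # assumption: use the region you actually work in
}

# Fetch the latest Amazon Linux 2023 AMI instead of hardcoding an ID
data "aws_ami" "al2023" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-2023.*-x86_64"]
  }
}
```

Because the lookup runs at plan time, the same configuration resolves to the correct AMI in whichever region the provider points at.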

The SSH key pair confused me more than I expected. Terraform uses the public key, while SSH uses the private key. At first, I mixed them up and, of course, nothing worked. Once I understood the flow, it made sense:

- Public key → used by Terraform
- Private key → used when connecting via SSH

SSH key pair generated locally — public key for Terraform, private key for SSH access
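A sketch of that flow (the key name and file path are assumptions; a pair generated locally with `ssh-keygen` works):

```hcl
# Generated locally first, e.g.: ssh-keygen -t rsa -b 4096 -f ~/.ssh/terraform-ec2
# Terraform only ever sees the PUBLIC half of the pair.
resource "aws_key_pair" "deployer" {
  key_name   = "terraform-ec2"                  # assumption: any unique name works
  public_key = file("~/.ssh/terraform-ec2.pub") # public key → Terraform
}
```

The private half (`~/.ssh/terraform-ec2`) never appears in the configuration; it is only used later, e.g. `ssh -i ~/.ssh/terraform-ec2 ec2-user@<public-ip>`.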

I configured the security group to:

- Allow HTTP traffic on port 80
- Restrict SSH (port 22) to only my IP

This part helped me understand that security is not just about opening ports, but about controlling access properly.

Security group configuration allowing HTTP (port 80) and restricting SSH (port 22) to my IP
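A security group along those lines might be sketched like this (the group name is illustrative, and the SSH CIDR is a placeholder you would replace with your own IP as a /32):

```hcl
resource "aws_security_group" "web_sg" {
  name        = "web-server-sg"
  description = "Allow HTTP from anywhere, SSH only from my IP"

  # HTTP open to the world so the page is reachable from any browser
  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # SSH locked down to a single address
  ingress {
    description = "SSH from my IP only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.10/32"] # placeholder: replace with your IP/32
  }

  # Allow all outbound traffic (needed for the instance to download packages)
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

The /32 suffix is what makes the SSH rule match exactly one address rather than a whole range.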

After putting everything together, I ran:

```shell
terraform plan
terraform apply
```

and this was the moment of truth.

Terraform successfully provisioning infrastructure (EC2 instance, key pair, and security group)

Instead of installing Nginx manually, I used a user data script. This script runs automatically when the instance starts and installs Nginx, starts it, and serves a simple web page.

User data script used to install and start Nginx automatically on instance launch
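Pulling the pieces together, the instance with its user data might be sketched like this (block and resource names are assumptions carried over from the earlier steps; the script is shown inline as a heredoc, though loading it from a separate file with `file("user_data.sh")` works too, which is exactly where a file name typo can bite):

```hcl
resource "aws_instance" "web" {
  ami                    = data.aws_ami.al2023.id
  instance_type          = "t2.micro"
  key_name               = aws_key_pair.deployer.key_name
  vpc_security_group_ids = [aws_security_group.web_sg.id]

  # Runs once at first boot: install Nginx, start it, serve a simple page
  user_data = <<-EOF
    #!/bin/bash
    dnf install -y nginx
    systemctl enable --now nginx
    echo "<h1>Deployed with Terraform</h1>" > /usr/share/nginx/html/index.html
  EOF

  tags = {
    Name = "terraform-web-server"
  }
}
```

Because user data only runs on first boot, changing the script later requires recreating (or rebooting with replacement of) the instance for it to take effect.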

After Terraform finished, I copied the public IP and opened it in my browser. And finally it worked.

Web server successfully deployed and accessible via public IP
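To avoid hunting for the public IP in the AWS console, an output block can print it right after apply (the output name here is just illustrative):

```hcl
# Prints the instance's public IP at the end of `terraform apply`
output "public_ip" {
  value = aws_instance.web.public_ip
}
```

After apply, `terraform output public_ip` shows the address to open in the browser.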

I also connected to the instance using SSH. That was another moment where everything felt real not just code, but an actual running server.

Successful SSH connection to the EC2 instance using the private key

This project was not smooth at all. Small things caused big problems: file naming issues, SSH key confusion, and misunderstanding parts of the documentation. But fixing those issues is where the real learning happened. If you're just starting out and things feel confusing, you're not alone. You're just in the stage where things are starting to make sense. And trust me, that's where real learning begins.

One issue that really stood out was when Terraform failed because it couldn’t find my user data script.

It turned out to be a simple file naming mistake, but it completely stopped the deployment.

Terraform error caused by incorrect file name when referencing user data script

Fixing that helped me understand how strict Terraform is with file paths and references.
It also reminded me that sometimes the smallest mistakes can cause the biggest problems.

To avoid unnecessary AWS charges, run:

```shell
terraform destroy
```

to clean up all resources after learning.
