Udoh Deborah
Deploying Your First Server with Terraform: A Beginner’s Guide
Today, I stopped clicking around the console and started writing code.

Welcome to Day 3 of my journey with the #30DayTerraformChallenge. Up until now, everything was conceptual. But today, the rubber hit the road: I successfully deployed a real, live virtual server (EC2) on AWS using only Terraform.

It felt like magic. But like any good magic trick, there's logic, syntax, and a few failed attempts behind the scenes. In this post, I will walk you through exactly how I did it, what the code looks like, what the basic commands do, and the errors that almost stopped me.

What We Are Building Today
We aren't just creating a blank virtual machine. The goal of this task is to spin up a basic, accessible Web Server.

To do that, our configuration file needs to describe three key components working together:

The Environment (Provider): Tell Terraform to talk to AWS and which "room" (region) to work in.

The Firewall (Security Group): Tell AWS to open Port 80, allowing regular web traffic (HTTP) from the internet to reach our server.

The Server (EC2 Instance): This is the virtual machine itself. We will select an eligible "Free Tier" instance type (t2.micro) and add a small User Data script to automatically install Apache and launch a "Hello World" webpage.
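In Terraform terms, each of those components becomes its own block in a .tf file. As a bare-bones skeleton (the block labels here are placeholders, not the final configuration, which appears in full in the next section):

```hcl
# Skeleton only: the real arguments are filled in later
provider "aws" {
  region = "us-east-1" # the "room" we work in
}

resource "aws_security_group" "example_sg" {
  # ingress rule opening Port 80 goes here
}

resource "aws_instance" "example_server" {
  # AMI, instance type, and User Data script go here
}
```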

The Architecture (Visualized)
This diagram shows exactly how my code relates to real AWS resources and how a user (like you!) connects to the web page over the internet.

[IMAGE PLACEHOLDER: Insert architecture diagram here]

Internet Gateway: This is the entrance. Web traffic flows from the User's browser to our network.

Security Group: This is the stateful firewall perimeter surrounding the server. My code opens an "Ingress" path for Port 80 (HTTP) to allow incoming requests.

EC2 Instance: The core virtual server running Amazon Linux 2023. This is what we are deploying.

Deconstructing the Code

This is the entire content of my main.tf file. When writing this, I avoided copy-pasting and typed it all out manually, which is crucial for building muscle memory.

```hcl
# 1. THE PROVIDER BLOCK
# This tells Terraform that our "cloud platform of choice" is AWS and we want
# to deploy in N. Virginia (us-east-1). Think of this as the initial
# connection "handshake."

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# 2. THE AMI DATA SOURCE
# I learned early on that AMI IDs from tutorials expire. This block
# automatically searches AWS to find the current, compatible, and
# FREE-TIER ELIGIBLE image for my region.

data "aws_ami" "latest_amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-202*-x86_64"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# 3. THE SECURITY GROUP RESOURCE
# This is our firewall. It defines what can come "in" (ingress) and what can
# go "out" (egress).

resource "aws_security_group" "web_sg" {
  name        = "day3-web-sg"
  description = "Allow HTTP inbound traffic on Port 80"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Open to ALL IP addresses
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1" # Represents ALL protocols (TCP, UDP, etc.)
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# 4. THE EC2 INSTANCE RESOURCE
# This is the star of the show. Note how we reference the outputs of other
# blocks rather than hardcoding.

resource "aws_instance" "web_server" {
  ami                    = data.aws_ami.latest_amazon_linux.id # Links to the dynamic search result
  instance_type          = "t2.micro"                          # Falls under the AWS Free Tier
  vpc_security_group_ids = [aws_security_group.web_sg.id]      # Links the firewall to the server

  # User Data script: runs only once, on first boot
  user_data = <<-EOF
              #!/bin/bash
              dnf update -y
              dnf install -y httpd
              systemctl start httpd
              systemctl enable httpd
              echo "<h1>Terraform Day 3: Server Live!</h1>" > /var/www/html/index.html
              EOF

  tags = {
    Name = "Terraform-Day3-Server"
  }
}

# 5. THE PUBLIC IP OUTPUT
# This prints the final Public IP address in my terminal so I don't have to
# log into the AWS Console to find it.

output "public_ip" {
  value = aws_instance.web_server.public_ip
}
```
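Outputs aren't limited to a single value, by the way. As an extra I didn't include in my file (a sketch, not part of today's build), you could also surface the public DNS name that AWS assigns alongside the IP:

```hcl
# Hypothetical extra output: the auto-assigned public DNS name
output "public_dns" {
  value = aws_instance.web_server.public_dns
}
```

After the next apply, running terraform output public_dns would print it without opening the console.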

Running the Workflow

Once my code was ready, I ran the foundational Terraform commands in my VS Code terminal.

1. (The Handshake)

```
terraform init
```

This initializes the directory. It reads your provider block, goes online, downloads the necessary AWS plugin (the provider package), and creates the .terraform directory along with a .terraform.lock.hcl lock file.
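init is also the step where Terraform enforces version pins. Besides the provider version constraint, you can pin the Terraform CLI itself; a common addition (not in my file today, shown as a sketch) looks like:

```hcl
terraform {
  required_version = ">= 1.5.0" # init fails fast if the installed CLI is older
}
```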

[IMAGE PLACEHOLDER: Insert screenshot of terraform init output here]

2. (The Dry Run)

```
terraform plan
```

This is the most critical step. It compares your desired state (your code) with the current state (an empty AWS account) and tells you exactly what it will do. This is your insurance policy against accidental deletions. My plan confirmed: "Plan: 2 to add, 0 to change, 0 to destroy."

3. (The Provisioning)

```
terraform apply
```

This executes the plan. After reviewing the proposed changes, I typed yes and hit Enter. For about 30 seconds, I watched my terminal spin, creating the security group and then the server.

[IMAGE PLACEHOLDER: Insert screenshot of "Apply complete!" with Public IP output here]

Finally, the glorious green text: "Apply complete! Resources: 2 added, 0 changed, 0 destroyed."

Success and Challenges (The Part You Came For)
Confirmation:
I copied the public_ip output from my terminal ([your IP here, e.g., 3.84.152.1]) and pasted it into my browser. Success! The page loaded instantly with my custom message.

[IMAGE PLACEHOLDER: Insert screenshot of the webpage loading in your browser]

Challenges:
It wasn't all smooth sailing. I encountered three major errors that initially stopped me.

Permission Error: My initial apply failed with UnauthorizedOperation.

Fix: I forgot that my newly created IAM user had zero permissions. I went to the AWS IAM console and attached the AdministratorAccess policy directly to the user so the code could create resources (fine for a learning sandbox, though far broader than you would grant in production).

Environment Pathing Error: When I first tried to run terraform init, the terminal reported that the command wasn't recognized.

Fix: My initial installation script had a small error, so the terraform executable wasn't reachable. I downloaded a pre-configured, corrected zip file to my desktop and unzipped it, then used the PowerShell command Set-Location to move into the terraform-day3 folder, where the executable was correctly on the path.

AMI Compatibility Error: My deployment repeatedly failed with the message InvalidParameterCombination. It stated the instance type was not eligible for Free Tier.

Fix: I was using an AMI ID from a months-old tutorial. That specific image was deprecated and no longer supported the t2.micro. I resolved this definitively by adding the data "aws_ami" block to dynamically fetch the latest compatible Amazon Linux AMI. Lesson learned: Never hardcode AMI IDs.
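For anyone curious, an alternative to the data "aws_ami" search is reading the public SSM parameter that AWS keeps pointed at the latest Amazon Linux 2023 image. A sketch (assuming the published parameter path for the default x86_64 kernel image):

```hcl
# Alternative: resolve the latest AL2023 AMI via AWS's public SSM parameter
data "aws_ssm_parameter" "al2023" {
  name = "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64"
}

# Then reference it as: ami = data.aws_ssm_parameter.al2023.value
```

Either way, the lesson stands: let Terraform look up the AMI at plan time instead of pasting an ID from a tutorial.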

The Cleanup

To avoid any unexpected AWS charges, I ran the final command:

```
terraform destroy
```

After reviewing the plan (which confirmed "2 to destroy"), I typed yes. Within a minute, Terraform wiped the slate clean.

Conclusion and Day 4 Preview

Infrastructure as Code feels like a superpower. Going from a blank text file to a live server without ever touching the console interface is incredibly powerful for consistency, speed, and recovery.

Stay tuned for Day 4, where we will move past single resources and learn how to manage state files and collaborate effectively.
