By Vivian Chiamaka Okose
Published on dev.to | Hashnode | Medium
Tags: #terraform #azure #devops #iac #beginners #cloud
I come from a background in biochemistry and biotechnology. A year ago, "infrastructure" to me meant lab equipment and sample storage. Today, I just provisioned a fully networked Azure virtual machine using nothing but code -- and destroyed it just as cleanly when I was done.
This is the story of how that happened, including every error I hit along the way.
What Is Terraform and Why Does It Matter?
Before I get into the how, let me explain the what.
Terraform is an Infrastructure as Code (IaC) tool built by HashiCorp. Instead of clicking around in the Azure portal to create resources, you write a configuration file that describes what you want your infrastructure to look like, and Terraform figures out how to make it happen. Every resource, every network setting, every dependency -- all defined in code, all version-controllable, all reproducible.
This matters because clicking around in a cloud console is not scalable. If you need to spin up the same environment ten times across ten different projects, you cannot manually recreate it each time without introducing inconsistencies. With Terraform, you write it once and deploy it as many times as you need.
That is the power of Infrastructure as Code.
What I Built
For this assignment, I provisioned a complete virtual machine setup on Microsoft Azure using Terraform. Here is what the final infrastructure looked like:
- Resource Group -- a logical container for all related resources
- Virtual Network (VNet) -- the private network space (10.0.0.0/16)
- Subnet -- a segment carved out of the VNet (10.0.1.0/24)
- Public IP -- a static, externally reachable IP address
- Network Interface Card (NIC) -- the bridge connecting the VM to the network
- Virtual Machine -- Ubuntu 20.04 LTS Gen2 running on Standard_D2ads_v7
Six resources, all provisioned from a single main.tf file.
Setting Up the Environment
I run WSL2 Ubuntu on Windows, so the first step was installing Terraform and the Azure CLI directly in my WSL terminal.
Installing Terraform:
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt-get install terraform -y
terraform -v
# Terraform v1.14.7
Installing and authenticating Azure CLI:
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az login
az account show
After running az login, I authenticated through the browser-based flow and confirmed my subscription was active. With that done, Terraform had everything it needed to talk to Azure.
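If your account can see more than one subscription, it is worth pinning the one Terraform should deploy into before running anything. A quick sketch (the subscription ID below is a placeholder, not mine):

```shell
# List every subscription visible to the logged-in account
az account list --output table

# Pin the subscription Terraform should use (placeholder ID)
az account set --subscription "00000000-0000-0000-0000-000000000000"

# Confirm which subscription is now active
az account show --query name --output tsv
```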
The main.tf File
Here is the complete configuration I ended up with after troubleshooting (more on that shortly):
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {
    resource_group {
      prevent_deletion_if_contains_resources = false
    }
  }
}

resource "azurerm_resource_group" "rg" {
  name     = "terraform-azure-vm-rg"
  location = "UK South"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "terraform-vnet"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "subnet" {
  name                 = "terraform-subnet"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_public_ip" "public_ip" {
  name                = "terraform-public-ip"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_network_interface" "nic" {
  name                = "terraform-nic"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.public_ip.id
  }
}

resource "azurerm_virtual_machine" "vm" {
  name                  = "terraform-vm"
  location              = azurerm_resource_group.rg.location
  resource_group_name   = azurerm_resource_group.rg.name
  network_interface_ids = [azurerm_network_interface.nic.id]
  vm_size               = "Standard_D2ads_v7"

  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = true

  os_profile {
    computer_name  = "terraform-vm"
    admin_username = "azureuser"
    admin_password = "P@ssw0rd1234!"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  storage_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-focal"
    sku       = "20_04-lts-gen2"
    version   = "latest"
  }

  storage_os_disk {
    name              = "terraform-os-disk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Standard_LRS"
  }
}

output "public_ip_address" {
  description = "The public IP address of the VM"
  value       = azurerm_public_ip.public_ip.ip_address
}
Notice how resources reference each other using dot notation -- azurerm_resource_group.rg.location instead of hardcoding "UK South" everywhere. This is not just clean code; it means if you change the location in one place, it updates throughout the entire configuration automatically.
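Taking that idea one step further, a value like the location can be pulled into a variable so it lives in exactly one place. A minimal sketch (the variable name is my own, not from the original config):

```hcl
variable "location" {
  description = "Azure region for all resources"
  type        = string
  default     = "UK South"
}

resource "azurerm_resource_group" "rg" {
  name     = "terraform-azure-vm-rg"
  location = var.location
}
```

Every other resource then inherits the region through azurerm_resource_group.rg.location, and you can override it at deploy time with terraform apply -var="location=West Europe".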
The Deployment Flow
terraform init # Download the AzureRM provider plugin
terraform plan # Preview what will be created (dry run)
terraform apply # Actually deploy to Azure
The terraform plan output is one of my favourite things about this tool. Before touching a single resource, it shows you exactly what it intends to create, change, or destroy -- marked with +, ~, or -. You can review and catch mistakes before they cost you money or cause an outage.
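A habit worth picking up early: save the plan to a file and apply exactly that file, so nothing can change between the review and the deployment. These flags are standard Terraform CLI:

```shell
# Write the execution plan to a file instead of just printing it
terraform plan -out=tfplan

# Inspect the saved plan in human-readable form
terraform show tfplan

# Apply exactly the plan you reviewed -- no re-planning in between
terraform apply tfplan
```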
The 5 Errors That Actually Taught Me DevOps
Here is where things got real. I did not get a clean deployment on the first try. I got five errors, and each one taught me something important.
Error 1: Basic SKU Public IP Quota
IPv4BasicSkuPublicIpCountLimitReached: Cannot create more than 0 IPv4
Basic SKU public IP addresses for this subscription in this region.
What happened: Azure free-tier subscriptions have a quota of zero Basic SKU public IPs. The fix was adding sku = "Standard" to the public IP resource. One line. Lesson: always check your subscription quotas before deploying.
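The quota lesson generalises: the CLI can report a region's networking usage against its limits before you deploy, so the quota error never has to surprise you. A sketch:

```shell
# Show networking resource usage vs. quota limits for the region
az network list-usages --location uksouth --output table
```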
Error 2: VM Size Capacity Restriction
SkuNotAvailable: The requested VM size Standard_B1s is currently not
available in location 'eastus'.
What happened: The B-series VMs are restricted on free-tier subscriptions. Rather than guessing another size, I queried Azure directly:
az vm list-skus --location uksouth --resource-type virtualMachines --output table | grep "None"
This returned every VM size with no restrictions on my subscription, and I picked Standard_D2ads_v7 -- a small, affordable D-series size with an AMD processor. Always let the platform tell you what is available rather than guessing.
Error 3: Hypervisor Generation Mismatch
BadRequest: The selected VM size 'Standard_D2ads_v7' cannot boot
Hypervisor Generation '1'.
What happened: Modern VM sizes like D2ads_v7 require Generation 2 images, but Ubuntu 18.04 LTS is a Generation 1 image. Mixing them causes a boot failure at the hypervisor level. The fix was switching to Ubuntu 20.04 LTS Gen2 -- a newer, more secure image that is Gen2 compatible.
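This mismatch can also be caught up front: the capabilities list that az vm list-skus returns includes a HyperVGenerations entry per size. A hedged sketch of the query (the --size flag filters by name prefix):

```shell
# List which hypervisor generations each matching VM size supports
az vm list-skus --location uksouth --size Standard_D2 \
  --query "[].{Size:name, Generations:capabilities[?name=='HyperVGenerations'].value | [0]}" \
  --output table
```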
Error 4: Platform Image Not Found
PlatformImageNotFound: The platform image
'Canonical:UbuntuServer:20_04-lts-gen2:latest' is not available.
What happened: Azure's image naming is inconsistent across regions. The offer name UbuntuServer is a legacy name that does not include Gen2 images in UK South. I queried the available images directly:
az vm image list --location uksouth --publisher Canonical --offer 0001-com-ubuntu-server-focal --all --output table | grep "gen2"
The correct offer was 0001-com-ubuntu-server-focal with SKU 20_04-lts-gen2. Never assume image names -- always verify for your region.
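Once you have a candidate image, you can confirm that the full URN actually resolves in your target region before putting it in Terraform:

```shell
# Verify the exact image URN exists in the target region
az vm image show \
  --location uksouth \
  --urn Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest
```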
Error 5: OS Disk Blocking Destroy
Error: deleting Resource Group "terraform-azure-vm-rg": the Resource Group
still contains Resources.
/Microsoft.Compute/disks/terraform-os-disk
What happened: When Azure creates a VM, it automatically provisions an OS disk as a child resource. Since my Terraform configuration did not explicitly manage that disk, it was not tracked in Terraform state -- so Terraform refused to delete the resource group containing it. The fix was adding two flags directly to the VM resource:
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
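Worth noting: azurerm_virtual_machine is the legacy resource in the 3.x provider. The newer azurerm_linux_virtual_machine manages the OS disk as part of the VM and removes it on destroy by default, so this whole class of error goes away. A rough, untested sketch of the equivalent definition, with field names as the 3.x provider documents them:

```hcl
resource "azurerm_linux_virtual_machine" "vm" {
  name                  = "terraform-vm"
  location              = azurerm_resource_group.rg.location
  resource_group_name   = azurerm_resource_group.rg.name
  network_interface_ids = [azurerm_network_interface.nic.id]
  size                  = "Standard_D2ads_v7"

  admin_username                  = "azureuser"
  admin_password                  = "P@ssw0rd1234!"
  disable_password_authentication = false

  # OS disk is owned by the VM resource and deleted with it
  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-focal"
    sku       = "20_04-lts-gen2"
    version   = "latest"
  }
}
```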
The Successful Deployment
After all five fixes, the final terraform apply ran cleanly:
Apply complete! Resources: 6 added, 0 changed, 0 destroyed.
Outputs:
public_ip_address = "51.11.128.165"
Verification via Azure CLI:
az vm list -d --query "[].{Name:name, Status:powerState}" --output table
Name Status
------------ ----------
terraform-vm VM running
And then a clean destroy:
terraform destroy
# Destroy complete! Resources: 6 destroyed.
Key Concepts I Now Understand Deeply
Declarative vs Imperative: Terraform is declarative -- you describe the desired end state, not the steps to get there. Terraform computes the steps automatically based on resource dependencies.
Providers: Plugins that teach Terraform how to communicate with a specific cloud platform. The azurerm provider is what lets Terraform understand Azure-specific resources.
State: Terraform maintains a state file that maps your configuration to real-world resources. This is how it knows what exists, what needs to change, and what to destroy.
Resource References: Using azurerm_resource_group.rg.location instead of hardcoded values keeps configurations flexible and consistent across every resource.
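The state file mentioned above is not a black box -- you can inspect it directly with standard Terraform CLI commands:

```shell
# List every resource Terraform is tracking in state
terraform state list

# Show the full recorded attributes of one tracked resource
terraform state show azurerm_virtual_machine.vm
```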
What Is Next
This was Assignment 1 of a five-assignment Terraform series. Next up: deploying an EC2 instance on AWS inside a custom VPC with public and private subnets. The networking complexity goes up significantly, and I am here for it.
If you are just starting your DevOps journey, my biggest takeaway from this exercise is this: do not fear the errors. Every error message is documentation. Read it carefully, query the platform for what it actually supports, and fix one thing at a time. That systematic approach is what DevOps engineering is really about.
Follow along as I document this full Terraform journey. I write about DevOps, cloud infrastructure, and what it actually looks like to transition into tech from a completely different background.
GitHub: vivianokose