Introduction
This project demonstrates how to design and deploy a secure multi-virtual machine environment on Microsoft Azure using Terraform Infrastructure as Code (IaC) principles.
The infrastructure provisions two Linux virtual machines within the same virtual network and subnet, enabling private communication between them without manual configuration inside the operating systems.
By using reusable Terraform modules for networking and compute resources, the deployment follows real-world DevOps practices such as automation, modular design, resource dependency management, and cloud governance through tagging.
The project validates internal connectivity at the infrastructure level using Azure networking diagnostics, proving that both virtual machines can communicate securely over private IP addresses.
Project Objective
The objective of this project is to build a production-style cloud infrastructure that demonstrates practical skills in cloud automation, networking architecture, and Infrastructure as Code.
Specifically, the project aims to:
- Automate Azure resource provisioning using Terraform
- Deploy and configure two Linux virtual machines
- Design a virtual network and subnet for private communication
- Implement network security rules allowing internal ICMP traffic
- Validate VM-to-VM connectivity without logging into the machines
- Apply modular Terraform design for reusable infrastructure
- Demonstrate real-world DevOps and Cloud Engineering practices
Architecture You’ll Build
The two VMs can ping each other because:
- Both NICs sit in the same subnet (10.0.1.0/24)
- The NSG allows internal (VirtualNetwork-to-VirtualNetwork) ICMP traffic
- All communication stays on private IP addresses
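Sketched out, the layout looks like this (resource names match the Terraform configuration built in the steps below; the exact private IPs are assigned dynamically):

```
rg-2vm-network (Resource Group)
└── vnet-2vm  10.0.0.0/16
    └── subnet-2vm  10.0.1.0/24   (NSG nsg-2vm: allow internal ICMP + SSH)
        ├── vm-1  (private IP, e.g. 10.0.1.4)
        └── vm-2  (private IP, e.g. 10.0.1.5)
```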
Step 1 Create the Terraform root project folder
Create the project directory, move into it, and create the provider.tf file inside it:
mkdir terraform-azure-2vm-network
cd terraform-azure-2vm-network
New-Item provider.tf (PowerShell) or touch provider.tf (Linux/macOS)
Copy this configuration into the provider.tf file:
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.100"
    }
  }
}

provider "azurerm" {
  features {}
}
Create the terraform-azure-2vm-network/variables.tf file:
New-Item variables.tf (PowerShell) or touch variables.tf (Linux/macOS)
Copy this configuration into the variables.tf file:
variable "location" {
  default = "East US"
}

variable "rg_name" {
  default = "rg-2vm-network"
}

variable "admin_username" {
  default = "azureuser"
}

variable "admin_password" {
  sensitive = true
}

variable "vm_size" {
  default = "Standard_B1s"
}
Create the terraform-azure-2vm-network/terraform.tfvars file:
New-Item terraform.tfvars (PowerShell) or touch terraform.tfvars (Linux/macOS)
Copy this configuration into the terraform.tfvars file (use your own strong password, and never commit this file to version control):
admin_password = "P@ssword12345!"
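If you would rather not store the password on disk at all, Terraform can read it from an environment variable instead of terraform.tfvars (a minimal sketch; the password value here is just the same placeholder as above):

```shell
# Terraform maps any TF_VAR_<name> environment variable to the input
# variable <name>, so this replaces the terraform.tfvars entry entirely.
export TF_VAR_admin_password='P@ssword12345!'
```

With this set, `terraform plan` and `terraform apply` pick up the password automatically and nothing sensitive lands in your repository.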
Create the terraform-azure-2vm-network/main.tf file:
New-Item main.tf (PowerShell) or touch main.tf (Linux/macOS)
Copy this configuration into the main.tf file:
# Resource Group
resource "azurerm_resource_group" "rg" {
  name     = var.rg_name
  location = var.location
}

# Network Module
module "network" {
  source              = "./modules/network"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

# VM 1
module "vm1" {
  source              = "./modules/linux_vm"
  vm_name             = "vm-1"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  subnet_id           = module.network.subnet_id
  nsg_id              = module.network.nsg_id
  admin_username      = var.admin_username
  admin_password      = var.admin_password
  vm_size             = var.vm_size
}

# VM 2
module "vm2" {
  source              = "./modules/linux_vm"
  vm_name             = "vm-2"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  subnet_id           = module.network.subnet_id
  nsg_id              = module.network.nsg_id
  admin_username      = var.admin_username
  admin_password      = var.admin_password
  vm_size             = var.vm_size
}
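The introduction mentions cloud governance through tagging, but the configuration above does not create any tags yet. A minimal way to add them (a sketch; the tag names and values are illustrative) is to extend the resource group block with a tags argument:

```hcl
# Extends the "rg" resource group block shown above. Tags do not propagate
# automatically: to tag the VMs as well, a tags variable would have to be
# passed into each module and applied inside it.
resource "azurerm_resource_group" "rg" {
  name     = var.rg_name
  location = var.location

  tags = {
    environment = "demo"
    project     = "terraform-azure-2vm-network"
    managed_by  = "terraform"
  }
}
```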
Create the terraform-azure-2vm-network/outputs.tf file:
New-Item outputs.tf (PowerShell) or touch outputs.tf (Linux/macOS)
Copy this configuration into the outputs.tf file:
output "vm1_private_ip" {
  value = module.vm1.private_ip
}

output "vm2_private_ip" {
  value = module.vm2.private_ip
}

output "ping_instruction" {
  value = "Both VMs are in the same subnet and can ping each other over their private IPs"
}
Step 2 Create the Terraform network module
Create the modules parent folder, a network sub-folder inside it, and the variables.tf file inside the network folder. Use lowercase folder names so they match the module source paths in main.tf:
mkdir modules
cd modules
mkdir network
cd network
New-Item variables.tf (PowerShell) or touch variables.tf (Linux/macOS)
Copy this configuration into modules/network/variables.tf:
variable "location" {}
variable "resource_group_name" {}
Create the modules/network/main.tf file:
New-Item main.tf (PowerShell) or touch main.tf (Linux/macOS)
Copy this configuration into the main.tf file:
resource "azurerm_virtual_network" "vnet" {
  name                = "vnet-2vm"
  address_space       = ["10.0.0.0/16"]
  location            = var.location
  resource_group_name = var.resource_group_name
}

resource "azurerm_subnet" "subnet" {
  name                 = "subnet-2vm"
  resource_group_name  = var.resource_group_name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}
# Network Security Group allowing internal ping
resource "azurerm_network_security_group" "nsg" {
  name                = "nsg-2vm"
  location            = var.location
  resource_group_name = var.resource_group_name
}

resource "azurerm_network_security_rule" "allow_icmp" {
  name                        = "Allow-Internal-ICMP"
  priority                    = 100
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Icmp"
  source_port_range           = "*"
  destination_port_range      = "*"
  source_address_prefix       = "VirtualNetwork"
  destination_address_prefix  = "VirtualNetwork"
  resource_group_name         = var.resource_group_name
  network_security_group_name = azurerm_network_security_group.nsg.name
}

resource "azurerm_network_security_rule" "allow_ssh" {
  name                        = "allow-ssh"
  priority                    = 200
  direction                   = "Inbound"
  access                      = "Allow"
  protocol                    = "Tcp"
  source_port_range           = "*"
  destination_port_range      = "22"
  source_address_prefix       = "*"
  destination_address_prefix  = "*"
  resource_group_name         = var.resource_group_name
  network_security_group_name = azurerm_network_security_group.nsg.name
}
Create the modules/network/outputs.tf file:
New-Item outputs.tf (PowerShell) or touch outputs.tf (Linux/macOS)
Copy this configuration into the outputs.tf file:
output "subnet_id" {
  value = azurerm_subnet.subnet.id
}

output "nsg_id" {
  value = azurerm_network_security_group.nsg.id
}
Step 3 Create the Linux VM module
Create the linux_vm sub-folder inside the modules folder and create variables.tf inside it. You are still inside modules/network from the previous step, so go back up one level first:
cd ..
mkdir linux_vm
cd linux_vm
New-Item variables.tf (PowerShell) or touch variables.tf (Linux/macOS)
Copy this configuration into modules/linux_vm/variables.tf:
variable "vm_name" {}
variable "location" {}
variable "resource_group_name" {}
variable "subnet_id" {}
variable "nsg_id" {}
variable "admin_username" {}
variable "admin_password" {}
variable "vm_size" {}
Create the modules/linux_vm/main.tf file and copy this configuration into it:
New-Item main.tf (PowerShell) or touch main.tf (Linux/macOS)
resource "azurerm_network_interface" "nic" {
  name                = "${var.vm_name}-nic"
  location            = var.location
  resource_group_name = var.resource_group_name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = var.subnet_id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_network_interface_security_group_association" "assoc" {
  network_interface_id      = azurerm_network_interface.nic.id
  network_security_group_id = var.nsg_id
}

resource "azurerm_linux_virtual_machine" "vm" {
  name                            = var.vm_name
  resource_group_name             = var.resource_group_name
  location                        = var.location
  size                            = var.vm_size
  admin_username                  = var.admin_username
  admin_password                  = var.admin_password
  disable_password_authentication = false

  network_interface_ids = [
    azurerm_network_interface.nic.id
  ]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}
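Password authentication keeps this walkthrough simple, but azurerm_linux_virtual_machine also supports SSH key authentication, which is the more common production setup. A sketch of the relevant changes (assumes a key pair already exists at ~/.ssh/id_rsa.pub; the rest of the resource stays as above):

```hcl
resource "azurerm_linux_virtual_machine" "vm" {
  # ... same name/location/size/NIC/os_disk/image settings as above ...
  admin_username                  = var.admin_username
  disable_password_authentication = true

  admin_ssh_key {
    username   = var.admin_username
    # pathexpand() resolves "~" because Terraform's file() does not
    public_key = file(pathexpand("~/.ssh/id_rsa.pub"))
  }
}
```

With this in place, the admin_password variable and its terraform.tfvars entry can be dropped entirely.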
Create the modules/linux_vm/outputs.tf file and copy this configuration into it:
New-Item outputs.tf (PowerShell) or touch outputs.tf (Linux/macOS)
output "private_ip" {
  value = azurerm_network_interface.nic.private_ip_address
}
Step 4 Run Terraform
From the project root, sign in to Azure, then initialize, plan, and apply:
az login
terraform init
terraform plan
terraform apply
Terraform will prompt for confirmation:
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
  Enter a value: yes
Step 5 Ping vm-2 from vm-1 to confirm communication
First, read the private IPs from the Terraform outputs:
terraform output vm1_private_ip
terraform output vm2_private_ip
Then ping vm-2's private IP from vm-1 (10.0.1.5 here; substitute the address from your own output):
az vm run-command invoke --resource-group rg-2vm-network --name vm-1 --command-id RunShellScript --scripts "ping -c 4 10.0.1.5"

Ping vm-1 from vm-2 to confirm communication in the other direction (again, use vm-1's private IP from your output, e.g. 10.0.1.4):
az vm run-command invoke --resource-group rg-2vm-network --name vm-2 --command-id RunShellScript --scripts "ping -c 4 10.0.1.4"

Step 6 Clean up resources
When you are finished, tear everything down:
terraform destroy
Conclusion
This project successfully demonstrates the practical implementation of Infrastructure as Code using Terraform with the AzureRM provider on Microsoft Azure. Two Linux virtual machines were provisioned within the same virtual network and subnet, configured with appropriate network security rules, and validated for secure internal communication using private IP addressing.
By enabling controlled ICMP traffic through a Network Security Group and verifying connectivity via Azure Run Command, the deployment confirms that both virtual machines can communicate without relying on public exposure. This reflects real-world cloud architecture practices where internal services interact securely within isolated network boundaries.
Beyond resource provisioning, this project showcases an end-to-end cloud engineering workflow: a modular Terraform structure, reusable infrastructure components, network configuration, security rule management, and operational validation through command-line automation. It highlights the ability to design, deploy, and verify scalable cloud infrastructure using modern DevOps practices.