
Abraham Naiborhu

Creating a VM from Network Module Outputs

In the previous lab, I created a reusable Terraform module for Google Cloud networking.

That module created:

  • a custom VPC network
  • multiple subnets
  • firewall rules
  • outputs for network, subnet, and firewall information

That was already an improvement from writing every resource directly in the root configuration.

However, the network module was still isolated. It created network resources, but nothing else was consuming those outputs yet.

So in this lab, I wanted to take the next logical step:

Create a Compute Engine VM that uses the subnet output from the network module.

At first, I considered creating the VM directly in the root module. But then I changed my approach and added a second module:

modules/gcp-vm

So now the lab has two child modules:

modules/gcp-network
modules/gcp-vm

The purpose of this lab is to understand module composition.

The main idea is:

The network module creates the network.
The network module exposes subnet outputs.
The root module passes the selected subnet output into the VM module.
The VM module creates a VM inside that subnet.

What This Lab Builds

This lab creates:

  • one custom VPC network
  • two subnets
  • two firewall rules
  • one Compute Engine VM
  • a startup script that installs Nginx
  • remote state in Google Cloud Storage
  • outputs from both the network module and VM module

The final result from Terraform was:

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

The six resources were:

  1. VPC network
  2. app subnet
  3. db subnet
  4. IAP SSH firewall rule
  5. internal firewall rule
  6. Compute Engine VM

Folder Structure

The final folder structure is:

06-gcp-vm-and-network-module/
├── backend.tf
├── main.tf
├── modules
│   ├── gcp-network
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   └── gcp-vm
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── outputs.tf
├── README.md
├── startup.sh
├── terraform.tfvars
├── terraform.tfvars.example
└── variables.tf

There are now two child modules:

Module        Responsibility
gcp-network   Creates the VPC, subnets, and firewall rules
gcp-vm        Creates the Compute Engine VM

The root module is responsible for wiring them together.

The Main Concept: Module Composition

In the previous lab, the network module created subnets and exposed them through outputs.

The output looked conceptually like this:

output "subnets" {
  value = {
    for subnet_key, subnet in google_compute_subnetwork.subnets :
    subnet_key => {
      name       = subnet.name
      id         = subnet.id
      region     = subnet.region
      cidr_range = subnet.ip_cidr_range
      self_link  = subnet.self_link
    }
  }
}

Because of this output, the root module can access:

module.network.subnets["app"].self_link

In this lab, that value is passed into the VM module.

The important pattern is:

network module output -> root module -> VM module input

This was the main learning point.
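On the receiving side, the VM module only needs a plain string variable for the self-link; it does not know or care that the value came from another module. A minimal sketch of what modules/gcp-vm/variables.tf could look like (the variable names match the module call shown later, but the types and descriptions are my own assumptions):

```hcl
variable "environment" {
  type        = string
  description = "Environment prefix, e.g. dev."
}

variable "vm_name" {
  type        = string
  description = "Base name of the VM (prefixed with the environment)."
}

variable "machine_type" {
  type        = string
  description = "Compute Engine machine type, e.g. e2-micro."
}

variable "zone" {
  type        = string
  description = "Zone where the VM is created."
}

variable "tags" {
  type        = list(string)
  description = "Network tags applied to the VM."
  default     = []
}

variable "subnetwork_self_link" {
  type        = string
  description = "Self-link of the subnet; passed in from the network module output."
}

variable "startup_script_path" {
  type        = string
  description = "Path to the startup script, read with file()."
}
```

Because subnetwork_self_link is just a string, the VM module stays decoupled: it can be reused with any subnet, not only the ones this network module creates.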

Remote State

This lab still uses Google Cloud Storage as the Terraform backend.

Example backend.tf:

terraform {
  backend "gcs" {
    bucket = "terraform-gcp-learning-lab-terraform-state"
    prefix = "terraform-gcp-learning-lab/06-gcp-vm-and-network-module"
  }
}

The state path is:

gs://terraform-gcp-learning-lab-terraform-state/terraform-gcp-learning-lab/06-gcp-vm-and-network-module/default.tfstate

Each lab has a different backend prefix so that the state files do not collide.

Root main.tf

The root main.tf configures the provider and calls both modules.

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "6.8.0"
    }
  }
}

provider "google" {
  project = var.project
  region  = var.region
}

module "network" {
  source = "./modules/gcp-network"

  environment    = var.environment
  region         = var.region
  network_name   = var.network_name
  subnets        = var.subnets
  firewall_rules = var.firewall_rules
}

module "vm" {
  source = "./modules/gcp-vm"

  environment          = var.environment
  vm_name              = var.vm_name
  machine_type         = var.vm_machine_type
  zone                 = var.vm_zone
  tags                 = var.vm_tags
  subnetwork_self_link = module.network.subnets[var.vm_subnet_key].self_link
  startup_script_path  = "${path.module}/startup.sh"
}

The most important line is:

subnetwork_self_link = module.network.subnets[var.vm_subnet_key].self_link

This line reads the selected subnet from the network module's output and passes its self-link into the VM module.

If:

vm_subnet_key = "app"

then Terraform resolves:

module.network.subnets["app"].self_link

That is the subnet used by the VM.

Why This is Better Than Hardcoding the Subnet

Without module outputs, I could hardcode the subnet like this:

subnetwork = "dev-app-subnet"

But that approach is brittle: the string has to be kept in sync with the network module by hand, and Terraform cannot infer any ordering between the subnet and the VM.

The VM module should not guess or hardcode the subnet.

Instead, the network module creates the subnet, exposes the subnet self-link, and the root module passes that value into the VM module.

This makes the dependency clear.

The VM depends on the subnet created by the network module.

VM Module

The VM module is responsible for creating the Compute Engine instance.

Inside:

modules/gcp-vm/main.tf

the VM resource is defined like this:

resource "google_compute_instance" "app_vm" {
  name         = "${var.environment}-${var.vm_name}"
  machine_type = var.machine_type
  zone         = var.zone
  tags         = var.tags

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
      size  = 10
      type  = "pd-balanced"
    }
  }

  network_interface {
    subnetwork = var.subnetwork_self_link
  }

  metadata_startup_script = file(var.startup_script_path)
}

The VM does not receive an external IP address because there is no access_config block inside the network_interface.

That means the VM is private.

This is intentional.

For this lab, I wanted the VM to use an internal IP only.
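The VM module can also expose its own outputs so the root module can report on the instance later. A sketch of what modules/gcp-vm/outputs.tf might contain (the output names here are assumptions based on the values that show up in the lab summary, not confirmed from the repo):

```hcl
output "name" {
  description = "Full name of the created VM."
  value       = google_compute_instance.app_vm.name
}

output "internal_ip" {
  description = "Internal IP assigned from the subnet range."
  value       = google_compute_instance.app_vm.network_interface[0].network_ip
}

output "subnetwork_self_link" {
  description = "Self-link of the subnet the VM is attached to."
  value       = google_compute_instance.app_vm.network_interface[0].subnetwork
}

output "zone" {
  description = "Zone the VM runs in."
  value       = google_compute_instance.app_vm.zone
}
```

This completes the composition loop: the network module's outputs feed the VM module's inputs, and the VM module's outputs feed the root module's reporting.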

Startup Script

The VM uses a startup script to install Nginx.

File:

startup.sh
#!/bin/bash
set -euo pipefail

apt-get update -y
apt-get install -y nginx

cat > /var/www/html/index.html <<EOF
<!doctype html>
<html>
  <head>
    <title>Terraform Module Output VM</title>
  </head>
  <body>
    <h1>Hello from Terraform</h1>
    <p>This VM was created using a subnet output from the network module.</p>
  </body>
</html>
EOF

systemctl enable nginx
systemctl restart nginx

The startup script is passed into the VM module using:

startup_script_path = "${path.module}/startup.sh"

Then the VM module reads it using:

metadata_startup_script = file(var.startup_script_path)

Firewall Rules

The network module creates two firewall rules.

IAP SSH Rule

dev-allow-iap-ssh

This allows SSH from:

35.235.240.0/20

with the target tag:

iap-ssh

The VM also has the tag:

iap-ssh

That means the IAP SSH firewall rule applies to this VM.

Internal Rule

dev-allow-internal

This allows internal traffic from:

10.50.0.0/16

This is used for internal communication between the lab subnets.
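The resource addresses that appear later in the plan (module.network.google_compute_firewall.ingress_rules["..."]) suggest the network module builds these rules with for_each over the firewall_rules map. Conceptually, the resource could look like this; treat it as a sketch, since the exact attribute handling inside the module may differ:

```hcl
resource "google_compute_firewall" "ingress_rules" {
  for_each = var.firewall_rules

  # "dev" + "allow-iap-ssh" -> "dev-allow-iap-ssh"
  name          = "${var.environment}-${each.key}"
  network       = google_compute_network.vpc_network.self_link
  description   = each.value.description
  direction     = "INGRESS"
  source_ranges = each.value.source_ranges

  # target_tags is optional; the allow-internal rule does not set it
  target_tags = try(each.value.target_tags, null)

  # One allow block per protocol/ports entry in the rule definition
  dynamic "allow" {
    for_each = each.value.allow
    content {
      protocol = allow.value.protocol
      ports    = try(allow.value.ports, null)
    }
  }
}
```

This is the same for_each-over-a-map pattern the subnets use, which is why both firewall rules and both subnets show up in the plan with map keys in their addresses.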

Variables

The input values for this lab are stored in terraform.tfvars.

project      = "terraform-gcp-learning-lab"
region       = "asia-southeast2"
environment  = "dev"
network_name = "network-module"

subnets = {
  app = {
    cidr_range = "10.50.1.0/24"
  }

  db = {
    cidr_range = "10.50.2.0/24"
  }
}

firewall_rules = {
  allow-iap-ssh = {
    description   = "Allow SSH through IAP only."
    source_ranges = ["35.235.240.0/20"]
    target_tags   = ["iap-ssh"]

    allow = [
      {
        protocol = "tcp"
        ports    = ["22"]
      }
    ]
  }

  allow-internal = {
    description   = "Allow internal traffic between lab subnets."
    source_ranges = ["10.50.0.0/16"]

    allow = [
      {
        protocol = "tcp"
        ports    = ["0-65535"]
      },
      {
        protocol = "udp"
        ports    = ["0-65535"]
      },
      {
        protocol = "icmp"
      }
    ]
  }
}

vm_name         = "module-output-vm"
vm_machine_type = "e2-micro"
vm_zone         = "asia-southeast2-a"
vm_subnet_key   = "app"
vm_tags         = ["iap-ssh"]

The important VM value is:

vm_subnet_key = "app"

This tells Terraform to place the VM in the app subnet.
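One improvement worth noting: since vm_subnet_key must match a key in var.subnets, the root module can fail fast on a typo with a validation block. Cross-variable validation like this requires Terraform 1.9 or newer, so this is an optional sketch rather than something the lab already does:

```hcl
variable "vm_subnet_key" {
  type        = string
  description = "Key in var.subnets where the VM is placed."

  validation {
    # Requires Terraform >= 1.9 (validation referencing another variable)
    condition     = contains(keys(var.subnets), var.vm_subnet_key)
    error_message = "vm_subnet_key must be one of the keys defined in var.subnets."
  }
}
```

Without this, a typo like vm_subnet_key = "apps" would only surface as a lookup error when Terraform evaluates module.network.subnets["apps"].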

Initialize Terraform

After preparing the files, I ran:

terraform init

Because this lab uses two child modules, Terraform initializes both modules.

Expected output includes:

Initializing modules...
- network in modules/gcp-network
- vm in modules/gcp-vm

Format and Validate

Because this lab has nested module folders, I used:

terraform fmt -recursive

Then I validated the configuration:

terraform validate

Expected output:

Success! The configuration is valid.

Terraform Plan

Next, I ran:

terraform plan

Terraform planned to create six resources:

Plan: 6 to add, 0 to change, 0 to destroy.

The planned resources included:

module.network.google_compute_firewall.ingress_rules["allow-iap-ssh"]
module.network.google_compute_firewall.ingress_rules["allow-internal"]
module.network.google_compute_network.vpc_network
module.network.google_compute_subnetwork.subnets["app"]
module.network.google_compute_subnetwork.subnets["db"]
module.vm.google_compute_instance.app_vm

This plan output is important because it shows the module boundaries.

Network resources are created under:

module.network

The VM is created under:

module.vm

Apply

After reviewing the plan, I applied the configuration:

terraform apply

Then I typed:

yes

Terraform completed successfully:

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.

Terraform Outputs

After apply, I checked:

terraform output

The output showed the network, subnet, firewall, and VM information.

lab_summary = {
  "environment" = "dev"
  "firewall_count" = 2
  "network_name" = "dev-network-module"
  "project" = "terraform-gcp-learning-lab"
  "region" = "asia-southeast2"
  "subnet_count" = 2
  "vm_external_ip" = "none"
  "vm_internal_ip" = "10.50.1.2"
  "vm_name" = "dev-module-output-vm"
  "vm_subnet_key" = "app"
  "vm_subnet_link" = "https://www.googleapis.com/compute/v1/projects/terraform-gcp-learning-lab/regions/asia-southeast2/subnetworks/dev-app-subnet"
  "vm_zone" = "asia-southeast2-a"
}

This confirms several important things:

Output           Meaning
vm_name          The VM was created as dev-module-output-vm
vm_internal_ip   The VM received internal IP 10.50.1.2
vm_external_ip   The VM has no external IP
vm_subnet_key    The VM was placed in the app subnet
vm_subnet_link   The VM consumed the app subnet self-link from the network module

The VM output also showed:

vm_selected_subnet_key = "app"
vm_selected_subnet_self_link = "https://www.googleapis.com/compute/v1/projects/terraform-gcp-learning-lab/regions/asia-southeast2/subnetworks/dev-app-subnet"

This proves that the VM was attached to the subnet created by the network module.
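The root outputs.tf assembles lab_summary from both modules. A sketch of how it might be built; the module output names used here (module.network.network_name, module.vm.name, module.vm.internal_ip) are assumptions inferred from the values above, not confirmed from the repo:

```hcl
output "lab_summary" {
  description = "Condensed view of what this lab created."
  value = {
    environment    = var.environment
    project        = var.project
    region         = var.region
    network_name   = module.network.network_name
    subnet_count   = length(module.network.subnets)
    firewall_count = length(module.network.firewall_rules)
    vm_name        = module.vm.name
    vm_zone        = var.vm_zone
    vm_internal_ip = module.vm.internal_ip
    vm_external_ip = "none"
    vm_subnet_key  = var.vm_subnet_key
    vm_subnet_link = module.network.subnets[var.vm_subnet_key].self_link
  }
}
```

Notice that the root module is the only place that sees both child modules, so it is the natural place to combine their outputs into one summary.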

Subnet Outputs

The subnet outputs showed:

subnets = {
  "app" = {
    "cidr_range" = "10.50.1.0/24"
    "id" = "projects/terraform-gcp-learning-lab/regions/asia-southeast2/subnetworks/dev-app-subnet"
    "name" = "dev-app-subnet"
    "region" = "asia-southeast2"
    "self_link" = "https://www.googleapis.com/compute/v1/projects/terraform-gcp-learning-lab/regions/asia-southeast2/subnetworks/dev-app-subnet"
  }
  "db" = {
    "cidr_range" = "10.50.2.0/24"
    "id" = "projects/terraform-gcp-learning-lab/regions/asia-southeast2/subnetworks/dev-db-subnet"
    "name" = "dev-db-subnet"
    "region" = "asia-southeast2"
    "self_link" = "https://www.googleapis.com/compute/v1/projects/terraform-gcp-learning-lab/regions/asia-southeast2/subnetworks/dev-db-subnet"
  }
}

This confirms that the network module created both the app and db subnets.

Firewall Outputs

The firewall outputs showed:

firewall_rules = {
  "allow-iap-ssh" = {
    "id" = "projects/terraform-gcp-learning-lab/global/firewalls/dev-allow-iap-ssh"
    "name" = "dev-allow-iap-ssh"
    "source_ranges" = toset([
      "35.235.240.0/20",
    ])
    "target_tags" = toset([
      "iap-ssh",
    ])
  }
  "allow-internal" = {
    "id" = "projects/terraform-gcp-learning-lab/global/firewalls/dev-allow-internal"
    "name" = "dev-allow-internal"
    "source_ranges" = toset([
      "10.50.0.0/16",
    ])
    "target_tags" = toset(null) /* of string */
  }
}

The IAP SSH rule targets instances with the tag:

iap-ssh

The VM also uses:

vm_tags = ["iap-ssh"]

So the firewall rule is connected to the VM through network tags.

Verify the VM

I can verify the VM using:

gcloud compute instances list --filter="name=dev-module-output-vm"

I can also inspect the VM network interface:

gcloud compute instances describe dev-module-output-vm \
  --zone=asia-southeast2-a \
  --format="value(networkInterfaces[0].subnetwork,networkInterfaces[0].networkIP,networkInterfaces[0].accessConfigs)"

The expected result is:

subnetwork: dev-app-subnet
internal IP: 10.50.1.2
external IP/accessConfigs: empty

This means the VM is private and does not have an external IP.
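Since the VM is private, one way to confirm the startup script actually installed Nginx is to SSH in through IAP (which is exactly what the iap-ssh firewall rule permits) and curl the web server locally. This assumes the caller has the IAP-secured Tunnel User role, which is not set up in this lab:

```
gcloud compute ssh dev-module-output-vm \
  --zone=asia-southeast2-a \
  --tunnel-through-iap \
  --command="curl -s http://localhost | head -n 5"
```

If this prints the HTML written by startup.sh, then the firewall rule, the network tag, and the startup script are all working together.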

Verify Remote State

Because this lab uses the GCS backend, the state is stored remotely.

I can verify it with:

gcloud storage ls gs://terraform-gcp-learning-lab-terraform-state/terraform-gcp-learning-lab/06-gcp-vm-and-network-module/

Expected output:

gs://terraform-gcp-learning-lab-terraform-state/terraform-gcp-learning-lab/06-gcp-vm-and-network-module/default.tfstate

Destroy

Because this is still a learning lab, I destroyed the resources after testing:

terraform destroy

Then I typed:

yes

Terraform destroyed the resources managed by this lab.

What I Learned

This lab helped me understand Terraform module composition more clearly.

In the previous lab, I had one child module:

gcp-network

In this lab, I added another child module:

gcp-vm

The most important lesson was not just creating a VM.

The important lesson was passing an output from one module into another module.

The flow is:

gcp-network creates subnets
gcp-network outputs subnet self-links
root module selects the app subnet
root module passes the subnet self-link into gcp-vm
gcp-vm creates a VM in that subnet

The key Terraform expression is:

module.network.subnets[var.vm_subnet_key].self_link

Breaking it down:

  • module.network refers to the network child module
  • .subnets is the subnet output exposed by that module
  • [var.vm_subnet_key] selects one subnet from the subnet map
  • .self_link takes the self-link of the selected subnet

So if:

vm_subnet_key = "app"

Terraform uses:

module.network.subnets["app"].self_link

This is then passed into the VM module.

That is the main pattern I wanted to learn:

module output -> root module -> another module input

Next Step

The next logical step is to improve the VM access pattern.

Right now, the VM has no external IP and has an IAP SSH firewall rule.

The next lab could focus on testing private VM access through IAP, or improving the module further by adding:

  • service account for the VM
  • IAM binding for IAP SSH
  • OS Login configuration
  • startup script verification
  • HTTP health check or internal-only web service testing
