Robert Nemet

Posted on • Originally published at rnemet.dev

Exploring GCP With Terraform: VPCs, Firewall Rules And VMs

This post continues my previous post, Exploring GCP With Terraform: Setting Up The Environment And Project.
In this post, I'll:

  • Create a VPC with Terraform
  • Create subnets and firewall rules
  • Create VMs in the VPC
  • Show how to access VMs from outside and inside the VPC

Note
I'll also mention how to use gcloud to fetch information about created resources. At the end, I'll cover two useful Terraform CLI commands and explain why I'm structuring the project the way I am.

This structure is one of many ways to do it; it is just my way of doing it for now. The project will need refactoring, but for now, it is good enough.


Note
I replaced the real project ID with project-id in the following examples. You need to replace it with your own project ID.


What is VPC in GCP?

A Virtual Private Cloud (VPC) is a virtual network defined in the cloud. It is a private network, isolated from other networks in the cloud. It is responsible for:

  • Connectivity between VMs,
  • Traffic distribution from Google Cloud load balancers to backends,
  • Connecting on-premises networks via Cloud VPN or Cloud Interconnect,
  • Hosting Network Load Balancers and proxies for internal Application Load Balancers.

A VPC is a global resource in GCP. A global resource is not bound to any region or zone; it is a logical resource that spans all regions.

Note
We can distinguish two types of resources: global and regional. Global resources are not bound to any region or zone; they are logical resources that span all regions.

Regional resources are bound to a specific region. They are physical resources that exist in that region. Each region has at least three zones. Zones are isolated from and independent of each other: if one zone fails, the others are not affected. You can think of a zone as a data center. Zones are connected by a high-bandwidth, low-latency network.
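
If you want to see which regions and zones your project can use, gcloud can list them:

$ gcloud compute regions list
$ gcloud compute zones list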


You can think of a VPC as a logical container that holds all the other networking resources: firewalls, routes, subnets, etc.

Subnets

Subnets are logical partitions of the VPC's IP address range. They are regional resources, meaning they are bound to a specific region. You can configure a subnet's IP range as IPv4 only (single-stack) or IPv4/IPv6 (dual-stack). Subnets allocate IP addresses to resources in their region.

Firewall Rules

Firewall rules control traffic to and from different destinations. They are applied to the VPC network and enforced at the VM instance level. Every VPC network has two implied firewall rules:

  • Implied allow egress: egress (outgoing) traffic is allowed to all destinations.
  • Implied deny ingress: ingress (incoming) traffic is denied from all sources.

In addition to the implied rules, you can create your own.

Creating VPC with Terraform

I set up a base for my Terraform project in the previous post. Now, next to the base directory, I'm creating a new network directory with the same file structure. Let's start with the provider.tf file:

provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}


terraform {
  required_version = ">=1.5.5"

  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.77.0"
    }
  }

  backend "gcs" {
    bucket = "terraform-states-project-id"
    prefix = "terraform/state/network"
  }
}

The provider is the same for all workflows as in the base directory. The difference is backend.gcs.prefix, which is now terraform/state/network. I'm using the same bucket; for the base workflow, you can set the prefix to terraform/state/base.

Note

I'm splitting state files by workflow to minimize the impact of changes in one workflow on another. There will be dependencies between workflows, but they should be minimal. Splitting state files becomes necessary when multiple teams work on the same project.
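
The provider block above references var.project_id, var.region, and var.zone. For completeness, here is a minimal variables.tf sketch; the variable names come from the provider block, but the defaults are assumptions you should adjust to your setup:

variable "project_id" {
  type        = string
  description = "The GCP project ID"
}

variable "region" {
  type        = string
  description = "Default region for regional resources"
  default     = "us-central1" # assumption, matching the examples in this post
}

variable "zone" {
  type        = string
  description = "Default zone for zonal resources"
  default     = "us-central1-c" # assumption, matching the examples in this post
}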

And now creating the VPC in main.tf:

# vpc: back office
resource "google_compute_network" "back_office" {
  name                    = "back-office"
  description             = "Back office network"
  auto_create_subnetworks = false
  routing_mode            = "REGIONAL"
}

I'm creating a VPC named back-office and turning off the auto-creation of subnets. It is recommended to set auto_create_subnetworks to false and create subnets manually; this way, you have more control over them. Otherwise, GCP will create a subnet in each region.

Note

You can list all VPCs in a project with the following command:

$ gcloud compute networks list

NAME         SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
back-office  CUSTOM       REGIONAL
default      AUTO         REGIONAL
services     CUSTOM       REGIONAL

Notice the default VPC with AUTO subnet mode. The default VPC is created when you create a project and has a subnet in each region. The other VPCs are ones I created.

To get info on a specific VPC, try:

$ gcloud compute networks describe back-office

autoCreateSubnetworks: false
creationTimestamp: '2023-08-20T07:06:11.543-07:00'
description: Back office network
id: '1334556407227679868'
kind: compute#network
name: back-office
networkFirewallPolicyEnforcementOrder: AFTER_CLASSIC_FIREWALL
routingConfig:
  routingMode: REGIONAL
selfLink: https://www.googleapis.com/compute/v1/projects/project-id/global/networks/back-office
selfLinkWithId: https://www.googleapis.com/compute/v1/projects/project-id/global/networks/1334556407227679868
subnetworks:
- https://www.googleapis.com/compute/v1/projects/project-id/regions/us-central1/subnetworks/back-office
x_gcloud_bgp_routing_mode: REGIONAL
x_gcloud_subnet_mode: CUSTOM

I am setting the routing mode to REGIONAL. The routing_mode can be REGIONAL or GLOBAL: REGIONAL advertises routes to subnets in the same region only, while GLOBAL advertises routes to subnets in all regions.

As mentioned, a VPC alone is not enough; I need to add a subnet to it:

# subnet for back office
resource "google_compute_subnetwork" "back_office" {
  name          = "back-office"
  ip_cidr_range = "10.1.0.0/24"
  network       = google_compute_network.back_office.self_link
  region        = var.region
}

I'm creating a subnet named back-office with the IP range 10.1.0.0/24, attached to the back_office VPC in the predefined region.

Note

The self_link is a unique identifier that refers to the resource; it is used to reference the resource from other resources.

The IP range is expressed in CIDR notation, a compact representation of an IP address and its associated routing prefix. The number after the slash is the prefix length: the number of leading bits that identify the network. For 10.1.0.0/24, the first 24 bits identify the network, leaving 8 bits for hosts, i.e., 256 addresses.
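
If you want to sanity-check CIDR arithmetic without leaving Terraform, its built-in CIDR functions help. A quick terraform console session for this range:

$ terraform console
> cidrnetmask("10.1.0.0/24")
"255.255.255.0"
> cidrhost("10.1.0.0/24", 5)
"10.1.0.5"
> cidrsubnet("10.1.0.0/24", 2, 0)
"10.1.0.0/26"

cidrhost returns the address of a given host number in the range, and cidrsubnet carves a smaller subnet out of it.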

So, I should now have a VPC with a subnet. You can check the details of the subnet with gcloud:

$ gcloud compute networks subnets describe back-office --region us-central1

creationTimestamp: '2023-08-21T13:12:57.423-07:00'
enableFlowLogs: false
fingerprint: crXvadZ1JQ=
gatewayAddress: 10.1.0.1
id: '88832453211859900582'
ipCidrRange: 10.1.0.0/24
kind: compute#subnetwork
logConfig:
  aggregationInterval: INTERVAL_5_SEC
  enable: false
  flowSampling: 0.5
  metadata: INCLUDE_ALL_METADATA
name: back-office
network: https://www.googleapis.com/compute/v1/projects/project-id/global/networks/back-office
privateIpGoogleAccess: false
privateIpv6GoogleAccess: DISABLE_GOOGLE_ACCESS
purpose: PRIVATE
region: https://www.googleapis.com/compute/v1/projects/project-id/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/project-id/regions/us-central1/subnetworks/back-office
stackType: IPV4_ONLY

Add a VM

Now is the time to test this. I'm creating a VM in this subnet:

resource "google_compute_instance" "bo_vm_test_alpha" {
  name         = "bo-vm-test-alpha"
  machine_type = "f1-micro"
  zone         = var.zone
  tags         = ["bo-vm-test"]

  scheduling {
    preemptible        = true
    automatic_restart  = false
    provisioning_model = "SPOT"
  }

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }
  network_interface {
    network    = google_compute_network.back_office.self_link
    subnetwork = google_compute_subnetwork.back_office.self_link
  }
}

I'm creating a VM named bo-vm-test-alpha with machine type f1-micro, in the zone us-central1-c (var.zone). The VM is tagged with the bo-vm-test tag. It is preemptible and uses the SPOT provisioning model. The boot disk uses the Debian 11 image. The VM is attached to the back_office VPC and the back-office subnet.
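
As with networks and subnets, you can inspect the VM with gcloud (output omitted here):

$ gcloud compute instances describe bo-vm-test-alpha --zone=us-central1-c --project=project-id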

This VM is about the cheapest you can run in GCP, and it is not suitable for production; it is just for testing. If you want to access it, go to the Console and choose the right project, and you should see your VM. Click its SSH button, and it will open a new window with a terminal. You should be able to access the VM. Or not.

Add Firewall Rules

You can't. Why? Because I didn't create a firewall rule to allow SSH:

resource "google_compute_firewall" "back_office_ssh" {
  name    = "back-office-ssh"
  network = google_compute_network.back_office.self_link

  source_ranges = ["0.0.0.0/0"]
  direction     = "INGRESS"
  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
}

I'm creating a firewall rule named back-office-ssh, applied to the back_office VPC. The source range is 0.0.0.0/0, which means the whole internet. The direction is INGRESS, allowing TCP traffic on port 22.
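
A side note: as written, this rule applies to every instance in the VPC. Since the VM already carries the bo-vm-test network tag, you could scope the rule with target_tags. A sketch, not required for this walkthrough:

resource "google_compute_firewall" "back_office_ssh" {
  name    = "back-office-ssh"
  network = google_compute_network.back_office.self_link

  source_ranges = ["0.0.0.0/0"]
  direction     = "INGRESS"
  target_tags   = ["bo-vm-test"] # apply only to instances carrying this tag
  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
}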

This time, let's try to connect with gcloud:

$ gcloud compute ssh bo-vm-test-alpha --zone=us-central1-c  --project=project-id
External IP address was not found; defaulting to using IAP tunneling.
WARNING:

To increase the performance of the tunnel, consider installing NumPy. For instructions,
please see https://cloud.google.com/iap/docs/using-tcp-forwarding#increasing_the_tcp_upload_bandwidth

ERROR: (gcloud.compute.start-iap-tunnel) Error while connecting [4003: 'failed to connect to backend']. (Failed to connect to port 22)
kex_exchange_identification: Connection closed by remote host
Connection closed by UNKNOWN port 65535

Recommendation: To check for possible causes of SSH connectivity issues and get
recommendations, rerun the ssh command with the --troubleshoot option.

gcloud compute ssh bo-vm-test-alpha --project=project-id --zone=us-central1-c --troubleshoot

Or, to investigate an IAP tunneling issue:

gcloud compute ssh bo-vm-test-alpha --project=project-id --zone=us-central1-c --troubleshoot --tunnel-through-iap

It is not working. I have two options to fix this: add an external IP address to the VM, or use IAP tunneling. I'll go with the second option. IAP tunneling is used to connect to VMs without external IP addresses, and the VM I just created does not have one. So, let's enable IAP tunneling:

$ gcloud services enable iap.googleapis.com
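For reference, the first option would be a small change to the VM's network_interface block; an empty access_config asks GCP for an ephemeral external IP. A sketch:

  network_interface {
    network    = google_compute_network.back_office.self_link
    subnetwork = google_compute_subnetwork.back_office.self_link

    access_config {
      # empty block: GCP assigns an ephemeral external IP
    }
  }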

To use the tunnel, add a firewall rule allowing ingress from Cloud IAP for TCP forwarding:

resource "google_compute_firewall" "back_office_iap" {
  name    = "back-office-iap"
  network = google_compute_network.back_office.self_link

  source_ranges = ["35.235.240.0/20"]
  direction     = "INGRESS"
  allow {
    protocol = "tcp"
  }
}

The CIDR range 35.235.240.0/20 is the range used by IAP; GCP documents it in the IAP TCP forwarding docs.
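
The rule above allows all TCP traffic from the IAP range. If you only need SSH over the tunnel, you could narrow it to port 22; a sketch:

resource "google_compute_firewall" "back_office_iap" {
  name    = "back-office-iap"
  network = google_compute_network.back_office.self_link

  source_ranges = ["35.235.240.0/20"]
  direction     = "INGRESS"
  allow {
    protocol = "tcp"
    ports    = ["22"] # SSH only; add ports here if you tunnel other services
  }
}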

Run terraform apply, wait for it to finish, and try again. This time, it should work:

$ gcloud compute ssh bo-vm-test-alpha --zone=us-central1-c  --project=project-id
External IP address was not found; defaulting to using IAP tunneling.
WARNING:

To increase the performance of the tunnel, consider installing NumPy. For instructions,
please see https://cloud.google.com/iap/docs/using-tcp-forwarding#increasing_the_tcp_upload_bandwidth

Warning: Permanently added 'compute.89653456855762463' (ED25519) to the list of known hosts.
Linux bo-vm-test-alpha 5.10.0-24-cloud-amd64 #1 SMP Debian 5.10.179-5 (2023-08-08) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
rnemet@bo-vm-test-alpha:~$

It is working; I'm connected to the VM. Next, I want to connect to a VM from another VM in the same subnet, so I'll create a second VM there. Just name it bo-vm-test-beta, as sketched below.
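
The second VM mirrors the first one; only the Terraform resource name and the VM name change:

resource "google_compute_instance" "bo_vm_test_beta" {
  name         = "bo-vm-test-beta"
  machine_type = "f1-micro"
  zone         = var.zone
  tags         = ["bo-vm-test"]

  scheduling {
    preemptible        = true
    automatic_restart  = false
    provisioning_model = "SPOT"
  }

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }
  network_interface {
    network    = google_compute_network.back_office.self_link
    subnetwork = google_compute_subnetwork.back_office.self_link
  }
}

When done, check it with gcloud: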

$ gcloud compute instances list
NAME              ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
bo-vm-test-alpha  us-central1-c  f1-micro      true         10.1.0.5                  RUNNING
bo-vm-test-beta   us-central1-c  f1-micro      true         10.1.0.6                  RUNNING

I see that alpha has IP 10.1.0.5 and beta has IP 10.1.0.6. Now I'll try to ping alpha from beta. But first, I need to allow ICMP traffic:

resource "google_compute_firewall" "back_office_icmp" {
  name    = "back-office-icmp"
  network = google_compute_network.back_office.self_link

  source_ranges = ["0.0.0.0/0"]
  direction     = "INGRESS"
  allow {
    protocol = "icmp"
  }
}

And then let's try to ping alpha from beta:

$ gcloud compute ssh bo-vm-test-beta --zone=us-central1-c  --project=project-id
External IP address was not found; defaulting to using IAP tunneling.
WARNING:

To increase the performance of the tunnel, consider installing NumPy. For instructions,
please see https://cloud.google.com/iap/docs/using-tcp-forwarding#increasing_the_tcp_upload_bandwidth

Linux bo-vm-test-beta 5.10.0-24-cloud-amd64 #1 SMP Debian 5.10.179-5 (2023-08-08) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Tue Aug 22 21:13:44 2023 from 35.235.244.34
rnemet@bo-vm-test-beta:~$ ping 10.1.0.6
PING 10.1.0.6 (10.1.0.6) 56(84) bytes of data.
64 bytes from 10.1.0.6: icmp_seq=1 ttl=64 time=0.014 ms
64 bytes from 10.1.0.6: icmp_seq=2 ttl=64 time=0.045 ms
64 bytes from 10.1.0.6: icmp_seq=3 ttl=64 time=0.035 ms
64 bytes from 10.1.0.6: icmp_seq=4 ttl=64 time=0.037 ms
64 bytes from 10.1.0.6: icmp_seq=5 ttl=64 time=0.029 ms
64 bytes from 10.1.0.6: icmp_seq=6 ttl=64 time=0.036 ms
64 bytes from 10.1.0.6: icmp_seq=7 ttl=64 time=0.022 ms
64 bytes from 10.1.0.6: icmp_seq=8 ttl=64 time=0.035 ms
64 bytes from 10.1.0.6: icmp_seq=9 ttl=64 time=0.028 ms
64 bytes from 10.1.0.6: icmp_seq=10 ttl=64 time=0.036 ms
64 bytes from 10.1.0.6: icmp_seq=11 ttl=64 time=0.036 ms
^C
--- 10.1.0.6 ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 10232ms
rtt min/avg/max/mdev = 0.014/0.032/0.045/0.008 ms
rnemet@bo-vm-test-beta:~$

Note
You may need to wait a few minutes for the firewall rule to take effect, or restart the VMs:
$ gcloud compute instances reset bo-vm-test-beta bo-vm-test-alpha --zone=us-central1-c
Updated [https://www.googleapis.com/compute/v1/projects/project-id/zones/us-central1-c/instances/bo-vm-test-beta].
Updated [https://www.googleapis.com/compute/v1/projects/project-id/zones/us-central1-c/instances/bo-vm-test-alpha].

Terraform CLI: fmt and validate

Frequent code changes make a mess, so it is good practice to keep the code clean and formatted. Terraform has a command to format the code: terraform fmt. It formats the files in the current directory; with the -recursive flag, it formats files in subdirectories as well. It is good practice to run this command before committing the code.

On the other hand, terraform validate checks the syntax of the code: whether the configuration is valid and complete. It does not check whether the code is correct. For example, a typo in a resource name will not be caught, but a typo in a resource type will. It also checks that all referenced variables are defined.
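
A typical pre-commit run might look like this (the file list and output vary with your changes):

$ terraform fmt -recursive
main.tf
$ terraform validate
Success! The configuration is valid.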

Conclusion

In this post, I created a VPC with a subnet and a VM in the subnet. I added firewall rules to allow SSH and ICMP traffic, used gcloud to inspect the created resources and connect to the VMs, and used IAP tunneling to reach a VM without an external IP address. Finally, I used terraform fmt and terraform validate to format and validate the code.

In the next post, I'll create a second VPC and a VM in it, then explore how to connect VMs in different VPCs and how to route traffic between them.

If you find this helpful, please share it with others. If you have any questions or comments, let me know in the newsletter comments or via email.

