After going through several Terraform learning labs, I wanted to create my first Terraform artifact.
The goal was not to build a full production landing zone, but to build a clean GCP Terraform Foundation Lite project.
This project is meant to prove that I can bootstrap the basic foundation layer of a Google Cloud environment using Terraform.
It includes:
- remote Terraform state
- versioned GCS state bucket
- custom VPC network
- role-based public and private subnets
- firewall rules
- service accounts
- IAM bindings
- reusable modules
- basic naming convention
- GitHub-safe variable patterns
Why I Built This
In my earlier Terraform labs, I learned individual concepts:
- installing Terraform
- creating a VPC
- using variables
- using outputs
- storing remote state in GCS
- creating modules
- creating service accounts
- using Terraform with GitHub Actions
But those were learning labs. For this project, I wanted to convert those lessons into a cleaner artifact.
The objective was to create something that looks closer to a real Terraform repository. It's still simple, but structured enough to show a proper foundation pattern.
What This Project Builds
This project creates:
- Google Cloud Storage bucket for Terraform state
- object versioning for state recovery
- custom VPC network
- public-facing subnet
- private workload subnet
- IAP SSH firewall rule
- internal traffic firewall rule
- application service account
- CI/CD service account
- optional IAM bindings
The project is intentionally called Foundation Lite because it is not a full enterprise landing zone. It is a smaller version focused on the fundamentals.
Why Remote State Matters
By default, Terraform stores state locally in a file called `terraform.tfstate`.
That is acceptable for early learning. However, the labs taught me that local state becomes risky once a project gets more serious.
If state only exists on my machine, then several problems appear:
- What if the file is deleted?
- What if another person needs to work on the infrastructure?
- What if two people run Terraform at the same time?
- What if I need to recover a previous state version?
For this project, I used Google Cloud Storage as the remote backend for Terraform state. I also enabled Object Versioning on the bucket, because it gives me a recovery path if the state object is accidentally overwritten or deleted.
Reference:
https://developer.hashicorp.com/terraform/language/backend/gcs
https://cloud.google.com/storage/docs/samples/storage-bucket-tf-with-versioning
Why Bootstrap Is Separate
One important thing I learned is that the GCS bucket for Terraform state must exist before Terraform can use it as a backend.
That creates a bootstrapping problem:
Terraform needs a backend bucket.
But Terraform cannot use the backend bucket before it exists.
So I separated the project into two stages:
| Stage | Folder | Purpose |
|---|---|---|
| Stage 1 | bootstrap/state-bucket | Creates the GCS state bucket |
| Stage 2 | foundation | Uses the GCS bucket as backend and creates the foundation resources |
This separation makes the workflow clearer.
First, I bootstrap the state bucket.
Then I use that bucket to manage the foundation.
Repository Structure
The repository structure is:
```
terraform-gcp-foundation-lite/
├── README.md
├── .gitignore
├── docs/
│   └── architecture.md
├── bootstrap/
│   └── state-bucket/
│       ├── main.tf
│       ├── variables.tf
│       ├── outputs.tf
│       └── terraform.tfvars.example
├── foundation/
│   ├── backend.tf
│   ├── main.tf
│   ├── variables.tf
│   ├── outputs.tf
│   └── terraform.tfvars.example
└── modules/
    ├── network/
    │   ├── main.tf
    │   ├── variables.tf
    │   └── outputs.tf
    └── iam/
        ├── main.tf
        ├── variables.tf
        └── outputs.tf
```
There are two main folders:
- bootstrap/
- foundation/

And two reusable modules:
- modules/network
- modules/iam
Architecture
The high-level architecture is:
```
Bootstrap Terraform
        ↓
GCS State Bucket with Object Versioning
        ↓
Foundation Remote Backend
        ↓
Foundation Root Module
        ↓
Network Module + IAM Module
```
The network module creates the VPC, subnets, and firewall rules.
The IAM module creates service accounts and optional IAM role bindings.
Important Note About Public and Private Subnets in GCP
In Google Cloud, subnets are not inherently public or private in the same way as AWS.
A subnet becomes public-facing or private based on how workloads inside it are configured.
For example:
- whether instances have external IPs
- whether traffic enters through a load balancer
- whether Cloud NAT exists
- what firewall rules allow
- how routing and exposure are designed
So in this project, I use public and private as role-based names.
They describe the intended function of each subnet.
They are not native GCP subnet types.
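To make the role-based idea concrete, here is a minimal sketch of what a "private" subnet can look like in Terraform. The resource type is the same one GCP uses for every subnet; the names and flags below are my assumptions, not code from the project:

```hcl
# Sketch: "private" is a design choice, not a GCP subnet type.
resource "google_compute_subnetwork" "private" {
  name          = "dev-foundation-private-subnet" # assumed name
  network       = google_compute_network.vpc.id
  region        = var.region
  ip_cidr_range = "10.10.2.0/24"

  # Instances in this subnet get no external IPs; this flag lets them
  # still reach Google APIs over internal routing.
  private_ip_google_access = true
}
```

Nothing in the resource itself marks it private; the privacy comes from the surrounding decisions (no external IPs, firewall rules, NAT).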
Stage 1: Bootstrap State Bucket
The first Terraform stage creates the GCS state bucket.
Folder:
bootstrap/state-bucket
The bucket resource looks like this:
```hcl
resource "google_storage_bucket" "terraform_state" {
  name                        = var.state_bucket_name
  location                    = var.region
  uniform_bucket_level_access = true
  public_access_prevention    = "enforced"
  force_destroy               = false

  versioning {
    enabled = true
  }

  labels = {
    purpose     = "terraform-state"
    managed_by  = "terraform"
    environment = "bootstrap"
  }
}
```
There are several important settings here.
Uniform Bucket-Level Access
This keeps access management simpler by using IAM at the bucket level.
Public Access Prevention
This prevents the state bucket from accidentally becoming public.
Terraform state can contain sensitive infrastructure information, so the state bucket should never be publicly accessible.
Object Versioning
Object Versioning gives a recovery path if the Terraform state object is accidentally deleted or overwritten.
This is important because Terraform state is critical to Terraform-managed infrastructure.
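One optional hardening step, not part of the original configuration, is a lifecycle rule that caps how many noncurrent state versions the bucket keeps, so versioning does not grow the bucket forever. A sketch:

```hcl
# Optional: prune old noncurrent versions of the state object.
resource "google_storage_bucket" "terraform_state" {
  name                        = var.state_bucket_name
  location                    = var.region
  uniform_bucket_level_access = true
  public_access_prevention    = "enforced"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      # Delete a version once 10 newer versions of the object exist.
      num_newer_versions = 10
    }
  }
}
```

The retention count of 10 is an arbitrary example value.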
Stage 2: Foundation Backend
After the bucket exists, the foundation stage can use it as a backend.
Folder:
foundation
Example backend:
```hcl
terraform {
  backend "gcs" {
    bucket = "your-gcp-project-id-tfstate"
    prefix = "foundation"
  }
}
```
This stores the foundation state at:
gs://your-gcp-project-id-tfstate/foundation/default.tfstate
Network Module
The network module creates:
- custom VPC
- subnets
- firewall rules
The module receives subnet definitions using a map.
Example:
```hcl
subnets = {
  public = {
    cidr_range = "10.10.1.0/24"
    role       = "public-facing"
  }
  private = {
    cidr_range = "10.10.2.0/24"
    role       = "private-workload"
  }
}
```
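Inside the module, a `for_each` over this map can create the subnets. This is a sketch; the exact variable names and the name pattern are my assumptions, based on the naming convention used elsewhere in the project:

```hcl
# Sketch: one subnetwork per entry in the subnets map.
resource "google_compute_subnetwork" "subnets" {
  for_each = var.subnets

  # e.g. "dev-foundation-public-subnet" (assumed pattern)
  name          = "${var.environment}-${var.name_prefix}-${each.key}-subnet"
  network       = google_compute_network.vpc.id
  region        = var.region
  ip_cidr_range = each.value.cidr_range
}
```

Adding a subnet then only requires adding an entry to the map, not new resource blocks.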
The VPC is created as a custom mode VPC:
```hcl
resource "google_compute_network" "vpc" {
  name                    = local.final_network_name
  auto_create_subnetworks = false
}
```
I set `auto_create_subnetworks = false` because I want explicit control over the subnet ranges. This is cleaner than relying on automatically created subnets.
Firewall Rules
The network module also creates firewall rules from a map.
Example:
```hcl
firewall_rules = {
  allow-iap-ssh = {
    description   = "Allow SSH through IAP only."
    source_ranges = ["35.235.240.0/20"]
    target_tags   = ["iap-ssh"]
    allow = [
      {
        protocol = "tcp"
        ports    = ["22"]
      }
    ]
  }
  allow-internal = {
    description   = "Allow internal traffic inside the foundation CIDR range."
    source_ranges = ["10.10.0.0/16"]
    allow = [
      {
        protocol = "tcp"
        ports    = ["0-65535"]
      },
      {
        protocol = "udp"
        ports    = ["0-65535"]
      },
      {
        protocol = "icmp"
      }
    ]
  }
}
```
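One way the module could turn this map into firewall resources is a `for_each` with a dynamic `allow` block. A sketch, assuming the attribute names from the map above:

```hcl
# Sketch: one firewall rule per map entry; the nested allow list
# becomes repeated "allow" blocks via a dynamic block.
resource "google_compute_firewall" "rules" {
  for_each = var.firewall_rules

  name        = each.key
  network     = google_compute_network.vpc.name
  description = each.value.description

  source_ranges = each.value.source_ranges
  # target_tags is optional in the map, so fall back to null.
  target_tags = try(each.value.target_tags, null)

  dynamic "allow" {
    for_each = each.value.allow
    content {
      protocol = allow.value.protocol
      # ports is absent for icmp, so fall back to null.
      ports = try(allow.value.ports, null)
    }
  }
}
```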
The IAP SSH rule is intentionally scoped to `35.235.240.0/20`, the source range used by IAP TCP forwarding, instead of opening SSH from `0.0.0.0/0`. That is a better security habit.
IAM Module
The IAM module creates service accounts.
Example input:
```hcl
service_accounts = {
  app = {
    display_name = "Application Workload Service Account"
    description  = "Service account intended for application workloads."
    roles        = []
  }
  cicd = {
    display_name = "CI/CD Terraform Service Account"
    description  = "Service account intended for Terraform automation."
    roles        = []
  }
}
```
The module creates the service accounts using `for_each`:

```hcl
resource "google_service_account" "service_accounts" {
  for_each = var.service_accounts

  account_id   = "${var.environment}-${var.name_prefix}-${each.key}"
  display_name = each.value.display_name
  description  = each.value.description
}
```
This keeps identity creation separate from network creation.
That separation makes the repository easier to understand.
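The optional IAM bindings mentioned earlier could be derived from the per-account `roles` lists with a flatten-then-`for_each` pattern. This is a sketch; the local and variable names are my assumptions:

```hcl
# Sketch: expand each (service account, role) pair into one binding.
locals {
  sa_role_pairs = flatten([
    for sa_key, sa in var.service_accounts : [
      for role in sa.roles : {
        key  = "${sa_key}-${role}"
        sa   = sa_key
        role = role
      }
    ]
  ])
}

resource "google_project_iam_member" "bindings" {
  for_each = { for pair in local.sa_role_pairs : pair.key => pair }

  project = var.project_id
  role    = each.value.role
  member  = "serviceAccount:${google_service_account.service_accounts[each.value.sa].email}"
}
```

With `roles = []` in the example tfvars, this creates no bindings at all, which matches the "optional" behavior.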
Naming Convention
The project uses a simple naming pattern:
environment-nameprefix-resource
Example:
dev-foundation-vpc
dev-foundation-public-subnet
dev-foundation-private-subnet
dev-foundation-app
dev-foundation-cicd
The goal is not to create the perfect naming standard.
The goal is to avoid random resource names.
Even a simple naming convention makes infrastructure easier to inspect later.
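The pattern can live in a `locals` block so every resource derives its name the same way. A sketch, assuming the local names used by the network module:

```hcl
# Sketch: one place that composes the naming pattern.
locals {
  name_base          = "${var.environment}-${var.name_prefix}" # e.g. "dev-foundation"
  final_network_name = "${local.name_base}-vpc"                # e.g. "dev-foundation-vpc"
}
```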
Git Safety
I do not commit real `.tfvars` files. The repository includes `terraform.tfvars.example` but ignores `terraform.tfvars`.
The .gitignore includes:
```
.terraform/
*.tfstate
*.tfstate.*
*.tfvars
*.tfvars.json
!*.tfvars.example
*.tfplan
.DS_Store
```
This keeps local values and state files out of GitHub.
That is important because Terraform state and tfvars files can contain project-specific or sensitive information.
Running the Project
The execution order is:
1. Run bootstrap
2. Create the state bucket
3. Update foundation/backend.tf with the bucket name
4. Run foundation
5. Verify resources in GCP
Bootstrap
```bash
cd bootstrap/state-bucket
cp terraform.tfvars.example terraform.tfvars

terraform init
terraform fmt
terraform validate
terraform plan
terraform apply
```
Foundation
```bash
cd ../../foundation
cp terraform.tfvars.example terraform.tfvars

terraform init
terraform fmt -recursive
terraform validate
terraform plan
terraform apply
```
Expected Output
The foundation output should show a summary similar to:
```hcl
foundation_summary = {
  environment      = "dev"
  network_name     = "dev-foundation-vpc"
  subnet_count     = 2
  firewall_count   = 2
  service_accounts = ["app", "cicd"]
}
```
This confirms that Terraform created:
- one VPC
- two subnets
- two firewall rules
- two service accounts
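A possible shape for the `outputs.tf` that produces this summary, as a sketch (the exact expressions depend on what the modules expose, so the module output names here are assumptions):

```hcl
# Sketch: aggregate a few facts about the foundation into one output.
output "foundation_summary" {
  value = {
    environment      = var.environment
    network_name     = module.network.network_name # assumed module output
    subnet_count     = length(var.subnets)
    firewall_count   = length(var.firewall_rules)
    service_accounts = keys(var.service_accounts)
  }
}
```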
Verification
I can verify the VPC:

```bash
gcloud compute networks list --filter="name=dev-foundation-vpc"
```

Verify subnets:

```bash
gcloud compute networks subnets list --filter="network:dev-foundation-vpc"
```

Verify firewall rules:

```bash
gcloud compute firewall-rules list --filter="network:dev-foundation-vpc"
```

Verify service accounts:

```bash
gcloud iam service-accounts list --filter="email~dev-foundation"
```

Verify remote state:

```bash
gcloud storage ls gs://your-gcp-project-id-tfstate/foundation/
```

Expected:

```
gs://your-gcp-project-id-tfstate/foundation/default.tfstate
```
Next Step
This artifact only creates the foundation layer.
The next artifact will build on top of this idea by provisioning a production-lite GCP web platform.
That project will include:
- Cloud NAT
- Managed Instance Group
- instance template
- health check
- HTTP load balancer
- private backend instances
The foundation answers:
Can I bootstrap a clean Terraform-managed GCP environment?
The next artifact answers:
Can I provision infrastructure for an actual application platform?