Introduction to Terraform
Terraform is an Infrastructure as Code (IaC) tool developed by HashiCorp. It allows you to build, change, and version your infrastructure safely and efficiently.
Here are some key features of Terraform:
- Human-Readable Configuration Files: Terraform lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share.
- Multi-Cloud Support: Terraform can manage infrastructure on multiple cloud platforms. Providers enable Terraform to work with virtually any platform or service with an accessible API.
- Lifecycle Management: The core Terraform workflow consists of three stages:
  - Write: Define resources across multiple cloud providers and services.
  - Plan: Terraform creates an execution plan describing what it will create, update, or destroy.
  - Apply: On approval, Terraform performs the proposed operations in the correct order, respecting any resource dependencies.
- State Management: Terraform keeps track of your real infrastructure in a state file, which acts as a source of truth for your environment.
- Provider Ecosystem: HashiCorp and the Terraform community have written thousands of providers to manage many different types of resources and services.
To install Terraform, please follow the instructions for your operating system on the official homepage here.
After installing, check the Terraform version with the following command:
terraform version
Prerequisites
Before proceeding, you need to prepare the following:
- A Google Cloud account with billing enabled and the necessary services (such as Compute Engine and Kubernetes Engine) enabled.
- The gcloud CLI and kubectl installed.
- A basic understanding of Google Kubernetes Engine, Kubernetes clusters, and Docker. If you're unsure, you can refer to this article to gain some basic knowledge.
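If you want to double-check the local tooling before moving on, the following standard version checks (not specific to this article) should print the installed versions of both tools:
# verify the gcloud CLI is installed
gcloud version
# verify kubectl is installed (client side only)
kubectl version --client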
Deploying a Docker Image to GKE
1. Login to gcloud
In this step, you can use an existing project, but if you don't want to affect existing projects it's better to create a new one. After completing the deployment, you can simply delete that project to release its resources.
Once you've determined which project to work with, switch to that project, retrieve the project ID, and log in to use it with Terraform.
# get project id
gcloud config get-value project
# authenticate so Terraform can use Application Default Credentials
gcloud auth application-default login --project {project id}
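If you created a new project and have not switched to it yet, a minimal sketch of selecting it would look like the following (my-gke-demo is a placeholder for your own project ID):
# point gcloud at the project you want Terraform to use
gcloud config set project my-gke-demo
# confirm the active project
gcloud config get-value project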
2. Terraform codebase
Create a Terraform project with the following file contents:
First, let's create a file named provider.tf to configure the Google Cloud provider.
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "5.18.0"
    }
  }
}

provider "google" {
  project = var.projectId
  region  = var.region
}
Next, create a variable.tf file to define the variables we'll use in this project.
variable "projectId" {
type = string
description = "Project ID"
}
variable "location" {
type = string
description = "Location"
}
variable "region" {
type = string
description = "Region"
}
variable "clusterName" {
type = string
description = "Cluster name"
}
variable "machineType" {
type = string
description = "Node Instance machine type"
}
variable "nodeCount" {
type = number
description = "Number of nodes in the node pool"
}
variable "dockerImage" {
type = string
description = "Docker Image"
}
Create a file named terraform.tfvars to define the default variable values. You'll need to change these values to fit your needs.
projectId = "project-id"
location = "asia-southeast1-a"
region = "asia-southeast1"
clusterName = "k8s-cluster"
machineType = "e2-micro"
nodeCount = 1
dockerImage = "{host}/{project-id}/{image name}:{version}"
If you want to learn about building a Docker image and publishing it to the GCP Container Registry for use in this article, you can find it here.
The cluster.tf file is used to initialize the Kubernetes cluster.
resource "google_container_cluster" "default" {
name = var.clusterName
location = var.location
initial_node_count = var.nodeCount
deletion_protection = false
node_config {
preemptible = true
machine_type = var.machineType
}
}
The k8s.tf file is used to create the Kubernetes Deployment and the LoadBalancer Service.
data "google_client_config" "default" {}
provider "kubernetes" {
host = "https://${google_container_cluster.default.endpoint}"
token = data.google_client_config.default.access_token
cluster_ca_certificate = base64decode(google_container_cluster.default.master_auth[0].cluster_ca_certificate)
}
resource "kubernetes_deployment_v1" "default" {
metadata {
name = "deployment-name"
}
spec {
replicas = 1
selector {
match_labels = {
app = "label-name"
}
}
template {
metadata {
labels = {
app = "label-name"
}
}
spec {
container {
name = "express-ts"
image = var.dockerImage
}
}
}
}
}
resource "kubernetes_service_v1" "default" {
metadata {
name = "service-name"
}
spec {
selector = {
app = kubernetes_deployment_v1.default.spec[0].selector[0].match_labels.app
}
port {
port = 80
target_port = 3000
}
type = "LoadBalancer"
}
depends_on = [time_sleep.wait_service_cleanup]
}
resource "time_sleep" "wait_service_cleanup" {
depends_on = [google_container_cluster.default]
destroy_duration = "180s"
}
Here, kubernetes_deployment_v1 defines which Docker image to deploy, while the LoadBalancer Service exposes the container for external access by mapping port 80 on the external IP to port 3000 on the container.
Next, create an additional file named output.tf to print out the information we need once the apply succeeds.
output "cluster_endpoint" {
description = "Cluster endpoint"
value = google_container_cluster.default.endpoint
}
output "load_balancer_hostname" {
description = "LoadBalancer EXTERNAL-IP"
value = kubernetes_service_v1.default.status.0.load_balancer.0.ingress.0.ip
}
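These values are printed at the end of terraform apply, and you can also read them back later with the standard terraform output command, for example:
# print all outputs
terraform output
# print a single output value, e.g. the LoadBalancer IP
terraform output load_balancer_hostname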
3. Execute Terraform commands
Please execute each of the following commands to proceed with the deployment on Google Cloud:
# initialize the terraform configuration
terraform init
# generate and show an execution plan
terraform plan
# apply the execution plan to provision resources
terraform apply
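Note that terraform apply shows the plan again and asks for confirmation before changing anything. An optional variation (not required for this article) is to save the plan and apply exactly that plan later:
# save the plan to a file, then apply exactly that plan
terraform plan -out=tfplan
terraform apply tfplan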
After the apply succeeds, Terraform prints the output values defined in output.tf (cluster_endpoint and load_balancer_hostname).
You can check if the cluster has been initialized as follows:
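For example, assuming the clusterName and location values from terraform.tfvars above, you can list and inspect the cluster with gcloud:
# list clusters in the project
gcloud container clusters list
# show details of the cluster created by Terraform
gcloud container clusters describe k8s-cluster --zone asia-southeast1-a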
Check Pods, Deployments, and Docker container running:
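One way to do this (again assuming the default names from terraform.tfvars) is to fetch the cluster credentials for kubectl and then query the resources:
# configure kubectl to talk to the new cluster
gcloud container clusters get-credentials k8s-cluster --zone asia-southeast1-a
# check the Deployment and its Pods
kubectl get deployments
kubectl get pods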
View information about the LoadBalancer Service that has been created:
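For example (service-name is the placeholder name used in k8s.tf above):
# the EXTERNAL-IP column shows the address assigned by the LoadBalancer
kubectl get service service-name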
Open the EXTERNAL-IP address in your browser to check whether the Node.js server has been deployed successfully. The same address is also shown in the load_balancer_hostname output after terraform apply completes.
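As a quick check from the terminal (EXTERNAL-IP below is a placeholder for the address from the previous step, and port 80 is the Service port defined in k8s.tf):
# replace EXTERNAL-IP with the address of your LoadBalancer Service
curl http://EXTERNAL-IP/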
4. Clean up resources
To delete the resources created by Terraform, please execute the following command:
terraform destroy
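If you created a dedicated project for this walkthrough, you can also delete the whole project to release every remaining resource (PROJECT_ID is a placeholder for your own project ID):
# delete the project created for this article
gcloud projects delete PROJECT_ID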
Conclusion
Through this article, I have shown you how to use Terraform as an Infrastructure as Code (IaC) tool to deploy a Docker image on Google Kubernetes Engine. Our entire infrastructure was provisioned from code, without any manual configuration. This is the greatest advantage of Terraform: configuration can be easily edited, shared, and applied across cloud providers in a simple and efficient way.
If you have any suggestions or questions regarding the content of the article, please don't hesitate to leave a comment below!
If you found this content helpful, please visit the original article on my blog to support the author and explore more interesting content.