Hassan Aftab

Understanding Terraform: A Guide to Effective IaC Practices

What is Terraform?

Terraform is an infrastructure as code (IaC) tool that allows you to build, change, and version cloud and on-premises resources safely and efficiently.

With Terraform, you define your infrastructure using human-readable configuration files, which can be versioned, reused, and shared.

It works with a wide range of platforms and services through their APIs, enabling you to manage both low-level components (such as compute instances, storage, and networking) and higher-level components (such as DNS entries and SaaS features) in a consistent manner.

The 3 Stage Workflow:

The Coding Stage:

Define resources across one or multiple cloud providers and services in your configuration files, depending on your requirements.

Here is a sample project structure:

.
├── bicep
│   ├── deploy.ps1
│   ├── init.bicep
│   ├── params
│   │   ├── dev.bicepparam
│   │   └── test.bicepparam
│   └── storage.bicep
├── LICENSE
├── Makefile
├── README.md
└── terraform
    ├── modules
    │   ├── container_app
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   ├── container_app_environment
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   ├── container_registry
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   ├── resource_group
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   ├── subnet
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   ├── subnet_network_security_group_association
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   └── virtual_network
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    └── resources
        ├── backend.tf
        ├── data.tf
        ├── main.tf
        ├── outputs.tf
        ├── provider.tf
        ├── tfvars
        │   ├── dev.tfvars
        │   ├── eun_region.tfvars
        │   └── tags.tfvars
        └── variables.tf

You can, however, write the entire configuration in a single file if you want, but it is considered best practice to adhere to separation of concerns.

Let's break down the project structure:

  • directories:
    • bicep
    • terraform
    • terraform/modules
    • terraform/resources
  • files:
    • each directory inside terraform/modules/ contains a module for an individual resource (main.tf, outputs.tf, variables.tf)
    • the files under terraform/resources/ wire those modules together; two of them, backend.tf and data.tf, are shown below
    # backend.tf
    # Here we configure the remote state backend for this directory

    terraform {
      backend "azurerm" {
        storage_account_name = "storageAccountName"
        container_name       = "tfstates"
        resource_group_name  = "resourceGroupName"
        key                  = "resources.tfstate"
      }
    }
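As a side note, you don't have to hardcode the backend values: Terraform supports partial backend configuration, where the values are supplied at init time instead, which is handy when the storage account differs per environment. A minimal sketch, reusing the values above:

    terraform init \
      -backend-config="storage_account_name=storageAccountName" \
      -backend-config="container_name=tfstates" \
      -backend-config="resource_group_name=resourceGroupName" \
      -backend-config="key=resources.tfstate"

Back to the repo, the next file is data.tf: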
    # data.tf
    # Here we define the data sources to use for this directory

    data "terraform_remote_state" "resources" {
      backend = "azurerm"

      config = {
        storage_account_name = "storageAccountName"
        container_name       = "tfstates"
        resource_group_name  = "resourceGroupName"
        key                  = "resources.tfstate"
      }
    }

    # In this case, we use the data source to get the existing resource group

    data "azurerm_resource_group" "existing" {
      name = "resourceGroupName"
    }


As shown in the directory structure above, the modules are defined in the terraform/modules directory and the resources are defined in the terraform/resources directory. The main codebase of this project resides in the terraform/resources/main.tf file.
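To make the module side concrete, here is a minimal sketch of what a module such as resource_group might contain. The argument and output names here are illustrative assumptions, not the repo's actual contents:

    # modules/resource_group/variables.tf (hypothetical contents)

    variable "name" {
      type        = string
      description = "Name of the resource group"
    }

    variable "location" {
      type        = string
      description = "Azure region to deploy into"
    }

    variable "tags" {
      type    = map(string)
      default = {}
    }

    # modules/resource_group/main.tf (hypothetical contents)

    resource "azurerm_resource_group" "this" {
      name     = var.name
      location = var.location
      tags     = var.tags
    }

    # modules/resource_group/outputs.tf (hypothetical contents)
    # Anything the root module needs to reference must be exported here

    output "name" {
      value = azurerm_resource_group.this.name
    }

    output "location" {
      value = azurerm_resource_group.this.location
    }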

Main things to note in the terraform/resources/main.tf file:

source - tells Terraform where the module's code lives

module - declares an instance of a module, i.e. the set of resources to create

The use of data.azurerm_resource_group.existing.location and data.azurerm_resource_group.existing.name to get the location and name of the existing resource group

The use of depends_on to ensure that dependencies are created before the module's resources

Notice the use of $(acrServer), $(acrUsername), and $(acrPassword) in container_registry_server, container_registry_username, and container_registry_password respectively.

These variables are defined in Pipelines. Since this information is sensitive, we keep it out of the codebase and store these secrets in pipeline variable groups/secrets.
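Note that $(...) is Azure DevOps macro syntax, so the pipeline substitutes these tokens before Terraform ever runs. An alternative worth knowing (not what this repo does) is to pass secrets through TF_VAR_-prefixed environment variables, which Terraform automatically maps to input variables of the same name:

    # The pipeline exports the secret from the variable group;
    # Terraform maps it to var.administrator_login_password automatically
    export TF_VAR_administrator_login_password="<value-from-variable-group>"

    terraform plan -var-file=tfvars/dev.tfvars -out=dev.tfplan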

Let's take a look at the contents below:

    # main.tf
    # Here we define the resources to use for this project

    # Defining the network security group

    module "network_security_group" {
      source              = "../modules/network_security_group"
      name                = "project${var.environment}nsg"
      location            = data.azurerm_resource_group.existing.location
      resource_group_name = data.azurerm_resource_group.existing.name

      rules = [
        {
          name                       = "nsg-rule-1"
          priority                   = 100
          direction                  = "Inbound"
          access                     = "Allow"
          protocol                   = "*"
          source_port_range          = "*"
          destination_port_range     = "*"
          source_address_prefix      = "*"
          destination_address_prefix = "*"
        },
        {
          name                       = "nsg-rule-2"
          priority                   = 101
          direction                  = "Outbound"
          access                     = "Allow"
          protocol                   = "*"
          source_port_range          = "*"
          destination_port_range     = "*"
          source_address_prefix      = "*"
          destination_address_prefix = "*"
        }
      ]
      depends_on = [data.azurerm_resource_group.existing]
      tags       = merge(var.tags)
    }

    # Defining the virtual network to use in resources

    module "virtual_network" {
      source              = "../modules/virtual_network"
      name                = "project${var.environment}vnet"
      location            = data.azurerm_resource_group.existing.location
      resource_group_name = data.azurerm_resource_group.existing.name
      address_space       = ["10.0.0.0/16"]
      depends_on          = [data.azurerm_resource_group.existing, module.network_security_group]
      tags                = merge(var.tags)
    }

    # Defining the subnet that will be used to create resources under, later on.

    module "subnet" {
      source                     = "../modules/subnet"
      name                       = "project${var.environment}subnet"
      resource_group_name        = data.azurerm_resource_group.existing.name
      virtual_network_name       = module.virtual_network.virtual_network_name
      subnet_address_prefix      = ["10.0.1.0/24"]
      service_endpoints          = ["Microsoft.Storage", "Microsoft.Web"]
      delegation_name            = "delegation"
      service_delegation_name    = "Microsoft.App/environments"
      service_delegation_actions = ["Microsoft.Network/virtualNetworks/subnets/join/action", "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action"]
      depends_on                 = [data.azurerm_resource_group.existing, module.virtual_network, module.network_security_group]
    }

    # Defining the container app environment. Notice the use of module.subnet.subnet_id: this is how we reference the subnet_id output from the subnet module.

    module "container_app_environment" {
      source                         = "../modules/container_app_environment"
      resource_group_name            = data.azurerm_resource_group.existing.name
      location                       = data.azurerm_resource_group.existing.location
      name                           = "project-${var.environment}-cntr-env"
      log_analytics_workspace_id     = module.log_analytics_workspace.log_analytics_workspace_id
      infrastructure_subnet_id       = module.subnet.subnet_id
      internal_load_balancer_enabled = false
      depends_on                     = [data.azurerm_resource_group.existing, module.subnet]
      tags                           = merge(var.tags)
    }

    # Defining the container registry

    module "container_registry" {
      source                           = "../modules/container_registry"
      resource_group_name              = data.azurerm_resource_group.existing.name
      location                         = data.azurerm_resource_group.existing.location
      name                             = "project${var.environment}cr"
      sku                              = "Standard"
      is_admin_enabled                 = true
      is_public_network_access_enabled = true
      depends_on                       = [data.azurerm_resource_group.existing, module.key_vault]
      tags                             = merge(var.tags)
    }

    # Defining the container apps that will be created under the container app environment created earlier

    module "container_app" {
      source                       = "../modules/container_app"
      resource_group_name          = data.azurerm_resource_group.existing.name
      container_app_environment_id = module.container_app_environment.Environment_ID
      container_registry_server    = "$(acrServer)"
      container_registry_username  = "$(acrUsername)"
      container_registry_password  = "$(acrPassword)"
      container_apps = [

        # Notice the use of $(containerAppSecretKey) and $(containerAppSecretValue) in the secret_name and secret_value respectively

        {
          name                       = "containerapp1-${var.environment}"
          image                      = "mcr.microsoft.com/azuredocs/containerapps-helloworld:latest"
          cpu                        = 0.25
          memory                     = "0.5Gi"
          target_port                = 8080
          transport                  = "http2"
          external_enabled           = true
          allow_insecure_connections = false
          secret_name                = "$(containerAppSecretKey)"
          secret_value               = "$(containerAppSecretValue)"
        },
        {
          name                       = "containerapp2-${var.environment}"
          image                      = "mcr.microsoft.com/azuredocs/containerapps-helloworld:latest"
          cpu                        = 0.25
          memory                     = "0.5Gi"
          target_port                = 8080
          transport                  = "auto"
          external_enabled           = true
          allow_insecure_connections = false
          secret_name                = "$(containerAppSecretKey)"
          secret_value               = "$(containerAppSecretValue)"
        }
      ]
      depends_on = [data.azurerm_resource_group.existing, module.container_app_environment, module.container_registry]
      tags       = merge(var.tags)
    }

    # Defining the network security group association

    module "subnet_nsg_association" {
      source                    = "../modules/subnet_network_security_group_association"
      subnet_id                 = module.subnet.subnet_id
      network_security_group_id = module.network_security_group.id
      depends_on                = [data.azurerm_resource_group.existing, module.subnet, module.network_security_group]
    }

The block below contains outputs.tf, which holds all the outputs we want printed when the Terraform code is run in the terminal / pipeline.

This can include details such as the IPs of newly created services, FQDNs, etc.

One thing to keep in mind: since we are using a module-based approach, each value must first be exported from an outputs.tf inside the module itself before the root module can re-export it during the run (see the sketch after the block below).

    # outputs.tf
    # Here we define the outputs to use for this directory

    # Container App Environment

    output "container_app_environment_default_domain" {
      value = module.container_app_environment.Default_Domain
    }

    output "container_app_environment_docker_bridge" {
      value = module.container_app_environment.Docker_Bridge_CIDR
    }

    output "container_app_environment_environment_id" {
      value = module.container_app_environment.Environment_ID
    }

    output "container_app_environment_static_ip_address" {
      value = module.container_app_environment.Static_IP_Address
    }

    # Container Apps

    output "container_app_latest_fqdn" {
      value = module.container_app.Latest_Revision_Fqdn
    }

    output "container_app_outbound_ips" {
      value = module.container_app.Outbound_Ip_Addresses
    }

    # Container Registry

    output "container_registry_id" {
      value = module.container_registry.id
    }

    output "container_registry_sku" {
      value = module.container_registry.sku
    }

    output "container_registry_registry_server" {
      value = module.container_registry.registry_server
    }

    output "container_registry_admin_enabled" {
      value = module.container_registry.admin_enabled
    }

    output "container_registry_admin_username" {
      value     = module.container_registry.admin_username
      sensitive = true
    }

    output "container_registry_admin_password" {
      value     = module.container_registry.admin_password
      sensitive = true
    }

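For reference, here is a sketch of what the module-side outputs.tf (modules/container_app_environment/outputs.tf) might look like to make those values available to the root module. The attribute names follow the azurerm_container_app_environment resource, while the output names are inferred from the references above:

    # modules/container_app_environment/outputs.tf (hypothetical contents)

    output "Default_Domain" {
      value = azurerm_container_app_environment.this.default_domain
    }

    output "Docker_Bridge_CIDR" {
      value = azurerm_container_app_environment.this.docker_bridge_cidr
    }

    output "Environment_ID" {
      value = azurerm_container_app_environment.this.id
    }

    output "Static_IP_Address" {
      value = azurerm_container_app_environment.this.static_ip_address
    }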

The next block contains provider.tf.

We have used skip_provider_registration = true to skip provider registration, as it can sometimes cause issues during a pipeline run if Terraform checks for registered providers.

Furthermore, this is where we define the minimum version of the provider we are using as well as the required version of the Terraform CLI.

    # provider.tf
    # Here we define the providers to use for this directory

    provider "azurerm" {
      features {}
      skip_provider_registration = true
    }

    terraform {
      required_version = ">= 1.7.5"
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = ">=3.96.0"
        }
      }
    }


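A small aside on the version constraint: >=3.96.0 will happily jump to a new major version of the provider, which can bring breaking changes. If you want minor and patch upgrades without that risk, the pessimistic constraint operator is a common choice:

    terraform {
      required_version = ">= 1.7.5"
      required_providers {
        azurerm = {
          source  = "hashicorp/azurerm"
          version = "~> 3.96" # any 3.x release from 3.96 onwards, but never 4.x
        }
      }
    }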

The next block contains tfvars/dev.tfvars.
As discussed before, sensitive values are stored in pipeline variable groups/secrets; in this case, sql_administrator_login and sql_administrator_login_password.

    # tfvars/dev.tfvars
    # This file defines the variables to use for this project for the dev environment

    project_name                 = "Project"
    environment                  = "dev"
    administrator_login          = "$(sql_administrator_login)"
    administrator_login_password = "$(sql_administrator_login_password)"


Similarly:

    # tfvars/eun_region.tfvars

    region_name  = "northeurope"
    region_short = "eun"


The last variables file is tfvars/tags.tfvars, which defines the tags to apply to the resources. Back in the main.tf file, this is what the tags = merge(var.tags) key-value pair approach picks up.

    # tfvars/tags.tfvars

    tags = {
      ServiceName    = ""
      Department     = "Cloud"
      Environment    = "dev"
      SubEnvironment = "nonProd"
      SystemName     = ""
    }


And finally, variables.tf, which declares the variables and their types:

    # variables.tf

    variable "region_short" {
      type        = string
      description = "Short name of region used in project"
    }

    variable "region_name" {
      type        = string
      description = "Long name of region used in project"
    }

    variable "project_name" {
      description = "Project name"
    }

    variable "environment" {
      type        = string
      description = "Environment name"
    }

    variable "tags" {
      type = map(string)
      default = {
        ServiceName    = ""
        Department     = ""
        Environment    = ""
        SubEnvironment = ""
        SystemName     = ""
      }
    }

    variable "administrator_login" {
      type        = string
      description = "Administrator login"
      sensitive   = true
    }

    variable "administrator_login_password" {
      type        = string
      description = "Administrator login password"
      sensitive   = true
    }

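One more thing Terraform offers here: validation blocks on variables, which fail fast at plan time instead of letting a typo reach the cloud. A sketch for the environment variable, assuming dev/test/prod are the only valid values:

    variable "environment" {
      type        = string
      description = "Environment name"

      validation {
        condition     = contains(["dev", "test", "prod"], var.environment)
        error_message = "environment must be one of: dev, test, prod."
      }
    }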

The Plan Stage

In this stage, we are done defining the Infrastructure as Code configuration; now we need Terraform to generate an execution plan based on the configuration and the existing infrastructure, describing the changes it will make.

Before we generate a plan, it is good practice to make sure the code is valid and free of syntax or reference errors.

This can be done by switching back to your trusty CLI and running:


# This command validates your code and makes sure it's good to go

terraform validate

# And it's just as easy to clean up your code's formatting with one more command

terraform fmt -recursive


The terraform fmt -recursive command formats all .tf and .tfvars files in the current directory and its child directories, and properly indents the code.
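In a pipeline, the -check flag turns fmt into a lint gate: it exits non-zero when any file would be reformatted, without actually modifying it:

# Fails (non-zero exit code) if any file is not properly formatted

terraform fmt -check -recursive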

Running terraform validate against this project confirms the code is valid, printing "Success! The configuration is valid."

Finally, we can generate a plan by running a simple command:

# Command to generate a plan

terraform plan -out=dev.tfplan

# In our case, the variable files live in a directory called tfvars/, so we need to modify the command a little to get the same result

terraform plan -var-file=tfvars/dev.tfvars -var-file=tfvars/eun_region.tfvars -var-file=tfvars/tags.tfvars -out=dev.tfplan



Sample output might look like this:


# module.subnet.azurerm_subnet.this will be updated in-place
  ~ resource "azurerm_subnet" "this" {
        id                                             = "/subscriptions/GUID_HERE/resourceGroups/project-dev/providers/Microsoft.Network/virtualNetworks/project-vnet-dev/subnets/project-subnet-dev"
        name                                           = "project-subnet-dev"
        # (10 unchanged attributes hidden)

      ~ delegation {
            name = "project-subnet-delegation-dev"

          ~ service_delegation {
              ~ actions = [
                    "Microsoft.Network/virtualNetworks/subnets/join/action",
                  + "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action",
                ]
                name    = "Microsoft.App/environments"
            }
        }
    }

Plan: 1 to add, 1 to change, 0 to destroy.

Changes to Outputs:
  ~ storage_account_name                        = "projectblobdev" -> "project1blobdev"
  + storage_account_primary_key                 = (sensitive value)



These changes are computed by comparing your configuration against a .tfstate file, which can live on your local machine or in the cloud, hosted in an S3 bucket or blob storage.

In our example above, we used Azure Blob Storage to host the tfstate file.
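If you ever want to see exactly what Terraform is tracking in that state file, the state subcommands are a quick way to inspect it:

# List every resource address recorded in the state

terraform state list

# Show the recorded attributes of a single resource, e.g. the subnet from the plan output above

terraform state show module.subnet.azurerm_subnet.this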

The Apply Stage

Upon approval of the plan generated in the last step, Terraform applies the proposed changes in the correct order, respecting resource dependencies.


# Apply command for deploying the infrastructure
terraform apply dev.tfplan


This will then, finally, deploy your infrastructure to the cloud. You can also destroy the infrastructure when you are done with your use case.


# Destroy it all by:

terraform destroy -var-file=tfvars/dev.tfvars -var-file=tfvars/eun_region.tfvars -var-file=tfvars/tags.tfvars

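And if you want to review what would be removed before committing to it, plan supports a destroy mode too: generate a destroy plan first, then apply it:

# Preview the destruction and save it as a plan file

terraform plan -destroy -var-file=tfvars/dev.tfvars -var-file=tfvars/eun_region.tfvars -var-file=tfvars/tags.tfvars -out=destroy.tfplan

# Apply the saved destroy plan

terraform apply destroy.tfplan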

All in all, Terraform is a powerful tool for managing infrastructure, allowing you to track changes, maintain consistency, and avoid manual errors.

And that's not all: it also helps keep cloud costs under control, since you can easily create and destroy infrastructure with simple commands.

This automation can be taken further with pipelines and a Git flow that triggers deployments based on the branch that reflects a certain environment... but that's a topic for another day :D

You can find the source code by clicking Here

I hope this article was a fun read and helped you gain some deeper insights into terraform modules and best practices.

Thank you for the read!

Top comments (3)

Jack

Thanks for writing this article! I hadn't heard of Bicep before, so I looked it up and discovered that it's another Infrastructure as Code tool used in Azure. If you're familiar with AWS, is this similar to AWS's CloudFormation tool?

It appears that Bicep is being used to create resources for remotely holding the Terraform state and locks. I'd be interested to see how Bicep is run, and it might be helpful to include a discussion about remote state and locks in your article.

Hassan Aftab

Thanks for the read! Bicep is Azure's version of CloudFormation. I used Bicep to spin up a storage account and a blob container so that I wouldn't need to worry about the tfstate of the blob that manages the tfstates for the actual project.

My current Bicep code requires a resource group to be created first. Then Bicep creates the blob container inside it, after which Terraform is ready to take off. 😊

I will update this later on with how my pipelines handle this use case.

Jack

Looking forward to reading it!