Jeiman Jeya

Best Practices for Building Multi-Cloud Modules in Terraform

Organisations today are increasingly adopting cloud services from different providers to meet their business needs. However, provisioning and managing infrastructure across multiple providers can be challenging, which is where Terraform comes in. With Terraform, you can utilise modules to build a comprehensive, multi-cloud configuration.

In this post, we will cover the best practices for creating Terraform module configurations that are robust and reusable for large-scale, multi-cloud architectures. With this knowledge, you can confidently build and deploy infrastructure across providers.

The Gold Standard - Best Practices for Building Cloud/Multi-Cloud Modules

  • Build modules to be easily and widely reusable.
  • Create cloud-agnostic modules that support multiple providers, such as AWS and Azure, for multi-cloud architecture.
  • Avoid hard-coded or semi-hard-coded values at the module level; instead, use input arguments.
  • Use conditional expressions and feature toggles to create comprehensive and flexible modules.
  • Group common resources into the same folders.

The Foundation - The Folder Structure

It all starts with the folder structure. How we structure it will determine how we can further automate our IaC workflows.

The folder structure can be broken down into 2 sections:

  • Modules: Allows you to create logical abstractions on top of a set of resources. In other words, it lets you group resources together and reuse this group later, possibly multiple times. Once you create these modules, they become “child modules”, which can then be referenced in configuration files.
  • Patterns: Repeatable configurations that call and compose the child modules. These are also referred to as “root modules”.

[Image: Terraform modules - a diagram depicting a Terraform module grouping resources together]

Philosophy on grouping similar and common resources into the same folders

It's best to store resources that share a common infrastructure resource category in the same folder. For example, consider all elements of networking, such as VPCs, subnets, security groups/firewalls, network security groups, load balancers, key vaults, and secrets. All of these either belong to the same underlying cloud category or they form the foundation of your cloud architecture. To keep things organized, it's recommended to group all of these resources in a folder labelled "foundation". As the name implies, they are the essential building blocks for any cloud architecture you set up for a company.

Use case example: Your company needs to set up the foundation of its cloud architecture and, at the same time, create a Kubernetes cluster with various node pools and settings that references the foundation resources as dependencies. This is followed by creating SQL databases, blob storages/S3 buckets, and more that will be used across the company. There are also plans for future projects that may not share the same folder structure as the current ones, but will reside in a separate folder that reuses the same Terraform modules.

IaC stages

These folders can be called "stages" because certain resources need to be created in a specific order. Stages therefore have sequences, which act like orchestration steps: item 1 must be applied before item 2, and so on. Folder names follow the {sequence}_{stage} pattern:

  • sequence - the order in which to apply the Terraform resources, e.g. 01, 02, 03
  • stage - the infrastructure category, e.g. foundation, common, kubernetes

Therefore, the folders can be organised in the following manner:

  • 01_foundation - contains all networking resources, along with load balancers, CDNs, key vaults, and secrets
  • 02_kubernetes - contains all Kubernetes-related resources, which reference the networking resources in the 01_foundation folder as dependencies
  • 03_common - contains all the “common” resources that are usually created in any tech company - S3 buckets, blob storages, NoSQL databases, SQL databases, and more
  • 04_custom_project_name - contains custom resources that need to be isolated for better management on special or custom projects

With stages, you can expand the folders to meet your engineering needs when provisioning new resources that are tied to a specific project. This allows teams to group related resources together in the same folder, resulting in quicker Terraform runs.

Visual representation of the folder structure for AWS Cloud

The visual representation below depicts the idea of structuring your Terraform configuration into a "patterns" folder. As mentioned above, a "patterns" folder contains Terraform configurations that are repeated across environments and that call the child modules. With this approach, you can create several instances of the same resource with different naming conventions and settings to meet your technical requirements.

terraform/
  patterns/
    iac/
      test/
        01_foundation/
          aws-vpc.tf
          aws-subnets.tf
          aws-cdn.tf
          aws-secrets-manager.tf
        02_kubernetes/
          aws-eks.tf
          aws-kubernetes-cluster.tf
          .....
        03_common/
          aws-s3.tf
          aws-rds.tf
          aws-dynamodb.tf
        .....
  modules/
    aws/
      aws-vpc/
        main.tf
        output.tf
        variables.tf
      aws-s3/
        main.tf
        output.tf
        variables.tf
      ....

The folder structure mentioned above is just the beginning. We will further expand on this structure later in this post.

The Modularity - Building modules to be reusable

To build a robust child module that can be reused many times without modification, start your design process from the root module. The role of a child module is simply to process input arguments and create the necessary resources. However, some teams end up creating bespoke child modules for specific requirements, which can cause problems later on: changing the behaviour of such a child module can affect resources that have already been created from it. This issue can be avoided by defining those values and data objects in the root module and passing them in as inputs.

Method 1 - Use conditional expressions and feature toggles from the beginning

Whatever your overall IaC configuration ends up looking like, keep the "feature toggle" concept in mind when building your modules from the beginning. This means creating resources behind a toggle switch, which gives you the flexibility to create, delete, and re-create those resources from the root module instead of the child module. Your child modules can also expose feature toggles, but they should implement them as conditional expressions driven by the root module's inputs.
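
A common way to implement such a toggle is a count-based switch in the child module, driven by a boolean input from the root module. Below is a minimal, standalone sketch, separate from the S3 example that follows; the create_bucket and toggle_bucket_name variable names are illustrative:

# Standalone sketch of a count-based feature toggle (variable names are illustrative)
variable "create_bucket" {
  description = "Feature toggle - set to false to skip or destroy the bucket"
  type        = bool
  default     = true
}

variable "toggle_bucket_name" {
  type = string
}

# count acts as the toggle switch: 1 resource when enabled, 0 when disabled
resource "aws_s3_bucket" "toggled" {
  count  = var.create_bucket ? 1 : 0
  bucket = var.toggle_bucket_name
}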

For example, suppose you need to create two different types of S3 buckets: one with public access and the other with private access. You can use the following feature toggle logic in your root module, and a conditional expression in the child module.

Root module

# Root module

module "project-zeus-s3-bucket-public" {
  source = "./aws/aws-s3-bucket"
  bucket_name = "public-app-bucket"
  is_private = false
}

module "project-zeus-s3-bucket-private" {
  source = "./aws/aws-s3-bucket"
  bucket_name = "private-app-bucket"
  is_private = true
}

Child module

# Child module

resource "aws_s3_bucket" "bucket" {
  bucket = var.bucket_name

  tags = {
    Name = var.bucket_name
  }
}

resource "aws_s3_bucket_acl" "bucket_acl" {
  bucket = aws_s3_bucket.bucket.id
  acl    = var.is_private == true ? "private" : "public-read"
}

As you can see, we’re passing the is_private argument in from the root module, and the child module uses a conditional expression to check whether the boolean value is true or false. Based on that condition, the Access Control List for the bucket is set to either private or public accordingly.
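
For completeness, the child module would also declare these inputs in its variables.tf; a minimal sketch under that assumption:

# Child module - variables.tf (sketch)
variable "bucket_name" {
  description = "Name of the S3 bucket to create"
  type        = string
}

variable "is_private" {
  description = "Feature toggle - true sets the ACL to private, false to public-read"
  type        = bool
  default     = true
}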

Method 2 - Passing all data from the root module into the child module

Imagine your child module is designed to only receive instructions from the root module and create resources without manipulating the data. You can still achieve the same outcome as above by passing all of the argument values from the root module; the child module then simply consumes them.

Root module

# Root module

module "s3-bucket-public" {
  source      = "./aws/aws-s3-bucket"
  bucket_name = "public-app-bucket"
  is_private  = false
  acl         = "public-read"
  policy      = file("policy.json")
  website     = {
    index_document = "index.html"
    error_document = "error.html"

    routing_rules = <<-EOF
      [{
      "Condition": {
         "KeyPrefixEquals": "docs/"
      },
      "Redirect": {
          "ReplaceKeyPrefixWith": "documents/"
      }
      }]
    EOF
  }
}

Child module

# Child module
...
....

resource "aws_s3_bucket" "bucket" {
  bucket = var.bucket_name
  policy = var.policy

  website {
    index_document = var.website.index_document
    error_document = var.website.error_document
    routing_rules  = var.website.routing_rules
  }

  tags = {
    Name        = var.bucket_name
  }
}

resource "aws_s3_bucket_acl" "bucket_acl" {
  bucket = aws_s3_bucket.bucket.id
  acl    = var.acl
}
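
As with Method 1, the child module must declare a variable for everything the root module passes in. A minimal variables.tf sketch for this example, where the website object type mirrors the attributes used above (the defaults are illustrative):

# Child module - variables.tf (sketch)
variable "bucket_name" {
  type = string
}

variable "acl" {
  description = "Canned ACL to apply, e.g. private or public-read"
  type        = string
  default     = "private"
}

variable "policy" {
  description = "Bucket policy document as a JSON string"
  type        = string
  default     = null
}

variable "is_private" {
  description = "Accepted for parity with Method 1; not used by the resources above"
  type        = bool
  default     = true
}

variable "website" {
  description = "Static website hosting settings"
  type = object({
    index_document = string
    error_document = string
    routing_rules  = string
  })
  default = null
}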

To summarise, either of these approaches will ensure that your child modules are reusable at scale.

The Jack of All Trades - Building an IaC stack to be Cloud-agnostic

By following the steps and methods above, your organisation can easily adopt a multi-cloud architecture. It's the structure of your folders that enables this approach: you keep the same common-resources folder concept and simply create new folders for each cloud provider you would like to adopt in your IaC stack.

Root module: visual representation

terraform/
  patterns/
    iac/
      test/
        01_foundation/
          aws/
            aws-vpc.tf
            aws-subnets.tf
            aws-cdn.tf
            aws-secrets-manager.tf
          azure/
            az-vnet.tf
            az-subnets.tf
            az-cdn.tf
            az-key-vault.tf
          gcp/
            gcp-vpc.tf
            gcp-subnets.tf
            gcp-cdn.tf
            gcp-key-management.tf

Child module: visual representation

terraform/
  modules/
    aws/
      aws-vpc/
        main.tf
        output.tf
        variables.tf
      aws-s3/
        main.tf
        output.tf
        variables.tf
    azure/
      azure-vnet/
        main.tf
        output.tf
        variables.tf
      azure-storage-account/
        main.tf
        output.tf
        variables.tf
    .....

As you can see, it's easy to adopt any cloud provider of your choice into your IaC stack if the folders are structured properly.
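
For example, an Azure pattern file can call an azure-storage-account child module in exactly the same way the AWS patterns call the S3 module. A hypothetical sketch - the module name, input arguments, and relative source path are illustrative and depend on how the child module is written and where the pattern file sits:

# Root module (pattern) for an Azure stage - names and paths are illustrative
module "project-zeus-storage-account" {
  source               = "../../modules/azure/azure-storage-account" # adjust to your folder depth
  storage_account_name = "zeusappstorage"
  resource_group_name  = "rg-zeus-test"
  location             = "westeurope"
  is_private           = true
}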

Folder breakdown

Patterns

  1. iac - represents the IaC tooling in use (Terraform in this case)
  2. test - represents the environment you’re provisioning the resources in. You may have entirely different resources for the uat and prod environments, so it’s best to keep them separate.
  3. 01_stage - the {sequence}_{stage} folders that group the root modules for each infrastructure category
  4. [provider-resource].tf - contains the root modules for that infrastructure category

The Automaton - Build a single multi-cloud, multi-environment pipeline

Based on the discussion above, we can now build a robust and modular pipeline to support running Terraform on any cloud provider and environment. In this example, we'll be using Azure DevOps. However, the same principles can be applied to any CI/CD toolset.

To cater to a multi-cloud and multi-environment architecture, it's best to build a modular and reusable pipeline. This involves creating a parent pipeline that invokes a template pipeline to execute Terraform commands using template parameters.

Parent pipeline

With a parent pipeline, you can define all the values and content that need to be passed into the template (or child) pipeline as recognised input arguments.

# iac automation - multi-stage ci-cd pipeline

name: $(BuildDefinitionName)-$(Build.BuildId)-$(date:yyyyMMdd)$(rev:.r)

pool:
  vmImage: "ubuntu-latest"

parameters:
  - name: iacEnvironment
    displayName: IAC Environment (test, prod)
    type: string
    default: test
    values:
      - test
      - prod
  - name: cloudProvider
    displayName: Cloud Provider (aws, azure)
    type: string
    default: aws
    values:
      - aws
      - azure

variables:
  - template: ../../variables/generic-variables.yml
  - ${{ if eq(parameters.iacEnvironment, 'test') }}:
      - template: ../../variables/test-variables.yml
  - ${{ if eq(parameters.iacEnvironment, 'prod') }}:
      - template: ../../variables/prod-variables.yml
  - template: ../../variables/foundation-variables.yml

trigger:
  branches:
    include:
      - master
  paths:
    include:
      - "azure-devops/iac/pipelines/01-foundation-pipeline.yml"
      - "terraform/patterns/iac/test/aws/01_foundation/**"
      - "terraform/patterns/iac/test/azure/01_foundation/**"

pr: none

stages:
  - stage: planning_stage
    displayName: Planning
    jobs:
      - template: ../../templates/planning-template.yml
        parameters:
          terraformarguments: >-
            $(terraformarguments)
          environment: ${{ parameters.iacEnvironment }}
          sequence: $(sequence)
          stage: $(stage)
          cloud: ${{ parameters.cloudProvider }}

  - stage: deployment
    dependsOn: planning_stage
    condition: succeeded('planning_stage')
    displayName: Deployment
    jobs:
      - template: ../../templates/deployment-template.yml
        parameters:
          terraformarguments: >-
            $(terraformarguments)
          environment: ${{ parameters.iacEnvironment }}
          sequence: $(sequence)
          stage: $(stage)
          cloud: ${{ parameters.cloudProvider }}

Parent pipeline breakdown

The goal is to create a single pipeline that can achieve a multi-cloud and multi-environment approach to running Terraform commands. However, there may be edge cases where you need to define separate pipelines for each environment. In the example above, we're using a single pipeline approach.

NOTE: This post will not go into too much detail about the components in the parent pipeline, as they differ from one CI/CD toolset to another. However, the general concept can be inferred from your understanding and experience of using CI/CD toolsets.

Template Parameters

We are utilising Azure DevOps' native runtime parameters. These are template parameters that allow us to define values that are selected at queue time and used during the run. For the example above, we’re specifying 2 parameters:

  • IaC environment
  • Cloud provider

With these two values, we can dynamically invoke the corresponding root modules in Terraform to create, update, or delete our resources.

Variable Groups

We have organised the necessary IaC stage and environment details into user-friendly, easily identifiable variable template files:

  • generic-variables.yml - contains all of the generic information, such as shared Terraform secrets and shared credentials
  • test-variables.yml - contains all of the environment-specific values, such as Terraform backend storage names (S3 bucket or Azure storage account names); a minimal sketch of such a file follows this list
  • foundation-variables.yml - contains all of the IaC stage information, such as Terraform arguments, the stage folders to load (foundation, common, ecs, kubernetes), location, and more for that specific IaC stage
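
A minimal sketch of what such an environment file might contain - the variable names and values are illustrative, not taken from the pipeline above:

# test-variables.yml (sketch - names and values are illustrative)
variables:
  - name: environment
    value: test
  - name: terraform_backend_storage
    value: my-company-terraform-state-test
  - name: location
    value: eu-west-1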

Path Triggers

As shown in the example above, we added both the AWS and Azure folder paths so that the pipeline triggers on changes to either. However, since we are using template parameters, a manual pipeline run is required to deploy to non-default environments and cloud providers. Path triggers differ between CI/CD toolsets; in many of them you can provide a wildcard that matches all cloud providers for the foundation IaC stage, as shown in the sketch below.
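
For Azure DevOps, which supports wildcards in YAML path filters, the two explicit cloud paths could be collapsed into a single entry; a sketch, assuming the same repository layout as the trigger above:

trigger:
  branches:
    include:
      - master
  paths:
    include:
      # One wildcard entry covers every cloud provider folder for the foundation stage
      - "terraform/patterns/iac/test/*/01_foundation/**"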

Pipeline Stages / Jobs

This section is where all of your template parameters and variable groups are injected to be utilised by the template pipeline.

Template pipeline or child pipeline

As the name suggests, templates allow you to define reusable content, logic, and parameters into smaller parts that are easier to understand.

Parameters and variables passed from the parent pipeline can be processed to dynamically set the folders for your IaC stage and cloud provider. The following example demonstrates a step in a job that executes a Terraform plan. Depending on your CI/CD toolset and the marketplace plugins you use, you can write conditional logic to switch steps based on the cloud provider.

- task: TerraformCLI@0
  displayName: "Terraform Plan"
  name: tfPlanOutput
  inputs:
    command: "plan"
    workingDirectory: "$(System.DefaultWorkingDirectory)/terraform/patterns/iac/$(environment)/${{ parameters.cloud }}/${{ parameters.sequence }}_${{ parameters.stage }}"
    providerServiceAws: "$(aws_service_connection)"
    providerAwsRegion: "$(location)"
    commandOptions: '-out="$(System.DefaultWorkingDirectory)/terraform/patterns/iac/$(environment)/${{ parameters.cloud }}/${{ parameters.sequence }}_${{ parameters.stage }}/terraform.tfplan" -detailed-exitcode -var-file="$(System.DefaultWorkingDirectory)/terraform/patterns/iac/$(environment)/${{ parameters.sequence }}_${{ parameters.stage }}/values.tfvars" ${{ parameters.terraformarguments }}'
    allowTelemetryCollection: false
    publishPlanResults: "terraform_$(environment)_output_${{ parameters.sequence }}_${{ parameters.stage }}_plan"
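
On the receiving end, the template (child) pipeline declares the parameters it expects from the parent, and can map the environment parameter to a job variable so that the $(environment) macro used above resolves. A structural sketch - the job and step contents are illustrative:

# planning-template.yml (structural sketch - parameter names match the parent pipeline)
parameters:
  - name: terraformarguments
    type: string
    default: ""
  - name: environment
    type: string
  - name: sequence
    type: string
  - name: stage
    type: string
  - name: cloud
    type: string

jobs:
  - job: terraform_plan
    displayName: Terraform Plan
    variables:
      # Expose the template parameter as a runtime variable so $(environment) resolves in steps
      environment: ${{ parameters.environment }}
    steps:
      # ...Terraform install/init steps, followed by the TerraformCLI@0 plan step shown above
      - script: echo "Planning ${{ parameters.cloud }} ${{ parameters.sequence }}_${{ parameters.stage }}"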

The Conclusion

In today's world, organisations are increasingly adopting cloud services from various providers to meet their business needs. As a result, multi-cloud infrastructure is becoming more common, and managing it efficiently is becoming more challenging. Terraform is a popular tool that simplifies the process of infrastructure management by providing a declarative language for defining infrastructure as code (IaC).

In this context, building multi-cloud modules in Terraform requires a well-structured folder system, a focus on reusability, and a cloud-agnostic IaC stack. By following the steps outlined in this post, you can make your Terraform configuration more modular, easier to maintain, and more adaptable to changes in your cloud infrastructure.
