Amash Ansari

Scaling Infrastructure Across Environments with Terraform

In this post, we'll explore how to set up multi-environment infrastructure through a step-by-step project. We'll use modules to write organized, reusable code, allowing us to create multiple environments from a single setup. We'll also improve the security and reliability of our infrastructure by moving our Terraform state file, terraform.tfstate, to a remote backend (an S3 bucket) and implementing state locking with a DynamoDB table.

This post covers advanced Terraform concepts, so it's best if you have a good grasp of Terraform fundamentals and intermediate concepts.

Prerequisites

  • You should be familiar with AWS services like EC2, S3, and DynamoDB.
  • It's crucial to have a strong understanding of Terraform basics.
  • Ensure you have the AWS CLI installed on your system. If it's not already installed, you can set it up by clicking here.

If you're new to Terraform basics, you can get started with this beginner-friendly guide by clicking here.

Exploring the Project's Implementation

  1. We'll complete this project by writing clean, organized code, keeping related configurations together in a dedicated folder. This approach makes the project easier to understand and faster to manage.
  2. We'll make a new folder to store module configurations. In this folder, we'll create configurations for EC2, S3, and DynamoDB, along with a variables file to manage their settings.
  3. We're doing this to write the configurations once and use them to create resources in various environments, such as development, production, and testing. The advantage is that we don't have to write separate configurations for each environment. With modular code, one configuration can create resources for multiple environments.
  4. Last but not least, we'll create a main file that puts these modules to use and creates multiple infrastructure environments.

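Here's roughly how the project tree will look once everything is in place (the root folder name and the individual module file names are illustrative; the rest are the files we create below):

  terraform-multi-env/
  ├── modules/
  │   ├── var-module.tf     # shared input variables
  │   ├── ec2.tf            # EC2 instance
  │   ├── s3.tf             # S3 bucket
  │   └── dynamodb.tf       # DynamoDB table
  ├── main.tf               # calls the module once per environment
  ├── providers.tf          # AWS provider and region
  ├── remote-backend.tf     # state bucket and lock table resources
  └── terraform.tf          # terraform block with the backend "s3" config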

Creating Modules

  • Start by setting up a dedicated 'modules' folder. Open it with your code editor (I'll be using VS Code in this post). Then, create separate files with the .tf extension to define your EC2, S3 bucket, and DynamoDB table configurations.


  • Open the variables file (var-module.tf) and write the script outlined below. It declares the variables for our modules, which the resource configurations will use.
  # Variable for multi-environments
  variable "env" {
    description = "This is the environment value for multiple-environments of our infrastructure"
    type        = string
  }

  # Variable for giving ami
  variable "ami" {
    description = "This is the ami value for EC2"
    type        = string
  }

  # Variable for giving instance type
  variable "instance_type" {
    description = "This is the instance_type for our infrastructure"
    type        = string
  }
  • Now, open the EC2 file and paste the following configuration. It defines the EC2 instance for each environment.
  # EC2 Instance configuration for multi-environments
  resource "aws_instance" "ec2" {
    ami           = var.ami    # Using ami variable
    instance_type = var.instance_type # Using instance_type variable
    tags = {
      Name = "${var.env}-instance" # Using env variable to give instance name
    }
  }
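One thing to note: AMI IDs are region-specific, so the hardcoded ID we'll pass in later only resolves in its own region. As an alternative sketch (not part of the original project), you could look the AMI up with a data source; the owner ID and name filter below target Canonical's public Ubuntu images:

  # Sketch: look up the latest Ubuntu 22.04 AMI instead of hardcoding an ID
  data "aws_ami" "ubuntu" {
    most_recent = true
    owners      = ["099720109477"] # Canonical's AWS account ID

    filter {
      name   = "name"
      values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
    }
  }

  # The instance could then use: ami = data.aws_ami.ubuntu.id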
  • Next, open the S3 file and paste the following configuration. It creates an S3 bucket for each environment.
  # S3 bucket configuration for multi-environments
  resource "aws_s3_bucket" "module-bucket" {
    bucket = "${var.env}-amash-bucket" # Using env variable to give bucket name
    tags = {
      Name = "${var.env}-amash-bucket"
    }
  } # NOTE: Bucket names must be globally unique across all of AWS

  • Afterward, open the DynamoDB file and insert the following configuration. It creates a DynamoDB table for each environment.
  # DynamoDB configuration for multi-environments
  resource "aws_dynamodb_table" "dynamodb-table" {
    name         = "${var.env}-table" 
    billing_mode = "PAY_PER_REQUEST" # On-demand billing mode
    hash_key     = "userID"
    attribute {
      name = "userID" # Name of table attribute
      type = "S" # Type of table attribute i.e. string
    }
    tags = {
      Name = "${var.env}-table" # Using env variable to give table name
    }
  }
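Optionally, you could also give the module an outputs file so each environment reports what it created. This isn't part of the original project; a minimal sketch with illustrative names:

  # modules/outputs.tf (hypothetical) - expose useful attributes per environment
  output "instance_id" {
    description = "ID of this environment's EC2 instance"
    value       = aws_instance.ec2.id
  }

  output "bucket_name" {
    description = "Name of this environment's S3 bucket"
    value       = aws_s3_bucket.module-bucket.bucket
  }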

That's it for the modules. Now, we'll employ these modules in various infrastructure environments.

Creating Main Configurations

  • Begin by crafting separate .tf files for your main configurations: the Terraform settings, the provider, and the backend resources.


  • Next, open the remote-backend.tf file and insert the following configuration. It provisions the S3 bucket that will hold the state file, terraform.tfstate, and the DynamoDB table that Terraform will use for state locking during updates.
  # Remote backend variable for S3 bucket
  variable "state_bucket_name" {
    default = "demoo-state-bucket"
  }

  # Remote backend variable for DynamoDB table
  variable "state_table_name" {
    default = "demoo-state-table"
  }

  # Variable for giving aws-region
  variable "aws-region" {
    default = "us-east-1"
  }

  # Backend resources for S3 bucket
  resource "aws_s3_bucket" "state_bucket" {
    bucket = var.state_bucket_name 
    tags = {
      Name = var.state_bucket_name # Using state_bucket_name variable to give a bucket name
    }
  }

  resource "aws_dynamodb_table" "state_table" {
    name         = var.state_table_name
    billing_mode = "PAY_PER_REQUEST" # On-demand billing mode
    hash_key     = "LockID" # This key will serve as the lock for the infrastructure state

    attribute {
      name = "LockID" # Name of table attribute
      type = "S" # Type of table attribute i.e. string
    }
    tags = {
      Name = var.state_table_name # Using state_table_name variable to give a table name
    }
  }
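State files are valuable, so it's common practice (though not part of the original configuration) to enable versioning on the state bucket. A minimal sketch using the AWS provider's versioning resource:

  # Sketch: keep a history of state file versions in case one is corrupted or deleted
  resource "aws_s3_bucket_versioning" "state_bucket_versioning" {
    bucket = aws_s3_bucket.state_bucket.id

    versioning_configuration {
      status = "Enabled"
    }
  }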
  • Now open the terraform.tf file and write the script described below. It defines the high-level behavior of the configuration: the required provider and the remote backend. Note that a backend block can't reference variables, which is why the bucket, table, and region are hardcoded here.
  # Terraform block
  terraform {
    required_providers {
      aws = {
        source  = "hashicorp/aws"
        version = "~> 5.0"
      }
    }

  # Remote backend configuration
    backend "s3" {
      bucket         = "demoo-state-bucket" # S3 Bucket name
      key            = "terraform.tfstate" # File that we intend to store remotely
      region         = "us-east-1" # Region where the state bucket lives
      dynamodb_table = "demoo-state-table" # DynamoDB table name
    }
  }
  • After that, open the providers.tf file and insert the following configuration. It specifies the AWS region in which resources will be created.
 # Provider configuration for selecting aws region
  provider "aws" {
    region = var.aws-region # Using aws-region variable from remote-backend file
  }

  • Finally, open the main file and add the following configuration. It uses all the modules we've created to set up the different environments of our infrastructure: development, production, and testing.
  # Created development environment of infrastructure
  module "dev" {
    source        = "./modules" # Path to the modules folder
    env           = "dev"       # Values for the variables declared in modules/var-module.tf
    ami           = "ami-053b0d53c279acc90"
    instance_type = "t2.micro"
  }

  # Created production environment of infrastructure
  module "prod" {
    source        = "./modules"
    env           = "prod"
    ami           = "ami-053b0d53c279acc90"
    instance_type = "t2.micro"
  }

  # Created testing environment of infrastructure
  module "test" {
    source        = "./modules"
    env           = "test"
    ami           = "ami-053b0d53c279acc90"
    instance_type = "t2.micro"
  }
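As a design note, the three module blocks differ only in their env value, so a single block with for_each could express the same thing. This is just a sketch of the alternative, not what the project uses:

  # Sketch: one module block that fans out over all environments
  module "envs" {
    source   = "./modules"
    for_each = toset(["dev", "prod", "test"])

    env           = each.key # "dev", "prod", or "test"
    ami           = "ami-053b0d53c279acc90"
    instance_type = "t2.micro"
  }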

Our next step is to run these scripts with Terraform and bring our hard work to a successful outcome.

Things to Remember

We're just a few steps away from making this project functional, but first there's an important point to grasp. Terraform configures the backend before it processes any other configuration. Since our backend relies on the very S3 bucket and DynamoDB table that this configuration creates, pointing the backend at them before they exist would cause an error. To avoid this, we'll start with the backend "s3" {} block in the terraform.tf file commented out and apply the configuration with terraform apply, which creates the state bucket and lock table along with everything else. Then we'll uncomment the backend "s3" {} block and run terraform init and terraform apply again so the state is stored remotely. The full command sequence is sketched below.
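A sketch of that sequence (the -migrate-state flag is optional; a plain terraform init will prompt interactively before copying the state):

  # Step 1: with the backend "s3" block commented out, create everything locally
  terraform init
  terraform apply

  # Step 2: uncomment the backend "s3" block, then re-initialize and migrate
  terraform init -migrate-state   # copies the local terraform.tfstate into the S3 bucket
  terraform apply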


Terraform in Action

  • Execute terraform init to initialize the folder with Terraform.


  • Next, run terraform validate to check the code syntax, and then execute terraform plan to preview the changes that will occur if you apply this configuration.


  • Finally, run terraform apply to apply the configuration, and wait for Terraform to provision the multiple environments for you (don't forget that the backend "s3" {} block in the terraform.tf file should still be commented out at this point).


  • After the changes have been successfully applied, uncomment the backend "s3" {} block in the terraform.tf file. Remember to execute terraform init first: adding a backend changes Terraform's behavior, and init will detect the new backend and offer to migrate the local state into the S3 bucket. Then apply the latest configuration by executing the terraform apply command, as discussed above.

After some time, you'll see all the EC2 instances, S3 buckets, and DynamoDB tables in the AWS console, provisioned with just a handful of commands. With the remote backend and state locking in place, this showcases the power of Terraform: infrastructure that responds to your commands effortlessly.

Conclusion

We've embarked on a journey to establish multi-environment infrastructure through a comprehensive step-by-step project. By leveraging modules, we've structured our code for reusability, simplifying the creation of multiple environments from a single configuration.

We've taken steps to enhance the security and reliability of our infrastructure by seamlessly integrating our Terraform state file (terraform.tfstate) with a remote backend hosted on an S3 bucket. Moreover, we've prioritized infrastructure stability by implementing state locking, with lock entries recorded in a DynamoDB table. This project showcases the power of Terraform, empowering us to manage infrastructure with precision and ease.

Here is the project link🔗 Click to access it.
