Like most startups, we use Terraform to manage and deploy our infrastructure. This post covers how we use Terraform modules at Super to adhere to the DRY principle.
Early in our Terraform refactor we decided to invest in modules, with the goal of promoting reusability while minimising duplicated code.
At the time of writing, Super has around 70 Terraform modules in use across 10 providers. Some of the modules are small (e.g. IAM Role) and some are larger (e.g. EKS Cluster).
Template Module & Code Style 📝
To keep module creation in line with our style guide, we have a template module. Some of the rules below are general best practice and some are specific to Super.
- We don't include provider configurations
- We don't include any backend configuration
- A `data.tf` file is used for all `data` resources
- An `outputs.tf` file is used for all output resources
- A `variables.tf` file is used for all variables
- A `versions.tf` file is used for `required_providers` and `required_version` (see the sketch after this list)
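As an illustration, here is a minimal sketch of what a `versions.tf` in one of these template modules might look like. The provider and version constraints are assumptions for the example, not our actual pins.

# versions.tf — hypothetical example; the version constraints are illustrative
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}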
Why no provider?! 😱
The primary reason we avoid including provider configurations in our modules is to facilitate nesting modules. Nesting modules can be beneficial because it keeps commonly used resources in a standardised format across modules.
If a module contains its own provider configuration, Terraform deems it incompatible with `count`, `for_each`, and `depends_on` when it is called from another module.
We started out by only removing providers from nested modules, but decided we could use Terragrunt's `generate` and `include` blocks to remove providers from all modules.
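To show why this matters, here is a hedged sketch of a caller: because the child module declares no provider blocks, the provider is configured once at the call site and `for_each` (or `count` / `depends_on`) can be used on the module call. The module's `name` variable is hypothetical.

# Hypothetical caller: the provider lives here, not inside the module,
# so for_each works on the module call.
provider "aws" {
  region = "eu-west-2"
}

module "iam_role" {
  source   = "git@github.com:organisation/terraform-example-module.git?ref=v1.0.0"
  for_each = toset(["reader", "writer"])

  name = each.key
}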
Let's take the following directory structure for AWS as an example. We have a folder for the AWS region (eu-west-2), plus a few `.hcl` files.
├── super-staging
│ ├── eu-west-2
│ ├── aws.hcl
│ ├── terragrunt.hcl
│ └── vault.hcl
The `aws.hcl` file uses a Terragrunt `generate` block to generate a file in the Terragrunt working directory (where Terraform is called).
generate "aws" {
path = "aws.tf"
if_exists = "overwrite_terragrunt"
contents = <<EOF
provider "aws" {
region = "eu-west-2"
default_tags {
tags = {
environment = "staging",
}
}
}
EOF
}
When using a module with Terragrunt, you can then use an `include` block with the `find_in_parent_folders` function.
include "aws" {
path = find_in_parent_folders("aws.hcl")
}
terraform {
source = "git@github.com:organisation/terraform-example-module.git?ref=v1.0.0"
}
Remote State
We use S3 as our state store, along with DynamoDB for locking, all encrypted with KMS.
The `terragrunt.hcl` at the root of the directory contains three things: the Terragrunt `remote_state` block, an `iam_role`, and some default `inputs`.
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    bucket                = "super-staging-eu-west-2-example-bucket"
    key                   = "${path_relative_to_include()}/terraform.tfstate"
    region                = "eu-west-2"
    encrypt               = true
    dynamodb_table        = "super-staging-eu-west-2-example-table"
    kms_key_id            = "alias/s3-super-staging-eu-west-2-example-kms"
    disable_bucket_update = true
  }
}

iam_role = "arn:aws:iam::<snip>:role/example-role"

inputs = {
  environment    = "staging"
  aws_account_id = "<snip>"
  service_owner  = "devops"
}
We then add the include like we do with the AWS provider. By default, `find_in_parent_folders` will search for the first `terragrunt.hcl` file it finds in a parent directory.
include "root" {
path = find_in_parent_folders()
expose = true
}
Versioning 🔢
Our Platform team are enthusiastic about semantic versioning, and we also use Conventional Commits.
We have a simple GitHub Actions job in each module repository that uses the `semantic-release-action`. We use the `@semantic-release/commit-analyzer` plugin with the `conventionalcommits` preset, so a `fix:` commit produces a patch release, a `feat:` commit a minor release, and a breaking change a major release.
- name: Release
  uses: cycjimmy/semantic-release-action@v4
  with:
    semantic_version: 23.0.2
    extra_plugins: |
      @semantic-release/changelog@6.0.3
      @semantic-release/git@10.0.1
      conventional-changelog-conventionalcommits@7.0.2
  env:
    GITHUB_TOKEN: ${{ secrets.CI_GITHUB_TOKEN }}