Migrate Terraform backend local to S3 and DynamoDB with multiple AWS accounts

Situation

You were the only developer in your company. You managed AWS via Terraform and stored the state on your local machine. As the company grew, you had to work with new colleagues, and you needed to bring in a remote backend for several reasons: collaboration, state management, disaster recovery, and so on.

It's a common situation at early-stage startups. So, let's begin.

Multiple AWS accounts

Using multiple accounts provides many benefits for managing AWS infrastructure. One is separating workloads into development and production. In addition, a dedicated account for common resources like secrets and configuration makes things even cleaner. There are lots of best practices; you can find them in Organizing Your AWS Environment Using Multiple Accounts.

Prerequisite

I assume we have two OUs (Organizational Units) and three accounts:

  • Fundamental (OU)
    • Infrastructure (account_id: 999999999999)
  • Workloads (OU)
    • development (account_id: 111111111111)
    • production (account_id: 222222222222)

The account_id values are generated by AWS

This isn't a recommendation, just a basic OU architecture. We use AWS IAM Identity Center, formerly AWS SSO, to ease logging in to the AWS Console and AWS CLI. This article is not about Identity Center, so we will skip the details. But don't worry: if you are using IAM users, you can follow along without any problem.

Let's dive in

From now on, assume we have a Terraform project laid out like this:

$ tree . -d
.
├── global
│   ├── 01-network
│   └── 20-ecs
└── services
    ├── service-a
    ├── service-b
    └── service-c

Create a new Terraform project

Create a directory, init, for the S3 and DynamoDB resources, and initialize it:

$ mkdir global/init
$ cd global/init
$ echo 'provider "aws" {}' > main.tf
$ terraform init

You should see a message like "Terraform has been successfully initialized".

Configure AWS SSO profiles

Create three profiles, dev, prod, and infra, via the AWS CLI. See the details in AWS CLI Configure SSO.

$ aws configure sso

You can list the profiles via:

$ aws configure list-profiles
dev
prod
infra
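For reference, the resulting ~/.aws/config looks roughly like this. This is a sketch; the sso_start_url and role name are placeholders, so substitute your own values:

# ~/.aws/config
[profile dev]
sso_start_url  = https://my-company.awsapps.com/start  # placeholder
sso_region     = ap-northeast-2
sso_account_id = 111111111111
sso_role_name  = AdministratorAccess
region         = ap-northeast-2

[profile prod]
sso_start_url  = https://my-company.awsapps.com/start  # placeholder
sso_region     = ap-northeast-2
sso_account_id = 222222222222
sso_role_name  = AdministratorAccess
region         = ap-northeast-2

[profile infra]
sso_start_url  = https://my-company.awsapps.com/start  # placeholder
sso_region     = ap-northeast-2
sso_account_id = 999999999999
sso_role_name  = AdministratorAccess
region         = ap-northeast-2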

Create the S3 bucket and DynamoDB table

We are going to create the S3 bucket and DynamoDB table under the infra profile. Modify main.tf:

# main.tf
provider "aws" {
  region  = "ap-northeast-2"  # replace with your region
  profile = "infra"  # You must use the `infra` profile, which is in the Fundamental OU
}

locals {
  # Use other ways to hide sensitive info e.g. `terraform.tfvars`
  principals = {
    dev   = "arn:aws:iam::111111111111:role/aws-reserved/sso.amazonaws.com/ap-northeast-2/AWSReservedSSO_AdministratorAccess_dev"
    prod  = "arn:aws:iam::222222222222:role/aws-reserved/sso.amazonaws.com/ap-northeast-2/AWSReservedSSO_AdministratorAccess_prod"
    infra = "arn:aws:iam::999999999999:role/aws-reserved/sso.amazonaws.com/ap-northeast-2/AWSReservedSSO_AdministratorAccess_infra"
  }
}

resource "aws_s3_bucket" "tfstate" {
  bucket = "tfstate-bucket"
}

resource "aws_s3_bucket_versioning" "this" {
  bucket = aws_s3_bucket.tfstate.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_dynamodb_table" "tfstate_lock" {
  name         = "tfstate-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"  # Same as hash_key above
    type = "S"
  }
}

resource "aws_s3_bucket_policy" "this" {
  bucket = aws_s3_bucket.tfstate.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = "s3:ListBucket"
        Resource = "arn:aws:s3:::${aws_s3_bucket.tfstate.bucket}"
        Principal = {
          AWS = [for _, v in local.principals : v]
        }
      },
      {
        Sid      = "AllowAllS3ActionsInDev"
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
        Resource = "arn:aws:s3:::${aws_s3_bucket.tfstate.bucket}/services/dev/*"
        Principal = {
          AWS = local.principals.dev
        }
      },
      {
        Sid      = "AllowAllS3ActionsInProd"
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
        Resource = "arn:aws:s3:::${aws_s3_bucket.tfstate.bucket}/services/prod/*"
        Principal = {
          AWS = local.principals.prod
        }
      }
    ]
  })
}


We split development and production state into different key paths:

  • arn:aws:s3:::${aws_s3_bucket.tfstate.bucket}/services/dev/*
  • arn:aws:s3:::${aws_s3_bucket.tfstate.bucket}/services/prod/*

Also, write actions (s3:PutObject, etc.) are allowed only on each environment's own path. This prevents one environment from updating tfstate it doesn't own.
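Now plan and apply the configuration under the infra profile. One caveat not shown above: S3 bucket names are globally unique, so in practice you will likely need a more distinctive name than tfstate-bucket.

$ cd global/init
$ terraform apply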

Update legacy resource, service-a

Suppose we have an already-initialized Terraform project with the file services/service-a/versions.tf:

# services/service-a/versions.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.65.0"
    }
  }
}

provider "aws" {
  region = "ap-northeast-2"
}

This is the legacy configuration. Before modifying the file, we are going to create workspaces so each environment has a single source of truth (SSOT). Create the Terraform workspaces dev and prod:

$ terraform workspace new dev
$ terraform workspace new prod

You can see the workspace list via:

$ terraform workspace list
  default
  dev
* prod

FYI, the default workspace cannot be deleted.

Let's modify services/service-a/versions.tf:

# services/service-a/versions.tf
terraform {
  backend "s3" {
    bucket               = "tfstate-bucket"
    workspace_key_prefix = "services"  # default is `env:`
    key                  = "service-a/terraform.tfstate"
    region               = "ap-northeast-2"
    dynamodb_table       = "tfstate-lock"  # state locking with the DynamoDB table we created

    profile = "infra"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.65.0"
    }
  }
}

provider "aws" {
  region  = "ap-northeast-2"
  profile = terraform.workspace  # It should be `dev` or `prod`
}
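Note how the pieces line up: with workspace_key_prefix = "services", the S3 backend stores each workspace's state at services/<workspace>/service-a/terraform.tfstate, which is exactly the per-environment path the bucket policy allows:

s3://tfstate-bucket/services/dev/service-a/terraform.tfstate
s3://tfstate-bucket/services/prod/service-a/terraform.tfstate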

Finally, migration

Run terraform init in the terminal.

$ terraform init

Initializing the backend...
Do you want to migrate all workspaces to "s3"?
  Both the existing "local" backend and the newly configured "s3" backend
  support workspaces. When migrating between backends, Terraform will copy
  all workspaces (with the same names). THIS WILL OVERWRITE any conflicting
  states in the destination.

  Terraform initialization doesn't currently migrate only select workspaces.
  If you want to migrate a select number of workspaces, you must manually
  pull and push those states.

  If you answer "yes", Terraform will migrate all states. If you answer
  "no", Terraform will abort.

  Enter a value: 

Enter yes and you have just migrated your backend to S3! 🎉
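If you want to double-check, you can list the migrated state objects with the AWS CLI:

$ aws s3 ls s3://tfstate-bucket/services/ --recursive --profile infra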

If you want to apply to another workload, just switch the Terraform workspace:

$ terraform workspace select dev
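Because the provider's profile is set to terraform.workspace, the next plan or apply automatically targets the development account (111111111111):

$ terraform plan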
