Lewis Stevens for AWS Community Builders

Creating a credential-less Terraform pipeline

This series of posts goes in-depth on creating a Terraform pipeline that does not require any access keys to be stored in your GitHub, GitLab or BitBucket pipelines.



Initial setup

We will require a state bucket so your Terraform repositories can read and write state on each execution, as well as an IAM user that can be used to run Terraform locally until other methods, such as SSO or Active Directory, are implemented.

The first step is to log into your AWS account, navigate to CloudFormation and create a stack from the following template.

cloudformation.yml
AWSTemplateFormatVersion: 2010-09-09

Resources:
  TfBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled

  TfUser:
    Type: AWS::IAM::User
    Properties:
      Policies:

        - PolicyName: PermissionForOpenIdConnectModule
          PolicyDocument:
            Version: 2012-10-17
            Statement:

              - Effect: Allow
                Action:
                  - iam:*OpenIDConnectProvider
                Resource:
                  - Fn::Sub:
                      - 'arn:aws:iam::${AccountId}:oidc-provider/*'
                      - AccountId: !Ref AWS::AccountId

              - Effect: Allow
                Action:
                  - iam:*Role*
                Resource:
                  - Fn::Sub:
                      - 'arn:aws:iam::${AccountId}:role/identity-provider-github-assume-role'
                      - AccountId: !Ref AWS::AccountId
                  - Fn::Sub:
                      - 'arn:aws:iam::${AccountId}:role/identity-provider-gitlab-assume-role'
                      - AccountId: !Ref AWS::AccountId
                  - Fn::Sub:
                      - 'arn:aws:iam::${AccountId}:role/identity-provider-bitbucket-assume-role'
                      - AccountId: !Ref AWS::AccountId

        - PolicyName: PermissionToBucketState
          PolicyDocument:
            Version: 2012-10-17
            Statement:

              - Effect: Allow
                Action:
                  - s3:ListBucket
                Resource:
                  - !GetAtt TfBucket.Arn

              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:PutObject
                  - s3:DeleteObject
                Resource:
                  - Fn::Sub:
                      - "${BucketArn}/*"
                      - BucketArn: !GetAtt TfBucket.Arn

  # Generates the Access Key
  TfAccessKey:
    Type: AWS::IAM::AccessKey
    Properties:
      # Increment serial to rotate key
      Serial: 1
      Status: Active
      UserName: !Ref TfUser

  # Adds the Credentials to Secrets Manager
  TfSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      Description: Credentials for Terraform.
      Name: TERRAFORM_CREDENTIALS
      SecretString: !Sub
        - '{"AWS_ACCESS_KEY_ID":"${AWS_ACCESS_KEY_ID}","AWS_SECRET_ACCESS_KEY":"${AWS_SECRET_ACCESS_KEY}","AWS_DEFAULT_REGION":"${AWS_DEFAULT_REGION}"}'
        - AWS_ACCESS_KEY_ID: !Ref TfAccessKey
          AWS_SECRET_ACCESS_KEY: !GetAtt TfAccessKey.SecretAccessKey
          AWS_DEFAULT_REGION: !Ref "AWS::Region"

Outputs:
  BucketName:
    Description: The name of the state bucket used for Terraform State.
    Value: !Ref TfBucket

  BucketRegion:
    Description: The region of the state bucket that will be used for Terraform State.
    Value: !Ref AWS::Region


This will give you the bucket name and region as outputs, as we are letting CloudFormation autogenerate the bucket name to prevent conflicts; however, you can use the BucketName property to specify a name in the template yourself.


Project structure

We will be creating the project structure below, starting with the module.

├── backend.tf
├── main.tf
├── modules
│   └── openid_connect
│       ├── experiments.tf
│       ├── locals.tf
│       ├── main.tf
│       └── variables.tf
├── providers.tf
└── terraform.tfplan

Creating the module

I have abstracted this into a module called openid_connect; this separates it from the rest of the code and improves readability.

The first file we need to create contains the values we want to pass in from outside the module. We are using the optional type constraint because we want to build our defaults from other variables later.

openid_connect/variables.tf
variable "github" {
  type = object({
    enabled        = bool
    workspace_name = string
    repositories   = optional(list(string))
    permission_statements = optional(list(
      object({
        Effect   = string
        Action   = list(string)
        Resource = string
      })
    ))
  })

  default = {
    enabled        = false
    workspace_name = ""
    permission_statements = []
  }
}

variable "gitlab" {
  type = object({
    enabled   = bool
    group_url = string
    permission_statements = optional(list(
      object({
        Effect   = string
        Action   = list(string)
        Resource = string
      })
    ))
  })

  default = {
    enabled   = false
    group_url = ""
  }
}

variable "bitbucket" {
  type = object({
    enabled          = bool
    workspace_name   = string
    workspace_uuid   = string
    repository_uuids = optional(list(string))
    permission_statements = optional(list(
      object({
        Effect   = string
        Action   = list(string)
        Resource = string
      })
    ))
  })

  default = {
    enabled        = false
    workspace_name = ""
    workspace_uuid = ""
  }

  validation {
    condition = (
      var.bitbucket.enabled ? (
        length(var.bitbucket.workspace_name) > 0 &&
        length(var.bitbucket.workspace_uuid) > 0
      ) : true
    )
    error_message = "Workspace name and uuid are required. These can be found under OpenID Connect in the pipeline settings."
  }
}

The module_variable_optional_attrs experiment must be enabled to use the optional attributes in variables.tf. (From Terraform 1.3 onwards, optional object attributes are generally available and this experiment block is no longer required.)

openid_connect/experiments.tf
terraform {
  experiments = [module_variable_optional_attrs]
}

Locals will be used to loop over the map of providers and select only the enabled ones.

In addition, the merge function overwrites the default values with the custom values you pass in.

openid_connect/locals.tf
locals {
  providers = {
    github    = var.github,
    gitlab    = var.gitlab,
    bitbucket = var.bitbucket
  }

  default = {
    github = {
      identity_provider_url = "token.actions.githubusercontent.com"
      audience              = "sts.amazonaws.com"
      repositories          = ["*"]
      permission_statements = [{
        Effect : "Allow",
        Resource : "*",
        Action : [
          "*"
        ]
      }]
    }

    gitlab = {
      identity_provider_url = "gitlab.com"
      audience              = "https://gitlab.com"
      project_slug          = "*"
      permission_statements = [{
        Effect : "Allow",
        Resource : "*",
        Action : [
          "*"
        ]
      }]
    }

    bitbucket = {
      identity_provider_url = format("api.bitbucket.org/2.0/workspaces/%s/pipelines-config/identity/oidc", var.bitbucket.workspace_name)
      audience              = format("ari:cloud:bitbucket::workspace/%s", replace(var.bitbucket.workspace_uuid, "/[{}]/", ""))
      repository_uuids      = ["*"]
      permission_statements = [{
        Effect : "Allow",
        Resource : "*",
        Action : [
          "*"
        ]
      }]
    }
  }

  enabled_providers = { for provider_key, provider_value in local.providers : provider_key => merge(
    # Add default settings
    local.default[provider_key],

    # Override with module input options
    { for item_key, item_value in provider_value : item_key => item_value if item_value != null }
  ) if provider_value.enabled }
}
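To make the BitBucket defaults above more concrete, here is the same string construction sketched in shell, using the hypothetical workspace name and UUID from the main.tf example later in this post:

```shell
# Not part of the module: a sketch of how the BitBucket defaults above are
# built, using hypothetical workspace values.
WORKSPACE_NAME="lewisstevens1"
WORKSPACE_UUID="{fg5125adw3-ab7e-49528-af15-c61ed55112f}"

IDENTITY_PROVIDER_URL="api.bitbucket.org/2.0/workspaces/${WORKSPACE_NAME}/pipelines-config/identity/oidc"

# The Terraform replace() strips the surrounding braces from the UUID.
AUDIENCE="ari:cloud:bitbucket::workspace/$(echo "$WORKSPACE_UUID" | tr -d '{}')"

echo "$AUDIENCE"   # → ari:cloud:bitbucket::workspace/fg5125adw3-ab7e-49528-af15-c61ed55112f
```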

In main.tf we will be generating the OpenID Connect providers, using the tls_certificate data source to obtain the thumbprint of the certificate served at each provider URL.

We will also generate a role for each provider, with the trust policy formatted for the Version Control System you are using. Note that this is an example, so it may need altering to meet your needs.

openid_connect/main.tf
data "tls_certificate" "oid_provider" {
  for_each = local.enabled_providers

  url = format("https://%s", each.value.identity_provider_url)
}

resource "aws_iam_openid_connect_provider" "oid_provider" {
  for_each = local.enabled_providers

  url = format("https://%s", each.value.identity_provider_url)

  client_id_list = [
    each.value.audience
  ]

  thumbprint_list = [
    data.tls_certificate.oid_provider[each.key].certificates[0].sha1_fingerprint
  ]
}

data "aws_iam_policy_document" "assuming_role" {
  for_each = local.enabled_providers


  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    // Allow federated access with the oid provider's arn
    principals {
      type = "Federated"
      identifiers = [
        aws_iam_openid_connect_provider.oid_provider[each.key].arn
      ]
    }

    // Allow access when audience is matching the audience value
    condition {
      test     = "StringLike"
      variable = format("%s:aud", each.value.identity_provider_url)

      values = [
        each.value.audience
      ]
    }


    condition {
      test     = "StringLike"
      variable = format("%s:sub", each.value.identity_provider_url)

      values = concat(
        # Github
        each.key == "github" ? formatlist("repo:%s/%s:*", each.value.workspace_name, each.value.repositories) : [],

        # Bitbucket
        each.key == "bitbucket" ? formatlist("%s:*", each.value.repository_uuids) : [],

        compact([
          # Gitlab
          each.key == "gitlab" ? format(
            "*:%s:*:*:*:*",
            join("/",
              slice(
                split("/", replace(each.value.group_url, "https://", "")),
                1,
                length(
                  split("/", replace(each.value.group_url, "https://", ""))
                )
              )
            )
          ) : null,
        ])
      )

    }
  }
}

resource "aws_iam_role" "assuming_role" {
  for_each = local.enabled_providers

  name               = format("identity-provider-%s-assume-role", each.key)
  assume_role_policy = data.aws_iam_policy_document.assuming_role[each.key].json

  inline_policy {
    name = "identity-provider-permissions"

    policy = jsonencode({
      Version   = "2012-10-17",
      Statement = each.value.permission_statements
    })
  }
}
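The sub-claim patterns built by the concat() above can be hard to read, so here is a shell sketch of the strings it produces, using hypothetical repository values:

```shell
# GitHub: one pattern per repository, scoped to the workspace (org) name.
echo "repo:lewisstevens1/my-repo:*"

# GitLab: the group/project path is the group_url with scheme and host removed,
# which is what the split/slice/join in the module achieves.
GROUP_URL="https://www.gitlab.com/lewisstevens1"
GROUP_PATH=$(echo "${GROUP_URL#https://}" | cut -d/ -f2-)
echo "*:${GROUP_PATH}:*:*:*:*"   # → *:lewisstevens1:*:*:*:*
```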


Adding the module

Now that we have created the module, we need to import it by creating the root files. The backend must be configured so Terraform knows where to manage the state files.

backend.tf
terraform {
  backend "s3" {
    region  = "eu-west-1"
    bucket  = "test-tfbucket-bgbo1k17nnkd"
    key     = "local.tfstate"
  }
}

The provider block specifies which provider to use (e.g. AWS, GCP) and applies default tags to all resources.

providers.tf
provider "aws" {
  region = "eu-west-1"

  default_tags {
    tags = {
      test-pipeline = true
    }
  }
}

We now need to create main.tf, which imports the module along with the permission statements the role will have once assumed. As we are using Terraform, the role needs S3 permissions for state access; however, these can be limited by scoping the statement and passing in the bucket ARN as the Resource.

main.tf
locals {
  permission_statements = [
    {
      Effect : "Allow",
      Action : [
        "sns:*",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      Resource : "*"
    }
  ]
}

module "openid_connect" {
  source = "./modules/openid_connect"

  github = {
    enabled               = true
    workspace_name        = "lewisstevens1"
    permission_statements = local.permission_statements
  }

  gitlab = {
    enabled               = true
    group_url             = "https://www.gitlab.com/lewisstevens1"
    permission_statements = local.permission_statements
  }

  bitbucket = {
    enabled               = true
    workspace_name        = "lewisstevens1"
    workspace_uuid        = "{fg5125adw3-ab7e-49528-af15-c61ed55112f}"
    permission_statements = local.permission_statements
  }
}


Applying the Terraform

To apply the Terraform, we first need to export the credentials.

To achieve this, navigate to Secrets Manager and retrieve the secret value for the secret "TERRAFORM_CREDENTIALS".

With this you can export the variables and then run the following:

  • terraform init && terraform plan -out terraform.tfplan to initialise and plan Terraform.
  • terraform apply "terraform.tfplan" to apply the changes that were shown in the plan step.
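As a sketch, assuming you have copied the JSON secret value locally, the three variables can be exported in one go. The values below are placeholders, and in practice jq is the more robust way to parse the JSON:

```shell
# Placeholder payload in the same shape as the TERRAFORM_CREDENTIALS secret.
SECRET='{"AWS_ACCESS_KEY_ID":"AKIAEXAMPLE","AWS_SECRET_ACCESS_KEY":"examplesecret","AWS_DEFAULT_REGION":"eu-west-1"}'

# Strip the JSON punctuation, turn each pair into KEY=VALUE and export it.
# This assumes the values contain no commas or colons.
eval "$(echo "$SECRET" | tr -d '{}"' | tr ',' '\n' | sed 's/:/=/' | sed 's/^/export /')"

echo "$AWS_DEFAULT_REGION"   # → eu-west-1
```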

Setting up the pipelines

Now that the AWS account has the OpenId Connect providers configured, we need to set up the pipelines.

These vary slightly per provider, and each of these templates requires the variables AWS_DEFAULT_REGION and ACCOUNT_ID.

BitBucket's token is BITBUCKET_STEP_OIDC_TOKEN and requires oidc to be set to true on the step to export that variable; GitLab, however, automatically exports CI_JOB_JWT_V2.

Bitbucket

bitbucket-pipelines.yml
# This is a basic image with just terraform and aws cli installed onto it.
image: lewisstevens1/amazon-linux-terraform

aws-login: &aws-login |-
  STS=($( \
    aws sts assume-role-with-web-identity \
      --role-session-name terraform-execution \
      --role-arn arn:aws:iam::$ACCOUNT_ID:role/identity-provider-bitbucket-assume-role \
      --web-identity-token $BITBUCKET_STEP_OIDC_TOKEN \
      --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
      --output text \
  ));

  export AWS_ACCESS_KEY_ID=${STS[0]};
  export AWS_SECRET_ACCESS_KEY=${STS[1]};
  export AWS_SESSION_TOKEN=${STS[2]};
  export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION;

pipelines:
  branches:
    master:
      - step:
          name: plan-terraform
          oidc: true
          script:
            - *aws-login
            - terraform init && terraform plan

      - step:
          name: apply-terraform
          trigger: 'manual'
          oidc: true
          script:
            - *aws-login
            - terraform init && terraform plan -out terraform.tfplan
            - terraform apply terraform.tfplan
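The aws-login anchor relies on bash word splitting to unpack the three credential fields returned by the CLI. A minimal sketch with placeholder values in place of the real sts call:

```shell
# aws sts ... --output text prints the three values whitespace-separated;
# the $( ) substitution splits them into a bash array.
STS=($(echo "AKIAEXAMPLE examplesecret exampletoken"))

export AWS_ACCESS_KEY_ID=${STS[0]}
export AWS_SECRET_ACCESS_KEY=${STS[1]}
export AWS_SESSION_TOKEN=${STS[2]}

echo "$AWS_SESSION_TOKEN"   # → exampletoken
```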

Github

.github/workflows/ci.yml
name: Terraform Pipeline

permissions:
  id-token: write
  contents: read

on:
  push:
    branches: [ master ]

jobs:
  github-pipeline:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: arn:aws:iam::${{ secrets.ACCOUNT_ID }}:role/identity-provider-github-assume-role
          aws-region: $AWS_DEFAULT_REGION

      - name: plan-terraform
        working-directory: ./
        run: terraform init && terraform plan -out terraform.tfplan

      - name: apply-terraform
        working-directory: ./
        run: terraform apply terraform.tfplan

Gitlab

.gitlab-ci.yml
# This is a basic image with just terraform and aws cli installed onto it.
image: lewisstevens1/amazon-linux-terraform

stages:
  - test
  - deploy

.aws-login: &aws-login
  - STS=($(
      aws sts assume-role-with-web-identity
      --role-session-name terraform-execution
      --role-arn arn:aws:iam::$ACCOUNT_ID:role/identity-provider-gitlab-assume-role
      --web-identity-token $CI_JOB_JWT_V2
      --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]"
      --output text
    ));

  - |
    export AWS_ACCESS_KEY_ID=${STS[0]};
    export AWS_SECRET_ACCESS_KEY=${STS[1]};
    export AWS_SESSION_TOKEN=${STS[2]};
    export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION;

plan-terraform:
  stage: test
  script:
    - *aws-login
    - terraform init && terraform plan

apply-terraform:
  stage: deploy
  when: manual
  script:
    - *aws-login
    - terraform init && terraform plan -out terraform.tfplan
    - terraform apply terraform.tfplan


Conclusion

You are now left with a pipeline you can add your Terraform projects to, where each commit to master triggers a Terraform plan and apply.

In the next article, we will discuss ways to improve the pipeline setup.

Repository:
https://github.com/lewisstevens1/terraform_openid_connect

Top comments (5)

Serhii Vasylenko

Interesting approach! 👍

Could you please clarify what was that "Access Key" mentioned? What it does and where do you store it in such a case?

To apply the Terraform, we first need to go to the user that was created in the CloudFormation template and create an Access Key.

Lewis Stevens

Hello Serhii,

I have just updated the CloudFormation template to create the access keys and store them in Secrets Manager; previously you would have created them manually by going into the user and creating the Access Key.

This is for when services like SSO are not in place; with SSO you can export credentials directly without needing that user to be created (as long as the SSO role has the required permissions).

Hope this helps.

Kind regards,
Lewis

Serhii Vasylenko

Thank you, Lewis!

Lewis Stevens

No worries!

StormyTalent

Perfect!