DEV Community

Daniel Slapelis
Building a Simple CI/CD Pipeline with GitLab, Terraform, and Amazon Web Services

How to build a CI/CD pipeline using GitLab for your business's website.

Prerequisites

  1. A GitLab account
  2. An AWS account
  3. Terraform installed
  4. A Keybase account
  5. A domain managed in Route53
  6. An ACM certificate for your domain

Set up the infrastructure

We'll be using Terraform to build out the infrastructure. For the website, all we'll need is an S3 bucket and a CloudFront deployment.

Create a file named main.tf and paste the following into it. You can change the bucket name to whatever you want; just make sure you use the same name later in the CI configuration (you'll see).

variable "bucket_name" {
  default = "website.example.com"
}

variable "cnames" {
  type    = list(string)
  default = ["example.com", "www.example.com"]
}

variable "certificate_arn" {
  default = "arn:::acm::example23132423"
}

variable "billing_tag" {
  # Used in the bucket's tags below; set this to your own billing tag.
  default = "website"
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "bucket" {
  bucket = "${var.bucket_name}"
  acl    = "private"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Sid": "AddPerm",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::${var.bucket_name}/*"
      }
  ]
}
EOF

  website {
    index_document = "index.html"
    error_document = "index.html"
  }

  tags = {
    billing = "${var.billing_tag}"
  }
}

locals {
  s3_origin_id = "S3-${var.bucket_name}"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = "${aws_s3_bucket.bucket.bucket_regional_domain_name}"
    origin_id = "${local.s3_origin_id}"
  }

  wait_for_deployment = false

  enabled = true
  is_ipv6_enabled = true
  default_root_object = "index.html"

  aliases = var.cnames

  default_cache_behavior {
    allowed_methods = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods = ["GET", "HEAD"]
    target_origin_id = "${local.s3_origin_id}"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "redirect-to-https"
    min_ttl = 0
    default_ttl = 3600
    max_ttl = 86400
  }


  price_class = "PriceClass_100"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  custom_error_response {
    error_code = 403
    error_caching_min_ttl = 0
    response_code = 200
    response_page_path = "/index.html"
  }

  viewer_certificate {
    acm_certificate_arn = "${var.certificate_arn}"
    ssl_support_method = "sni-only"
    minimum_protocol_version = "TLSv1.2_2018"
  }
}

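One detail worth noting in that config: the inline bucket policy is plain JSON, and the Resource ARN must end in /* so it covers the objects in the bucket rather than the bucket itself. Here's a quick local sanity check of the rendered policy (a sketch; the bucket name is the example value from above, substituted the way Terraform would interpolate ${var.bucket_name}):

```python
import json

# The public-read policy from main.tf, rendered with the example bucket name.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
      {
          "Sid": "AddPerm",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::website.example.com/*"
      }
  ]
}
""")

# The /* suffix is what grants read access to every object in the bucket.
print(policy["Statement"][0]["Resource"])  # arn:aws:s3:::website.example.com/*
```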

Now initialize Terraform and create the infrastructure:

terraform init
terraform apply -auto-approve

Awesome! All of our infrastructure is set up in AWS and now we need to set up our GitLab runner!

Configure our runner

In GitLab, create a repository for your project, or use an existing repository. We just need to do a few things before our app is ready to deploy.

First, let's set up our runner. We're going to use a shared runner from GitLab. They're free for up to 2,000 minutes of pipeline time per month, and they're enabled by default.

[Screenshot: shared runners settings]

We need to give it AWS credentials it can use to deploy to S3. Create another Terraform config with this content:

variable "keybase_user" {
  description = "A keybase username to encrypt the secret key output."
  default     = "dannextlinklabs"
}

provider "aws" {
  region = "us-east-1"
}


resource "aws_iam_access_key" "gitlab_ci" {
  user    = "${aws_iam_user.gitlab_ci.name}"
  pgp_key = "keybase:${var.keybase_user}"
}

resource "aws_iam_user_policy" "gitlab_ci" {
  name = "gitlab-ci-policy"
  user = "${aws_iam_user.gitlab_ci.name}"

  policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::website.example.com/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "cloudfront:*",
            "Resource": "*"
        }
    ]
}
EOF
}

resource "aws_iam_user" "gitlab_ci" {
  name = "gitlab-ci"
}

output "access_key" {
  value = "${aws_iam_access_key.gitlab_ci.id}"
}

output "secret_access_key" {
  value = "${aws_iam_access_key.gitlab_ci.encrypted_secret}"
}


Make sure you set the keybase_user variable to your own Keybase username. Then run the config.

terraform init
terraform apply -auto-approve

The Terraform config outputs an access key and an encrypted secret key for this user. Decrypt the secret key with the following command (this is why you needed your own Keybase user):

terraform output secret_access_key | base64 --decode | keybase pgp decrypt
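If you're wondering what each stage of that pipeline does: terraform output prints base64 text, base64 --decode turns it back into the raw PGP ciphertext, and keybase pgp decrypt recovers the secret with your private key. The base64 round trip is easy to illustrate (a sketch; a plain string stands in for the real ciphertext):

```python
import base64

# terraform output prints base64 text; decoding yields the raw bytes
# that keybase then decrypts (a plain string stands in for the ciphertext).
encoded = base64.b64encode(b"pgp-ciphertext-bytes").decode()
print(base64.b64decode(encoded))  # b'pgp-ciphertext-bytes'
```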

Now that we have the access key and the secret key for our GitLab user, we need to supply those variables to our runner by adding them to the variables section in the CI/CD settings.

[Screenshot: CI/CD variables settings]

We set three variables:

AWS_ACCESS_KEY_ID - the access key from the Terraform output
AWS_SECRET_ACCESS_KEY - the decrypted secret key
AWS_DEFAULT_REGION - us-east-1

Run a deployment

GitLab CI/CD is based around a file called .gitlab-ci.yml. Our file needs to look like this:

stages:
  - deploy-s3
  - deploy-cf

variables:
  AWS_BUCKET: website.example.com

deploy_s3:
  image: python:3.6
  stage: deploy-s3
  tags:
    - docker
    - gce
  before_script:
    - pip install awscli -q
  script:
    - aws s3 sync . s3://$AWS_BUCKET/ --delete --acl public-read
  only:
    - master

deploy_cf:
  image: python:3.6
  stage: deploy-cf
  tags:
    - docker
    - gce
  before_script:
    - pip install awscli -q
  script:
    - export distId=$(aws cloudfront list-distributions --output=text --query 'DistributionList.Items[*].[Id, DefaultCacheBehavior.TargetOriginId]' | grep "S3-$AWS_BUCKET" | cut -f1)
    - while read -r dist; do aws cloudfront create-invalidation --distribution-id $dist --paths "/*"; done <<< "$distId"
  only:
    - master

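The trickiest line in deploy_cf is the one that finds the distribution ID. With that --query, `aws cloudfront list-distributions --output=text` prints one tab-separated Id/TargetOriginId pair per distribution; grep keeps the row whose origin ID is S3-$AWS_BUCKET, and cut -f1 extracts the ID. The same logic in Python, runnable against sample output (the distribution IDs here are made up):

```python
def find_distribution_ids(listing, bucket):
    """Mimic `grep "S3-$AWS_BUCKET" | cut -f1` over the tab-separated
    Id/TargetOriginId listing produced by the aws CLI --query expression."""
    target = "S3-" + bucket
    ids = []
    for line in listing.splitlines():
        fields = line.split("\t")
        if len(fields) == 2 and fields[1] == target:
            ids.append(fields[0])
    return ids

# Two hypothetical distributions; only the first fronts our bucket.
sample = "E1ABCDEF\tS3-website.example.com\nE2GHIJKL\tS3-other.example.com"
print(find_distribution_ids(sample, "website.example.com"))  # ['E1ABCDEF']
```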

This .gitlab-ci.yml file sets up two stages: deploy-s3 and deploy-cf. The first stage uploads our application to our S3 bucket, and the second invalidates the CloudFront cache for that bucket so the new changes show up on our website!
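A note on the --delete flag in the first stage: aws s3 sync normally only adds or updates objects, while --delete also removes objects that exist in the bucket but not in the repository, keeping the site an exact mirror of master. The decision boils down to set logic, roughly like this (an illustration only, not the real CLI, which also skips unchanged files by comparing sizes and timestamps):

```python
def plan_sync(local_keys, remote_keys):
    """Which keys get uploaded and which get deleted under `sync --delete`.
    Real sync also skips unchanged files; this sketch uploads everything."""
    to_upload = set(local_keys)
    to_delete = set(remote_keys) - set(local_keys)  # the --delete part
    return to_upload, to_delete

up, rm = plan_sync({"index.html", "app.js"}, {"index.html", "old.css"})
print(sorted(rm))  # ['old.css']
```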

This simple configuration is all you need for a complete CI/CD pipeline for your business's website.

This post first appeared on our blog where we write about devops and devops consulting services.
