Kevin Lactio Kemta
Create Test Coverage Visualizer and Deploy to AWS S3 with Cloud development Kit (CDK) and GitHub Actions

Day 014 - 100DaysAWSIaCDevopsChallenge

In my previous article, Configuring CI/CD for NPM Library Publishing with GitHub Actions, I explored automating various tasks with GitHub Actions. In this follow-up, I'll focus on automating the upload of test coverage reports to an S3 bucket.

Why Centralize Test Coverage Reporting?

In software development, maintaining high code quality is crucial for delivering reliable and efficient products. One of the key aspects of ensuring code quality is monitoring code coverage, the measure of how much of your code is exercised by tests. However, as projects grow in complexity, keeping track of coverage metrics across multiple components and teams can become challenging. This is why centralizing your test coverage becomes valuable: by bringing all your coverage data into a single, unified platform, you can streamline analysis, enhance collaboration, and ensure that your entire codebase meets the necessary standards. In this article, I will show you how I centralized the test coverage of my NPM library.

The process will involve the following steps:

  • Create the AWS infrastructure for hosting the coverage visualizer - Set up the necessary AWS resources to host and securely serve the coverage visualizer on the internet. This involves using AWS S3 for storage and AWS CloudFront as a Content Delivery Network (CDN) to improve availability and security.

  • Create a least-privilege user to upload files to the bucket - Enhance security by creating an AWS IAM user whose permissions are limited to uploading files to the S3 bucket. This user will be used by your GitHub Actions workflow to push the coverage reports to S3.

  • Set up the GitHub Actions job to upload the test coverage files to the S3 bucket - Automate the process of uploading test coverage reports to the S3 bucket after every successful release using GitHub Actions. This ensures that the latest coverage data is always available in the visualizer. The job can also be triggered manually.

Flow diagram


1. Create the AWS infrastructure for hosting the coverage visualizer

In a previous segment of the 100DaysAWSIaCDevopsChallenge series, I demonstrated how to create and host an Angular App in an S3 Bucket as a static website. In this section, I will use the same AWS resources to host the coverage visualizer. This will involve the following:

  • S3 Bucket - We'll use S3 to store the visualizer files.
  • AWS CloudFront for Content Distribution - CloudFront will distribute the content globally, just like an S3 static website. We will secure it with HTTPS by using SSL at this layer. Additionally, we'll configure S3 to accept traffic only from CloudFront and enforce that all traffic is secure.
  • Route 53 for Subdomain Configuration - We will configure a subdomain for the visualizer using Route 53.
  • IAM User - We'll create an IAM user that GitHub Actions will use to upload files to the S3 Bucket. This IAM user will have limited permissions, specifically only s3:PutObject inside the bucket, ensuring minimal access for better security.
1.a. S3 Bucket
interface S3WebsiteProps extends StackProps {
  origins?: string[]
}
export class S3Website extends Construct {
  private readonly _bucket: s3.IBucket
  private readonly _baseSegment: string
  constructor(scope: Construct, id: string, private props: S3WebsiteProps) {
    super(scope, id)
    const bucketName = 'coverage-visualizer-sws-o8stnnkqmos1v'
    const defSeg = 'co2visualizer'
    const wsBucket = new s3.Bucket(this, 'WebsiteBucket', {
      bucketName: bucketName,
      versioned: false,
      blockPublicAccess: new s3.BlockPublicAccess({
        blockPublicPolicy: true,
        blockPublicAcls: false,
        ignorePublicAcls: false,
        restrictPublicBuckets: true
      }),
      objectLockEnabled: true,
      objectOwnership: s3.ObjectOwnership.BUCKET_OWNER_PREFERRED,
      websiteIndexDocument: `index.html`,
      websiteErrorDocument: `error.html`,
      autoDeleteObjects: true,
      removalPolicy: RemovalPolicy.DESTROY
    })
    wsBucket.addCorsRule({
      allowedMethods: [
        s3.HttpMethods.HEAD,
        s3.HttpMethods.GET
      ],
      allowedOrigins: props.origins || ['*'],
      maxAge: Duration.minutes(5).toSeconds()
    })
    wsBucket.addLifecycleRule({
      enabled: true,
      expiration: Duration.days(90),
      transitions: [{
        storageClass: s3.StorageClass.INFREQUENT_ACCESS,
        transitionAfter: Duration.days(30)
      }]
    })
    this._bucket = wsBucket
    this._baseSegment = defSeg
  }
  get bucket() {
    return this._bucket
  }
  get baseSegment() {
    return this._baseSegment
  }
}
  • versioned - is set to false to disable versioning of objects in the bucket. This configuration ensures that only the latest version of each file is retained, which is ideal for scenarios where you only need the most recent file.
  • blockPublicAccess.blockPublicPolicy - enforces a critical security measure that prevents the bucket from being made publicly accessible via a bucket-wide public policy.
  • objectLockEnabled - Enables object lock for data protection
  • objectOwnership - is set to BUCKET_OWNER_PREFERRED to ensure that the bucket owner retains control over all objects, making access management easier, more secure, and consistent. This setting is particularly useful in environments where multiple users or automated processes upload objects to the bucket.
  • autoDeleteObjects - automatically deletes all objects when the bucket is destroyed.
  • Lifecycle Rules - Manages the storage of objects in the bucket

    wsBucket.addLifecycleRule({
        enabled: true,
        expiration: Duration.days(90),
        transitions: [{
            storageClass: s3.StorageClass.INFREQUENT_ACCESS,
            transitionAfter: Duration.days(30)
        }]
    })
    
    • expiration - deletes objects after 90 days.
    • transitionAfter - moves objects to Infrequent Access storage after 30 days.
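The resulting object timeline can be sketched as plain date arithmetic (illustrative only; `lifecycleTimeline` is my own helper, not part of the construct):

```typescript
// Illustrative timeline for the lifecycle rule above: an object uploaded at
// day 0 transitions to Infrequent Access at day 30 and expires at day 90.
function lifecycleTimeline(uploadedAt: Date): { transition: Date; expiry: Date } {
  const DAY_MS = 24 * 60 * 60 * 1000;
  return {
    transition: new Date(uploadedAt.getTime() + 30 * DAY_MS),
    expiry: new Date(uploadedAt.getTime() + 90 * DAY_MS),
  };
}
```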

This CDK Construct automates the setup of an S3 bucket configured as a static website, with additional features like CORS, object lifecycle management, and security controls. It abstracts the complexity of configuring these AWS services, providing a reusable and secure way to host static content.

1.b. CloudFront & Route 53 Subdomain configuration

export interface WebsiteDistributionProps extends StackProps {
  websiteBucketName: string;
  mainDomain: string,
  acm: {
    certificateArn: string
  },
  distribution: {
    subDomain: string
  }
}
export class WebsiteDistribution extends Construct {
  private readonly _distributionUrl: string
  private readonly _distributionArn: string
  private readonly _viewersUrl: string
  constructor(scope: Construct, id: string, props: WebsiteDistributionProps) {
    super(scope, id)
    const bucket = s3.Bucket.fromBucketName(this, `BucketName_${id}`, props.websiteBucketName!)
    const certificate = acm.Certificate.fromCertificateArn(this, `Certificate_${id}`, props.acm?.certificateArn!)
    const cachePolicy = new cf.CachePolicy(this, `Cache-${id}-Policy`, {
      cachePolicyName: `Cache-${id}-Policy`,
      enableAcceptEncodingGzip: true,
      enableAcceptEncodingBrotli: true,
      queryStringBehavior: cf.CacheQueryStringBehavior.all(),
      cookieBehavior: cf.CacheCookieBehavior.none(),
      defaultTtl: Duration.seconds(30),
      headerBehavior: cf.CacheHeaderBehavior.allowList(
        'Origin',
        'Accept',
        'Access-Control-Request-Method',
        'Access-Control-Request-Headers'
      )
    })
    const originRequestPolicy = new cf.OriginRequestPolicy(this, `Origin-Request-${id}-Policy`, {
      originRequestPolicyName: `Origin-Request-${id}-Policy`,
      queryStringBehavior: cf.OriginRequestQueryStringBehavior.all(),
      cookieBehavior: cf.OriginRequestCookieBehavior.none(),
      headerBehavior: cf.OriginRequestHeaderBehavior.allowList(
        'Origin',
        'Accept',
        'Access-Control-Request-Method',
        'Access-Control-Request-Headers'
      )
    })
    const distribution = new cf.Distribution(this, `SSLCertificate_${id}`, {
      enabled: true,
      defaultBehavior: {
        origin: new origins.HttpOrigin(bucket.bucketWebsiteDomainName, {
          protocolPolicy: cf.OriginProtocolPolicy.HTTP_ONLY,
          httpPort: 80,
          httpsPort: 443,
          connectionTimeout: Duration.seconds(10)
        }),
        allowedMethods: cf.AllowedMethods.ALLOW_ALL,
        cachedMethods: cf.CachedMethods.CACHE_GET_HEAD_OPTIONS,
        cachePolicy,
        originRequestPolicy
      },
      certificate,
      httpVersion: cf.HttpVersion.HTTP2,
      domainNames: [props.distribution?.subDomain].filter(value => !!value)
    })
    const hostedZone = route53.HostedZone.fromLookup(this, `HostedZone_${id}`, {
      domainName: props.mainDomain!
    })
    const cnameRecord = new route53.CnameRecord(this, `DomainCNAME_${id}`, {
      recordName: props.distribution?.subDomain + '.',
      domainName: distribution.distributionDomainName,
      zone: hostedZone,
      deleteExisting: true,
      ttl: Duration.minutes(10),
      comment: `RecordSet to send traffic from ${props.distribution?.subDomain} to ${distribution.distributionDomainName}`
    })
    this._distributionUrl = distribution.distributionDomainName
    this._viewersUrl = cnameRecord.domainName
    this._distributionArn = `arn:aws:cloudfront::${props.env?.account}:distribution/${distribution.distributionId}`
  }
  get distributionArn(): string {
    return this._distributionArn
  }
  get distributionUrl(): string {
    return this._distributionUrl
  }
  get viewsUrl(): string {
    return this._viewersUrl
  }
}

The WebsiteDistribution construct efficiently sets up a secure, high-performance website delivery system using CloudFront, S3, ACM, and Route 53. It automates the infrastructure needed to serve static websites with custom domains and SSL/TLS encryption.
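The distribution ARN assembled at the end of the constructor follows CloudFront's ARN format, which has an empty region segment (CloudFront is a global service). A small helper captures it (`cloudFrontDistributionArn` is an illustration, not a CDK API):

```typescript
// CloudFront ARNs take the form arn:aws:cloudfront::<account>:distribution/<id>,
// with nothing between the two colons where a region would normally go.
function cloudFrontDistributionArn(accountId: string, distributionId: string): string {
  return `arn:aws:cloudfront::${accountId}:distribution/${distributionId}`;
}
```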

1.c. Visualizer Stack

interface CustomProps extends cdk.StackProps {
  domain: string
  certificateArn: string
}
export class CoverageVisualizerStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: CustomProps) {
    super(scope, id, props)
    const bucket = new S3Website(this, 'Coverage-Visualizer-Bucket', {
      origins: [`https://visualizer.${props.domain}`]
    })
    const distribution = new WebsiteDistribution(this, 'CloudFront', {
      mainDomain: props.domain,
      websiteBucketName: bucket.bucket.bucketName,
      distribution: {
        subDomain: `visualizer.${props.domain}`
      },
      acm: {
        certificateArn: props.certificateArn
      }
    })
    const cloudFrontPolicy = new aws_iam.Policy(this, 'S3PolicyForCloudFront', {
      roles: [
        new aws_iam.Role(this, 'S3PolicyRoleForCloudFront', {
          assumedBy: new aws_iam.ServicePrincipal('cloudfront.amazonaws.com'),
          path: '/',
          inlinePolicies: {
            s3: new aws_iam.PolicyDocument({
              assignSids: true,
              statements: [
                new aws_iam.PolicyStatement({
                  effect: Effect.ALLOW,
                  actions: ['s3:GetObject'],
                  resources: [bucket.bucket.bucketArn + '/*'],
                  conditions: {
                    Bool: { 'AWS:SecureTransport': true },
                    StringEquals: { 'AWS:SourceArn': distribution.distributionArn }
                  }
                })
              ]
            })
          }
        })
      ]
    })
    const user = new aws_iam.User(this, 'S3Uploader', {
      userName: 'S3-Uploader',
      passwordResetRequired: false
    })
    user.addToPolicy(new aws_iam.PolicyStatement({
      actions: ['s3:PutObject','s3:PutObjectAcl'],
      effect: Effect.ALLOW,
      resources: [bucket.bucket.bucketArn + '/' + bucket.baseSegment + '/*']
    }))
  }
}
  • CloudFront Distribution
    • mainDomain - The primary domain (e.g., example.com).
    • websiteBucketName - The name of the S3 bucket where the visualizer is hosted.
    • distribution.subDomain - The subdomain (e.g., visualizer.example.com) for the CloudFront distribution.
    • acm.certificateArn - The ARN of the SSL certificate used to secure the CloudFront distribution.
  • IAM Policy for CloudFront to Access S3 - A policy is created that allows CloudFront to access the S3 bucket
    • AWS:SecureTransport- Ensures that only secure requests (HTTPS) are allowed
    • AWS:SourceArn - Restricts access to requests coming from the specified CloudFront distribution
  • IAM User for S3 Uploads An IAM user is created specifically to upload files to the S3 bucket
    • Policy Attached - The user is granted permissions to perform the s3:PutObject and s3:PutObjectAcl actions, limited to a specific path in the S3 bucket (defined by bucket.baseSegment).

This stack sets up an infrastructure to securely host a coverage visualizer on an S3 bucket and distribute it via CloudFront. It includes the following:

  • S3 Bucket: For hosting static files.
  • CloudFront Distribution: For secure, high-performance delivery of content.
  • IAM Policies: For secure access control, ensuring that only authorized entities (like CloudFront and a specific IAM user) can interact with the S3 bucket.

This setup ensures that the visualizer is securely accessible over HTTPS, with controlled access to the underlying S3 bucket.
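The uploader's least-privilege boundary boils down to a prefix rule on object keys. A minimal sketch, assuming the bucket ARN and base segment from the stack above (`uploaderResourceArn` and `isKeyAllowed` are illustrative helpers, not part of the stack):

```typescript
// The uploader may only write under `<bucketArn>/<baseSegment>/*`,
// i.e. object keys must start with the base segment prefix.
function uploaderResourceArn(bucketArn: string, baseSegment: string): string {
  return `${bucketArn}/${baseSegment}/*`;
}

function isKeyAllowed(key: string, baseSegment: string): boolean {
  return key.startsWith(`${baseSegment}/`);
}
```

With the defaults used above, uploads are confined to the `co2visualizer/` prefix, so a compromised key cannot touch the rest of the bucket.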

1.d. Deploy the infrastructure

git clone https://github.com/nivekalara237/100DaysTerraformAWSDevops.git

cd 100DaysTerraformAWSDevops/day_014
export DOMAIN="yourdomain.com"
export CERT_ARN="arn:aws:acm:us-east-1:xxxx:certificate/xxxx-xxxx-xxxx-xxxxxxxx"

cdk deploy --profile cdk-user --all
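The stack entry point is expected to read DOMAIN and CERT_ARN from the environment. A minimal fail-fast reader could look like this (`requireEnv` is a hypothetical helper, not code from the repository):

```typescript
// Fail-fast reader for required configuration values: throws if the
// variable is missing or empty instead of deploying a misconfigured stack.
function requireEnv(env: Record<string, string | undefined>, name: string): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Assumed usage in the CDK app entry point (names illustrative):
// new CoverageVisualizerStack(app, 'CoverageVisualizer', {
//   domain: requireEnv(process.env, 'DOMAIN'),
//   certificateArn: requireEnv(process.env, 'CERT_ARN'),
// })
```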

2. Set Up the GitHub Actions job to upload test coverage file to S3 Bucket

To automate the process of uploading test coverage files to an S3 bucket using GitHub Actions, you'll need to configure the workflow with the necessary AWS credentials. First, log in to the AWS Management Console and navigate to the IAM service, then generate access keys for the S3-Uploader user created during the infrastructure deployment.

name: "Deploy Test Cov to S3 Visualizer"
on:
  workflow_run:
    types:
      - completed
    workflows:
      - Release
  workflow_dispatch:
    inputs:
      environment:
        default: ENV
        type: environment
        description: 'Pick Environment'

jobs:
  deploy-to-s3-bucket:
    name: 'Upload Test Result'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18.x
      - name: Run test
        run: |
          npm ci && npm test
      - name: Upload Files to S3
        uses: shallwefootball/s3-upload-action@master
        id: S3
        with:
          aws_bucket: ${{ secrets.BUCKET_NAME }}
          aws_key_id: ${{ secrets.AWS_ACCESS_KEY }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_KEY }}
          source_dir: 'coverage'
          destination_dir: 'co2visualizer/${{ github.ref_name }}'
      - name: Display URL
        run: |
          echo "${{ env.VISUALIZER_URL }}/${{ steps.S3.outputs.object_key }}/index.html"
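The final step rebuilds the public URL of the uploaded report from the visualizer base URL and the object key returned by the upload action. As a plain function (`coverageUrl` is my own sketch, not part of the workflow):

```typescript
// Rebuilds the URL printed by the "Display URL" step:
// <VISUALIZER_URL>/<object_key>/index.html, tolerating a trailing slash
// on the base URL.
function coverageUrl(visualizerUrl: string, objectKey: string): string {
  return `${visualizerUrl.replace(/\/$/, "")}/${objectKey}/index.html`;
}
```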

If you want to know how to configure a GitHub Actions workflow, please see my previous article.

on:
  workflow_run:
    types:
      - completed
    workflows:
      - Release

Specifies that the workflow is triggered when the Release workflow, defined in release.yml, has completed.
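Note that the completed event also fires when the Release workflow fails. If you only want to upload coverage for successful releases, you can guard the job with the standard `github.event.workflow_run.conclusion` context (a sketch; the job name matches the workflow above):

```yaml
jobs:
  deploy-to-s3-bucket:
    # Only proceed when the triggering Release run actually succeeded.
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
```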

on: 
  workflow_dispatch:
    inputs:
      environment:
        default: ENV
        type: environment
        description: 'Pick Environment'

Specifies that the workflow can also be triggered manually.


And the Test Coverage Visualizer


Happy Coding!!


You can find the full source code in the GitHub Repo↗
