Pete King


API's From Dev to Production - Part 8 - Status Checks

Series Introduction

Welcome to Part 8 of this blog series, which starts from the most basic example of a .NET 5 Web API in C# and follows the journey from development to production with a shift-left mindset. We will use Azure, Docker, GitHub, GitHub Actions for CI/CD, and Infrastructure as Code using Pulumi.

In this post we will be looking at:

  • More code coverage
    • GitHub Status checks


We take advantage of GitHub Status Checks to ensure our code coverage doesn't fall below a predefined target, blocking PR's (pull requests) from being merged when it does. We set a target of 80% knowing our current code coverage is only 36%, then take decisive action to exclude classes from coverage where we believe it is safe to do so. We integrate it all together step by step and achieve our desired outcome: a new CI workflow on PR's with a code coverage gate, boosting code coverage to 95%.

GitHub Repository

peteking / Samples.WeatherForecast-Part-8

This repository is part of the blog post series, API's from Dev to Production - Part 8. Based on the standard .NET Weather API sample.


Code coverage is an important metric, but out of context it can be a misleading one. If you are collecting code coverage for a repository, agreeing on a target within the engineering team and protecting your codebase is of utmost importance.

How do we ensure there is a target, and how do we ensure our codebase doesn't get worse with every PR (pull request)?

We can take advantage of GitHub Status Checks; Codecov and other tools integrate tightly with them.

In this post, let's explore how we can achieve our desired outcome.


We will be picking-up where we left off in Part 7, which means you’ll need the end-result from GitHub Repo - Part 7 to start with.

I would encourage you to follow this series all the way through, but it isn't necessary if the previous posts cover ground you already know.

Don't forget to ensure your repository is set up, including the Codecov GitHub Bot. For details on how to set it up, please see Part 7.


CI Workflow

  1. Create a new branch; call it, say, codecov.
  2. Add a new YAML file called ci-pull-request.yml under your .github/workflows folder.
  3. Add the following code to your new workflow:
```yaml
name: CI Pull Request

on:
  pull_request:
    branches: [ main ]

env:
  image-name-unit-tests: unit-tests:latest

jobs:
  ci:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repo
        uses: actions/checkout@v2
        with:
          # Disabling shallow clone is recommended for improving relevancy of reporting
          fetch-depth: 0

      - name: Unit tests [build]
        run: docker build --target unit-test -t ${{ env.image-name-unit-tests }} .

      - name: Unit tests [run]
        run: docker run --rm -v ${{ github.workspace }}/path/to/artifacts/testresults:/code/test/Samples.WeatherForecast.Api.UnitTest/TestResults ${{ env.image-name-unit-tests }}

      - name: Code coverage [codecov]
        uses: codecov/codecov-action@v1.2.1
        with:
          files: ${{ github.workspace }}/path/to/artifacts/testresults/
          verbose: true
```

You'll notice this is pretty similar to our other main workflow, albeit smaller. We are simply building the unit-test image, running it, and sending the coverage results to Codecov.
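As a reminder of the shape the workflow assumes, the repository's multi-stage Dockerfile has a unit-test target that the `--target unit-test` flag stops at. Here is a hypothetical sketch only; the base image, paths, and test command are illustrative, not the series' actual Dockerfile:

```dockerfile
# Hypothetical sketch of the multi-stage Dockerfile the workflow assumes.
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /code
COPY . .
RUN dotnet restore

# `docker build --target unit-test` stops the build at this stage.
FROM build AS unit-test
WORKDIR /code/test/Samples.WeatherForecast.Api.UnitTest
ENTRYPOINT ["dotnet", "test", "--collect", "XPlat Code Coverage"]
```

The container run then mounts the TestResults folder back out to the host, which is why the workflow maps a host path onto /code/test/Samples.WeatherForecast.Api.UnitTest/TestResults.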

Code coverage configuration

  1. Add the following to your codecov.yml file:

```yaml
coverage:
  status:
    project:
      default:
        target: 80%    # the required coverage value
        threshold: 1%  # the leniency in hitting the target
```
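To make those two numbers concrete, the gate behaves roughly like this simplified sketch; it illustrates the target/threshold semantics described above and is not Codecov's actual implementation:

```python
def coverage_gate(coverage: float, target: float = 80.0, threshold: float = 1.0) -> str:
    """Pass if coverage is within `threshold` percentage points of `target`."""
    return "success" if coverage >= target - threshold else "failure"

print(coverage_gate(36.0))  # failure -- our starting coverage
print(coverage_gate(79.2))  # success -- within the 1% leniency
print(coverage_gate(95.0))  # success -- where we end up after exclusions
```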

Create pull request

  1. Commit your changes and push your branch using GitHub / GitHub Desktop / the Git command line
  2. Create a PR (pull request)

Once the PR is opened, it kicks off our new workflow, which then waits for Codecov to report back.


Our open pull request

Once Codecov comes back, the status is updated.


We have set a minimum target of 80%, and unfortunately we are at a low 36%.

It will block our pull request.

This can be a great CI workflow to ensure coverage meets your minimum expectations. It cannot, however, tell whether your unit tests are good unit tests or bad ones!

Why is the coverage so low?

As you saw in the previous post, we have no unit tests covering Program.cs and Startup.cs. This skews our actual coverage; in our case I'm happy not to cover these with unit tests, so I'd rather exclude them for now.

We could, however, cover them with integration tests.

There is an attribute in .NET to denote that a class (or other member) is excluded from coverage: ExcludeFromCodeCoverageAttribute. For more information, please see the ExcludeFromCodeCoverageAttribute Class documentation.

Place this attribute on both Program.cs and Startup.cs.

```csharp
using System.Diagnostics.CodeAnalysis;

namespace Samples.WeatherForecast.Api
{
    [ExcludeFromCodeCoverage]
    public class Program
    {
        // ...
    }
}
```

```csharp
using System.Diagnostics.CodeAnalysis;

namespace Samples.WeatherForecast.Api
{
    [ExcludeFromCodeCoverage]
    public class Startup
    {
        // ...
    }
}
```

Test locally

Kick off your local unit tests by executing .\unit-test.ps1; you should see a big improvement in coverage, as we have decisively and specifically excluded those items.
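If you want to inspect the raw numbers yourself, the test run drops Cobertura-format XML into the artifacts folder. A quick sketch of reading its line rate; the inline sample and the 0.95 value are illustrative, not our real file:

```python
import xml.etree.ElementTree as ET

# Inline stand-in for the coverage XML the test run produces;
# the attribute names follow the Cobertura format.
sample = '<coverage line-rate="0.95" branch-rate="0.90"></coverage>'

root = ET.fromstring(sample)
line_rate = float(root.get("line-rate"))
print(f"line coverage: {line_rate:.0%}")  # line coverage: 95%
```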


We can see that there is now a difference between what was covered previously and what is covered now.


Given we are above the target of 80%, the GitHub Status Check will allow it to pass.


Our code coverage goes up as indicated; we can also see this directly in Codecov as a nice graph, and who doesn't love a nice graph!


Our files are excluded from coverage, hence the code coverage boost we expected.
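The size of the jump is just arithmetic. With illustrative line counts (hypothetical numbers, chosen only to mirror our 36% to ~95% move):

```python
# Hypothetical line counts -- not the real figures from the repository.
covered, total = 360, 1000
print(round(100 * covered / total, 1))  # 36.0 -- before exclusions

# Suppose Program.cs and Startup.cs account for 620 uncovered lines;
# excluding them shrinks the denominator, not the numerator.
excluded = 620
print(round(100 * covered / (total - excluded), 1))  # 94.7 -- after exclusions
```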


Our badge proudly displays our code coverage percentage.

Go one step further

As you can see from our changes, we've created a new workflow, and we have GitHub Status Checks acting as quality gates against a code coverage target. We can go one step further with this PR workflow by including most of our other main workflow...

We can add the App [build] and App [scan] steps. One thing we should not do, though, is publish to our container registry; we certainly don't want to do that, as we are only at the PR stage!

The benefit of doing this is running the container scan on the final build image, breaking the build if the scan fails based on our configuration for it. We can also introduce another GitHub Status Check to ensure the CI workflow completes successfully.

Let's give this a go...

```yaml
env:
  image-name: ${{ github.sha }}
```

```yaml
      - name: App [build]
        run: docker build -t ${{ env.image-name }} .

      - name: App [scan]
        uses: azure/container-scan@v0
        with:
          image-name: ${{ env.image-name }}
          severity-threshold: MEDIUM
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GH_CR }}
```

Next step

Repository Settings → Branch protection rule → check the ci status check to mark it as Required


Final step

Commit → Push → Create Pull-Request

You should now see your CI workflow run and go through the status checks. If the CI workflow fails, for example because the App [scan] step finds a CVE that we deem unacceptable, this will stop engineers from merging code that includes an open CVE.


All status checks pass!

We have some duplication

Yes, we now have some duplicated YAML between our CI workflow and our build-and-push workflow. We could potentially extract and share this with composite GitHub Actions. However, I have not covered this in this post; it may be covered in another post or tackled later in this blog series.
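As a taste of what that refactor could look like, the shared unit-test steps might move into a composite action that both workflows reference. This is a hypothetical sketch; the path and names are illustrative:

```yaml
# .github/actions/unit-tests/action.yml (hypothetical)
name: Unit tests
description: Build and run the unit-test image
runs:
  using: composite
  steps:
    - name: Unit tests [build]
      run: docker build --target unit-test -t unit-tests:latest .
      shell: bash
    - name: Unit tests [run]
      run: docker run --rm unit-tests:latest
      shell: bash
```

Each workflow would then call it with a single step: `uses: ./.github/actions/unit-tests`. Note that composite actions require an explicit `shell:` on every `run` step.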

What have we learned?

We have learned that GitHub Status Checks are powerful tools and how to take advantage of them. We have configured a new CI workflow that acts as a quality gate, requiring a minimum of 80% code coverage to pass; we can be somewhat confident that our codebase's coverage won't degrade over time with each PR.

We extended our CI workflow with our build and scan steps to protect ourselves from potential CVE's before merging, while leaving the actual publish to the container registry to our existing build-and-push workflow.

If we see an open CVE during our PR, we can investigate, rectify if needed, and re-submit the PR. Long live shift-left!

Next up

Part 9 in this series will be about:

  • Static code analysis (SCA)

More information
