David Fagbuyiro
Increase Your Code Coverage with Cover-Agent

Writing strong code is only half the battle; you also need to ensure your software is thoroughly tested. Enter Cover-Agent, a tool that leverages AI to automate test generation and boost your code coverage.

This article explains what Cover-Agent is, its benefits, and how to install and use it.

Importance of Code Coverage and Testing

Code coverage is a critical metric in software development, indicating the extent to which the source code is tested by the suite of automated tests. High code coverage ensures that most parts of the codebase are executed during testing, which helps identify and eliminate bugs, ensuring the software's reliability and robustness. Effective testing practices contribute to the overall quality, security, and maintainability of the software, reducing the risk of unexpected failures in production.
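To make the idea concrete, here is a minimal, illustrative sketch of what coverage tooling measures. Real tools such as coverage.py record executed lines automatically; this hand-rolled version just tracks which branches run, to show what "uncovered code" means:

```python
# Illustrative only: coverage tools record which lines/branches execute.
# Here we track branch execution by hand to show what "uncovered" means.
executed = set()

def divide(a, b):
    if b == 0:
        executed.add("zero-branch")   # error-handling path
        raise ValueError("Cannot divide by zero")
    executed.add("main-branch")       # happy path
    return a / b

# A test suite that only exercises the happy path:
assert divide(6, 3) == 2.0

print(executed)  # only the main branch ran; the error path is uncovered
```

Because the error-handling branch never executes, line coverage of `divide()` stays below 100% until a test exercises the `b == 0` case.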

Challenges of Achieving High Code Coverage

Despite its importance, achieving high code coverage presents several challenges. Developers often face time constraints, resource limitations, and the complexity of modern software systems. Writing comprehensive tests is a time-consuming task that requires a deep understanding of the codebase. Additionally, certain parts of the code, such as edge cases and error handling paths, can be particularly difficult to cover. These challenges can result in suboptimal coverage and potentially undiscovered bugs.

The Problem: Why Code Coverage Matters

Imagine building a house without a blueprint or flying a plane without testing its engines. Software development without thorough testing carries similar risks. Here's why code coverage is crucial:

  • Hidden Bugs: Low code coverage means untested sections of code. These hidden areas are breeding grounds for bugs that can cause crashes, unexpected behavior, and security vulnerabilities.
  • Unreliable Software: Software with low coverage might function under ideal conditions, but real-world use involves edge cases and unexpected inputs. Uncovered code is more likely to fail under pressure, leading to frustrated users and potential downtime.
  • Maintenance Challenges: Untested code is poorly understood code. Making changes or fixing bugs in these areas becomes a guessing game, increasing development time and costs. In short, neglecting code coverage is a recipe for disaster. But achieving high coverage traditionally requires a lot of manual effort...

Risks of Low Code Coverage

Low code coverage can lead to significant risks in software development. Uncovered code segments may harbor undetected bugs, which can result in software crashes, security vulnerabilities, and poor user experiences. These issues can be costly to fix post-release and can damage the reputation of the development team or organization. Furthermore, low coverage may indicate inadequately tested integrations and functionalities, increasing the risk of system failures in real-world usage.

Traditional Methods for Increasing Coverage (Limitations)

Traditional methods for increasing code coverage often involve manual test writing and code reviews. While these methods can be effective, they are typically labor-intensive and time-consuming. Developers must meticulously write tests for various scenarios, which can be prone to human error and oversight. Additionally, maintaining a high level of coverage can become increasingly difficult as the codebase grows and evolves. Automated testing frameworks can help, but they often require significant initial setup and continuous maintenance, which can detract from development time.

Introducing Cover-Agent: Your AI-Powered Testing Friend

Cover-Agent is an advanced AI-powered tool designed to assist developers in achieving higher code coverage effortlessly. By utilizing sophisticated machine learning techniques, Cover-Agent automatically analyzes the codebase, identifies untested areas, and generates comprehensive test cases. This proactive approach ensures that even the most challenging and overlooked parts of the code are tested, improving overall software quality.

Cover-Agent takes tedious yet critical tasks, such as increasing test coverage, off your hands. It integrates state-of-the-art test-generation technology, beginning with regression tests. As an open-source project, it is continuously improving, and the maintainers warmly invite contributions.

Key Features and Benefits of Cover-Agent

Below are some of the benefits of Cover-Agent:

  • Automated Test Generation: Cover-Agent excels in automated test generation, creating a wide range of test cases based on the code's structure and behavior. This automation reduces the manual effort required from developers, allowing them to focus on more complex and creative aspects of development. By systematically targeting untested code, Cover-Agent ensures that no critical functionality is left unexamined.
  • Increased Efficiency and Time Savings: One of the primary benefits of Cover-Agent is the significant time savings it offers. By automating the test generation process, developers can achieve high code coverage much faster than traditional methods. This efficiency allows teams to adhere to tight development schedules without compromising on testing rigor. Furthermore, Cover-Agent's integration into the development pipeline ensures continuous coverage improvement with minimal manual intervention.
  • Improved Test Quality and Focus: Cover-Agent not only increases the quantity of tests but also enhances their quality. The AI algorithms are designed to create targeted, relevant test cases that focus on critical paths and edge cases within the code. This targeted approach ensures that the most important aspects of the application are thoroughly tested, leading to more robust and reliable software. Additionally, Cover-Agent can adapt to changes in the codebase, continuously updating and refining tests to maintain optimal coverage.

Getting Started with Cover-Agent

Below are the installation steps and usage guide.

Installation and Usage

Requirements
Before you begin, make sure you have the following:

  • OPENAI_API_KEY set in your environment variables, which is required for calling the OpenAI API.
If running directly from the repository you will also need:

  • Python installed on your system.
  • Poetry installed for managing Python package dependencies. Installation instructions for Poetry can be found at https://python-poetry.org/docs/.
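Since a missing API key is the most common setup mistake, a small preflight check can save a failed run. This is an optional, illustrative helper (not part of Cover-Agent itself):

```python
import os

def check_openai_key(env=None):
    """Return the OpenAI API key from the environment, or raise if missing.

    `env` defaults to os.environ; a dict can be passed in for testing.
    """
    env = os.environ if env is None else env
    key = env.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it before running cover-agent"
        )
    return key
```

Running this before invoking the CLI gives a clear error message instead of a mid-run API failure.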

Standalone Runtime

The Cover Agent can be installed as a Python Pip package or run as a standalone executable.

Python Pip
To install the Python Pip package directly via GitHub run the following command:



pip install git+https://github.com/Codium-ai/cover-agent.git



Binary
The binary can be run without any Python environment installed on your system (e.g. within a Docker container that does not contain Python). You can download the release for your system by navigating to the project's release page.

Repository Setup
Run the following command to install all the dependencies and run the project from source:



poetry install



Running the Code
After downloading the executable or installing the Pip package, you can run Cover-Agent to generate and validate unit tests. Execute it from the command line as follows:



cover-agent \
  --source-file-path "<path_to_source_file>" \
  --test-file-path "<path_to_test_file>" \
  --code-coverage-report-path "<path_to_coverage_report>" \
  --test-command "<test_command_to_run>" \
  --test-command-dir "<directory_to_run_test_command>" \
  --coverage-type "<type_of_coverage_report>" \
  --desired-coverage <desired_coverage_between_0_and_100> \
  --max-iterations <max_number_of_llm_iterations> \
  --included-files "<optional_list_of_files_to_include>"



You can use the example projects within this repository to run this code as a test.

Follow the steps in the README.md file located in the templated_tests/python_fastapi/ directory, then return to the root of the repository and run the following command to add tests to the Python FastAPI example:



cover-agent \
  --source-file-path "templated_tests/python_fastapi/app.py" \
  --test-file-path "templated_tests/python_fastapi/test_app.py" \
  --code-coverage-report-path "templated_tests/python_fastapi/coverage.xml" \
  --test-command "pytest --cov=. --cov-report=xml --cov-report=term" \
  --test-command-dir "templated_tests/python_fastapi" \
  --coverage-type "cobertura" \
  --desired-coverage 70 \
  --max-iterations 10



For an example using Go, cd into templated_tests/go_webservice, set up the project following its README.md, and then run the following command:



cover-agent \
  --source-file-path "app.go" \
  --test-file-path "app_test.go" \
  --code-coverage-report-path "coverage.xml" \
  --test-command "go test -coverprofile=coverage.out && gocov convert coverage.out | gocov-xml > coverage.xml" \
  --test-command-dir $(pwd) \
  --coverage-type "cobertura" \
  --desired-coverage 70 \
  --max-iterations 1



Try adding more tests to this project by running this command at the root of the repository:



poetry run cover-agent \
  --source-file-path "cover_agent/main.py" \
  --test-file-path "tests/test_main.py" \
  --code-coverage-report-path "coverage.xml" \
  --test-command "poetry run pytest --junitxml=testLog.xml --cov=templated_tests --cov=cover_agent --cov-report=xml --cov-report=term --log-cli-level=INFO" \
  --coverage-type "cobertura" \
  --desired-coverage 70 \
  --max-iterations 1 \
  --openai-model "gpt-4o"



Note: If you are using Poetry, run poetry run cover-agent instead of cover-agent.

Outputs

A few debug files will be written locally within the repository (and are covered by .gitignore):

  • generated_prompt.md: the full prompt that is sent to the LLM
  • run.log: a copy of the log output that is also dumped to stdout
  • test_results.html: a results table containing, for each generated test:
    • test status
    • failure reason (if applicable)
    • exit code
    • stderr
    • stdout
    • the generated test

Development

This section discusses the development of this project.

Versioning
Before merging to main make sure to manually increment the version number in cover_agent/version.txt at the root of the repository.

Running Tests
Set up your development environment by running the poetry install command as you did above.

Note: for older versions of Poetry you may need to include the --dev option to install Dev dependencies.

After setting up your environment run the following command:



poetry run pytest --junitxml=testLog.xml --cov=templated_tests --cov=cover_agent --cov-report=xml --cov-report=term --log-cli-level=INFO



This will also generate all logs and output reports that are generated in .github/workflows/ci_pipeline.yml.

Getting Started with Testing

To demonstrate exactly how CodiumAI Cover-Agent works, we will begin with a simple example: a calculator.py file with functions for addition, subtraction, multiplication, and division.



# calculator.py
"""Simple calculator module with basic arithmetic operations."""


def add(a, b):
    """Return the sum of a and b."""
    return a + b


def subtract(a, b):
    """Return the difference between a and b."""
    return a - b


def multiply(a, b):
    """Return the product of a and b."""
    return a * b


def divide(a, b):
    """Return the quotient of a and b.

    Raise ValueError if b is zero.
    """
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b



Next, let's write a test file, test_calculator.py, and place it in the tests folder.



# tests/test_calculator.py
import pytest
from calculator import add, subtract, multiply, divide


class TestCalculator:

    @pytest.mark.parametrize("a, b, expected", [
        (2, 3, 5),
        (-1, 1, 0),
        (0, 0, 0),
        (100, 200, 300)
    ])
    def test_add(self, a, b, expected):
        assert add(a, b) == expected

    @pytest.mark.parametrize("a, b, expected", [
        (5, 3, 2),
        (0, 1, -1),
        (100, 50, 50),
        (-1, -1, 0)
    ])
    def test_subtract(self, a, b, expected):
        assert subtract(a, b) == expected

    @pytest.mark.parametrize("a, b, expected", [
        (2, 3, 6),
        (-1, 1, -1),
        (0, 5, 0),
        (100, 2, 200)
    ])
    def test_multiply(self, a, b, expected):
        assert multiply(a, b) == expected

    @pytest.mark.parametrize("a, b, expected", [
        (6, 3, 2),
        (-10, 2, -5),
        (100, 4, 25),
        (5, 2, 2.5)
    ])
    def test_divide(self, a, b, expected):
        assert divide(a, b) == expected

    def test_divide_by_zero(self):
        with pytest.raises(ValueError):
            divide(1, 0)



To view the test coverage, we must first install pytest-cov, a pytest extension that reports coverage.



pip install pytest-cov



Run the coverage analysis with the below command:



pytest --cov=calculator



Result:
The test result can be found below:



============================= test session starts ==============================
platform darwin -- Python 3.9.1, pytest-6.2.1, py-1.10.0, pluggy-0.13.1
rootdir: /path/to/your/project
collected 17 items

tests/test_calculator.py .................                               [100%]

============================== 17 passed in 0.08s ==============================



The result above shows all tests passing as expected, including the division-by-zero case. If any test had failed, pytest would provide detailed information on the failure, including the stack trace and the nature of the assertion error.
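Beyond the terminal summary, the XML report that Cover-Agent consumes can also be inspected programmatically. As an illustrative sketch (the file name and location are whatever you passed to --cov-report=xml), the overall line coverage can be read from a Cobertura-format report like this:

```python
import xml.etree.ElementTree as ET

def coverage_percent(path="coverage.xml"):
    """Return overall line coverage (%) from a Cobertura-format report.

    Cobertura reports have a root element like <coverage line-rate="0.85">.
    """
    root = ET.parse(path).getroot()
    return float(root.get("line-rate", 0)) * 100
```

This can be handy in CI scripts that gate merges on a minimum coverage threshold.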

Using CodiumAI Cover-Agent

To set up Codium Cover-Agent, follow the steps below:

Install Cover-Agent:



pip install git+https://github.com/Codium-ai/cover-agent.git



Ensure OPENAI_API_KEY is set in your environment variables, as it is required for the OpenAI API.

Create the command to start generating tests:



cover-agent \
--source-file-path "calculator.py" \
--test-file-path "tests/test_calculator.py" \
--code-coverage-report-path "coverage.xml" \
--test-command "pytest --cov=. --cov-report=xml --cov-report=term" \
--test-command-dir "./" \
--coverage-type "cobertura" \
--desired-coverage 80 \
--max-iterations 3 \
--openai-model "gpt-4o" \
--additional-instructions "Since I am using a test class, each line of code (including the first line) needs to be prepended with 4 whitespaces. This is extremely important to ensure that every line returned contains that 4 whitespace indent; otherwise, my code will not run."



Command Explanation

  1. source-file-path: The file path containing the functions that need tests generated for them.
  2. test-file-path: The file path where the agent will write the tests. It’s recommended to create a basic structure in this file with at least one test and the necessary import statements.
  3. code-coverage-report-path: The file path where the code coverage report will be saved.
  4. test-command: The command used to run the tests (e.g., pytest).
  5. test-command-dir: The directory where the test command should be executed. Set this to the root directory or the main file location to avoid issues with relative imports.
  6. coverage-type: The type of coverage tool to use. Cobertura is a recommended default.
  7. desired-coverage: The coverage percentage goal. A higher percentage is better, but achieving 100% is often impractical.
  8. max-iterations: The number of retries the agent should attempt to generate test code. More iterations can increase OpenAI token usage.
  9. additional-instructions: Prompts to ensure the code adheres to specific requirements, such as formatting the code to work within a test class.
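If you want to script this invocation (for example, from a CI job), the same flags can be assembled programmatically. This is an illustrative helper, not part of Cover-Agent; the flag names mirror the CLI shown above, and the paths are examples:

```python
import subprocess  # used only if you uncomment the final line

def build_cover_agent_cmd(source, test, report, test_cmd,
                          desired_coverage=80, max_iterations=3):
    """Assemble a cover-agent command line from its core flags."""
    return [
        "cover-agent",
        "--source-file-path", source,
        "--test-file-path", test,
        "--code-coverage-report-path", report,
        "--test-command", test_cmd,
        "--coverage-type", "cobertura",
        "--desired-coverage", str(desired_coverage),
        "--max-iterations", str(max_iterations),
    ]

cmd = build_cover_agent_cmd(
    "calculator.py", "tests/test_calculator.py", "coverage.xml",
    "pytest --cov=. --cov-report=xml --cov-report=term",
)
# subprocess.run(cmd, check=True)  # uncomment to actually invoke the tool
```

Keeping the flags in one function makes it easy to reuse the same configuration across multiple source files.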

Running the command above generates tests like the following:



    # Tests the addition function with negative input values
    @pytest.mark.parametrize("a, b, expected", [
        (-5, -3, -8),
        (-10, -20, -30),
        (-1, -1, -2),
        (-100, -50, -150)
    ])
    def test_add_negative_values(self, a, b, expected):
        assert add(a, b) == expected

    # Tests the multiplication function with edge case of one input being zero
    @pytest.mark.parametrize("a, b, expected", [
        (0, 5, 0),
        (10, 0, 0),
        (0, 0, 0),
        (-5, 0, 0)
    ])
    def test_multiply_by_zero(self, a, b, expected):
        assert multiply(a, b) == expected




You can see that the agent also wrote tests covering edge cases. We can now check the coverage again with the command below:



pytest --cov=calculator



Result:



============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.1.2, py-1.11.0, pluggy-0.13.1
rootdir: /path/to/your/project
plugins: codiumai-1.0.0
collected 17 items

tests/test_calculator.py .................                               [100%]

============================== 17 passed in 0.07s ==============================




In this straightforward example, we achieved 100% test coverage. For more on applying Cover-Agent to larger codebases, see the resources below:

  • Devto 1
  • CodiumAI
  • Praise

Conclusion

Don't settle for shaky foundations, and stop letting untested code lurk in the shadows. Embrace Cover-Agent and watch your code coverage soar. With its AI-powered automation and focus on quality, Cover-Agent can help you build robust, reliable software that inspires confidence. Give it a try and see the difference it makes in your development process.

Past article on other CodiumAI tools

You can check out the open-source repository for Cover-Agent at CodiumAI.
