Cédric Teyton for Packmind


Cyclomatic complexity and cognitive complexity

Cyclomatic complexity and cognitive complexity are two common software metrics that often come up together in discussions between developers, especially when tuning code analysis tools. In this post, we'll explain the differences between the two and how to use them.

Cyclomatic complexity

Most often computed on methods or functions, cyclomatic complexity indicates the number of possible execution paths through the code. It was first described by Thomas J. McCabe, Sr. in 1976. You'll find more details on how this metric is computed here. The idea is to decompose your function into a graph with nodes (code instructions) and edges (paths between two nodes).

Let's take two simple examples:

int computeTax(String countryCode){
  if (countryCode.equals("FR")) 
     return 20;
  else
     return 0;
}

List<Integer> findCommonNumbers(List<Integer> sourceList, List<Integer> candidateList) {           
    List<Integer> result = new ArrayList<>();
    for (int i = 0 ; i < sourceList.size() ; i++) {    
        for (int j = 0 ; j < candidateList.size() ; j++) {
            if (sourceList.get(i).equals(candidateList.get(j))) {
                 result.add(sourceList.get(i));
            }
        }  
    }
    return result;
}

The CC is 2 for the computeTax function: we have the regular path plus the 'if' condition (the 'else' adds nothing).

The CC is 4 for the findCommonNumbers function: 1 for the method itself, plus 1 for each 'for' loop and 1 for the 'if'.
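As a quick sanity check, McCabe's formula M = E − N + 2P (edges, nodes, connected components) gives the same result for computeTax, assuming one possible graph decomposition with one node per decision/return plus an exit node:

```
N (nodes) = 4 : the if, return 20, return 0, exit
E (edges) = 4 : if → return 20, if → return 0, return 20 → exit, return 0 → exit
P (connected components) = 1
M = E − N + 2P = 4 − 4 + 2 = 2
```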

Cognitive complexity

This metric indicates how difficult it is for a human to understand the code and all its possible paths. Cognitive complexity gives more weight to nested conditions, since they are supposed to be harder to read. If we consider our previous example, we get a cognitive complexity of 6.

List<Integer> findCommonNumbers(List<Integer> sourceList, List<Integer> candidateList) {
    List<Integer> result = new ArrayList<>();
    for (int i = 0 ; i < sourceList.size() ; i++) {    // +1
        for (int j = 0 ; j < candidateList.size() ; j++) { // +2 (nesting level = 2)
            if (sourceList.get(i).equals(candidateList.get(j))) {  // +3 (nesting level = 3)
                result.add(sourceList.get(i));
            }
        }
    }
    return result;
}
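One hypothetical way to bring that number down (a sketch of ours, not from the original code) is to extract the inner loop into a helper; here `List.contains` plays that role, flattening the nesting:

```java
import java.util.ArrayList;
import java.util.List;

public class CommonNumbers {

    static List<Integer> findCommonNumbers(List<Integer> sourceList, List<Integer> candidateList) {
        List<Integer> result = new ArrayList<>();
        for (Integer value : sourceList) {        // +1
            if (candidateList.contains(value)) {  // +2 (nesting level = 2)
                result.add(value);
            }
        }
        return result;                            // cognitive complexity: 3 instead of 6
    }

    public static void main(String[] args) {
        System.out.println(findCommonNumbers(List.of(1, 2, 3), List.of(2, 3, 4))); // [2, 3]
    }
}
```

The behavior is unchanged; only the nesting penalty disappears because one level of looping moved behind a well-named call.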

There is also a major difference in some structures like switch/case. Consider this example:

int getCountryTax(String country) {
    switch (country) {          
      case "FR":
        return 20;
      case "EN":
        return 10;
      case "US":
        return 5;
      default:
        return 0;
    }
}

We'll get a cyclomatic complexity of 4, but a cognitive complexity of only 1. The second metric considers that the code is no harder to understand whether we have 4 or 7 cases in our switch.
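As an illustration (our own sketch, not part of the original article), the same tax lookup can be written with a `Map`, which removes the branches entirely:

```java
import java.util.Map;

public class CountryTax {

    private static final Map<String, Integer> TAX_BY_COUNTRY =
            Map.of("FR", 20, "EN", 10, "US", 5);

    static int getCountryTax(String country) {
        // No branch at all: cyclomatic complexity 1, cognitive complexity 0.
        return TAX_BY_COUNTRY.getOrDefault(country, 0);
    }

    public static void main(String[] args) {
        System.out.println(getCountryTax("FR")); // 20
        System.out.println(getCountryTax("BR")); // 0 (default)
    }
}
```

Whether this is clearer than the switch is debatable; the point is that both metrics reward moving decisions into data.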

You can find more details on this metric here, along with its motivations and the issues it tries to resolve.

Interpretations

Both metrics stand as code smells once they reach a given threshold (often 10 or 15). Beyond these values, functions tend to be difficult to test and maintain and are thus good candidates for a redesign or refactoring. They're used to warn developers that some pieces of code should be looked at carefully to avoid introducing complexity into the existing codebase. Tools like SonarQube are helpful to compute such metrics.

You should keep in mind that both metrics are independent of the number of lines of code in your function. If you have 100 consecutive statements with no branches (conditions, loops, ...), you'll get a cyclomatic complexity of 1 and a cognitive complexity of 0. They also don't consider your coding style or formatting rules, so following Clean Code principles and choosing meaningful variable names won't affect them.
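A quick illustration of that floor (our own example): a straight-line function keeps the minimum score of both metrics no matter how many statements it contains or how its variables are named:

```java
public class InvoiceTotal {

    static double invoiceTotal(double price, int quantity) {
        double subtotal = price * quantity;        // plain consecutive statements,
        double tax = subtotal * 0.20;              // no conditions or loops:
        double shipping = 5.0;                     // cyclomatic complexity = 1,
        return subtotal + tax + shipping;          // cognitive complexity = 0
    }

    public static void main(String[] args) {
        System.out.println(invoiceTotal(10.0, 2)); // 29.0
    }
}
```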

Be careful with metrics

At Promyze, we frequently discuss software metrics with our customers, and how they should be set and used in their organization. We like to quote Goodhart's law:

"When a measure becomes a target, it ceases to be a good measure."

The best example is the code coverage metric. We've met plenty of developers who told us about projects where they had to write a bunch of dummy tests just to increase code coverage (for instance, testing getters and setters). Maybe you've faced this situation already...

Similarly, getting below a given cyclomatic or cognitive complexity level should not be developers' main target, since it may generate complacency and biased satisfaction: "The metrics are good enough, so my source code is definitely great." As we said previously, these metrics work well at a fine-grained level (functions), but they can't judge the data structures or overall architecture of your code. They should be a support, not a goal.

Avoid complexity with TDD

A common pitfall is to use these metrics only after the coding step, to evaluate whether the design fits our standards. There are plenty of ways to avoid complexity in your code. Clean Code principles are a great recipe book of course, but you can also take a look at Test-Driven Development (TDD).

By nature, TDD guides developers to write minimal code, functions with a single purpose, and above all code covered by tests. In the TDD cycle, continuous refactoring prevents code complexity and reduces the risk of large functions with many branches. Code won't be hard to test if it's designed to be tested.
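As a minimal sketch of that cycle (hypothetical, with plain asserts standing in for a real test framework), the checks in `main` are written first, and `computeTax` is then the smallest code that makes them pass:

```java
public class TaxTddExample {

    // Smallest implementation that satisfies the tests below.
    static int computeTax(String countryCode) {
        if (countryCode.equals("FR")) {
            return 20;
        }
        return 0;
    }

    // The "tests", written first, drive the design toward one small,
    // single-purpose function. (Run with -ea to enable asserts.)
    public static void main(String[] args) {
        assert computeTax("FR") == 20 : "French VAT should be 20";
        assert computeTax("US") == 0  : "Unknown countries default to 0";
        System.out.println("All tests passed");
    }
}
```

Each new requirement (say, a new country) would start with a new failing assert, keeping the function's complexity visible test by test.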
