DEV Community

Vlad

Running JMH benchmark from Eclipse

The problem I'm trying to solve

A few months ago, we started using JMH in our project to find and investigate performance issues.
The tool provides multiple benchmark modes and profilers, and we found it useful for our purposes.

IntelliJ IDEA, which I'm using, has a handy plugin for the Java Microbenchmark Harness. Its functionality is similar to the JUnit plugin: it simplifies benchmark development and debugging by letting you run benchmarks together or individually.

But... half of our team uses Eclipse as their primary IDE, and Eclipse has no plugin or built-in support for the tool.
We can still run benchmarks from a main method, but it is inconvenient to keep editing the include pattern and to remember to revert those edits before committing to git.
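For context, a main-method launcher typically looks something like the sketch below (the benchmark body and class layout are illustrative, not from our project); note the hard-coded include pattern that has to be edited, and then reverted, for every different run:

```java
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class CustomCollectionBenchmark {

    @Benchmark
    public void addElement() {
        // illustrative benchmark body
    }

    public static void main(String[] args) throws Exception {
        Options opt = new OptionsBuilder()
                // hard-coded regex: edit it to run another benchmark,
                // then remember to revert the change before committing
                .include(".*CustomCollectionBenchmark.addElement.*")
                .build();
        new org.openjdk.jmh.runner.Runner(opt).run();
    }
}
```

This works, but the include pattern lives in source code, which is exactly the friction the custom runner below removes.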

So, after a short brainstorming session, we decided to write a custom JUnit Runner that can run benchmarks.

JUnit 4 runners

JUnit 4 has an API for running any class as a test suite.
All you need to do is:

  1. write a class extending org.junit.runner.Runner
public class BenchmarkRunner extends Runner {
    //...
}
  2. implement the constructor and a few methods
public class BenchmarkRunner extends Runner {
   public BenchmarkRunner(Class<?> benchmarkClass) {
   }

   public Description getDescription() {
       //...
   }  

   public void run(RunNotifier notifier) {
       //...
   }
}
  3. add the runner to your test class
@RunWith(BenchmarkRunner.class)  
public class CustomCollectionBenchmark {
    //...
}  

Implementing Benchmark JUnit runner

First, we need to provide information about the tests to the JUnit engine.

JUnit runners have the getDescription() method for that. But how do we get information about the test classes and their methods?

From JavaDoc:

When creating a custom runner, in addition to implementing the abstract methods here you must also provide a constructor that takes as an argument the Class containing the tests.

So, we can take the class as a constructor argument and gather all the information with reflection.

public class BenchmarkRunner extends Runner {
    private final Class<?> benchmarkClass; // (1)  
    private final List<Method> benchmarkMethods; // (2) 

    public BenchmarkRunner(Class<?> benchmarkClass) {  
        this.benchmarkClass = benchmarkClass;  
        this.benchmarkMethods = Arrays.stream(benchmarkClass.getDeclaredMethods())  
                .filter(m -> m.isAnnotationPresent(Benchmark.class))  
                .collect(Collectors.toList());  
    }
    //...
}

Now we have all the required information to provide a test suite description to the JUnit engine:

  • Actual benchmark class
  • All methods marked with @Benchmark in this class
public class BenchmarkRunner extends Runner {  
    //... 
    @Override  
    public Description getDescription() {  
        Description result = Description.createSuiteDescription(benchmarkClass);  
        benchmarkMethods.stream()  
                .map(m -> Description.createTestDescription(benchmarkClass, m.getName()))  
                .forEach(result::addChild);  
        return result;  
    }  
    //...
}

Once we run the test, we see something like this:

[Screenshot: the benchmark suite shown in the Eclipse JUnit view]

Now let's actually run our benchmarks.
We need to implement org.junit.runner.Runner.run(RunNotifier), where RunNotifier is responsible for notifying the engine about test runs.

The idea is to run the benchmark methods sequentially, one by one, each in a separate org.openjdk.jmh.runner.Runner.

public class BenchmarkRunner extends Runner {

    //...

    @Override
    public void run(RunNotifier notifier) {
        for (Method benchmarkMethod : benchmarkMethods) {
            Description testDescription = getBenchmarkMethodDescription(benchmarkMethod);
            try {
                notifier.fireTestStarted(testDescription);
                Options opt = new OptionsBuilder()
                        .include(".*" + benchmarkClass.getName() + "." + benchmarkMethod.getName() + ".*")
                        .jvmArgsAppend("-Djmh.separateClasspathJAR=true")
                        .build();

                new org.openjdk.jmh.runner.Runner(opt).run();

                notifier.fireTestFinished(testDescription);
            } catch (Exception e) {
                notifier.fireTestFailure(new Failure(testDescription, e));
                return;
            }
        }
    }

    private Description getBenchmarkMethodDescription(Method benchmarkMethod) {
        return Description.createTestDescription(benchmarkClass, benchmarkMethod.getName());
    }
}

The options mean the following:

  • include - the benchmark(s) we want to include in the run.
  • jvmArgsAppend("-Djmh.separateClasspathJAR=true") - an option telling JMH to pack the classpath into a classpath.jar, avoiding the The filename or extension is too long error (seen mostly on Windows).
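One caveat with the pattern built from getName(): JMH's include option accepts a java.util.regex expression, so the '.' separating class and method names matches any character, and for nested benchmark classes getName() returns Outer$Inner, where '$' is also a regex metacharacter. If you ever see surprising matches, a possible hardening (a sketch, not what the runner above does; the class and method names are illustrative) is to quote the literal part of the pattern:

```java
import java.util.regex.Pattern;

public class IncludePatterns {

    // Builds an include pattern that matches the benchmark name literally,
    // quoting regex metacharacters such as '.' and '$'.
    static String exactInclude(String className, String methodName) {
        return ".*" + Pattern.quote(className + "." + methodName) + ".*";
    }

    public static void main(String[] args) {
        System.out.println(exactInclude("com.example.CustomCollectionBenchmark", "addElement"));
        // prints: .*\Qcom.example.CustomCollectionBenchmark.addElement\E.*
    }
}
```

Pattern.quote wraps the name in \Q...\E, so every character inside is matched literally, while the surrounding .* still tolerates any prefix or suffix in JMH's fully qualified benchmark names.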

As you can see, we notify RunNotifier when each test starts and finishes, whether successful or not.
Looks good, but we are still running all benchmarks even when we choose to run only a single one.

Filtering

Our runner should implement the org.junit.runner.manipulation.Filterable interface to let the JUnit engine tell our code which tests should run.
The interface has a single method, void filter(Filter), and its org.junit.runner.manipulation.Filter argument provides a shouldRun(Description) method we can use to check whether a test was requested.
filter() doesn't return anything, so it looks like we need to store the filtering result and use it later.


public class BenchmarkRunner extends Runner implements Filterable {

    //...

    private List<Method> readyToRunMethods; // <= add new field to store filter result  

    @Override  
    public Description getDescription() {  
        Description result = Description.createSuiteDescription(benchmarkClass);  
        readyToRunMethods.stream()  // <= use the field here
                .map(this::getBenchmarkMethodDescription)  
                .forEach(result::addChild);  
        return result;  
    }  

    //...

    @Override  
    public void run(RunNotifier notifier) {  
        for (Method benchmarkMethod : readyToRunMethods) {  // <= and here
            //...  
        }  
    }  

    @Override  
    public void filter(Filter filter) throws NoTestsRemainException {  
        List<Method> filteredMethods = new ArrayList<>();  

        for (Method benchmarkMethod : benchmarkMethods) {  
            if (filter.shouldRun(getBenchmarkMethodDescription(benchmarkMethod))) {  
                filteredMethods.add(benchmarkMethod);  
            }  
        }  

        if (filteredMethods.isEmpty()) {  
            throw new NoTestsRemainException();  
        }  

        this.readyToRunMethods = filteredMethods;  
    }  
}

Now it runs only the methods we ask it to run.

Conclusion

Finally, we've got a unified way to run benchmarks from any IDE during development and debugging. This really simplifies our daily work.
For actual measurements, we still run the benchmarks from the main method to reduce environment noise and get reliable results for analysis.

You can find the code on GitHub.

