Akriti Chanana

Push JUnit Test Results to ELK stack

The JUnit-style XML format is widely used by test automation teams to generate reports. Various testing frameworks and tools, such as ReadyAPI, TestNG, and Cucumber, can produce JUnit-style XML reports. Because the format is standardized, these reports can also be consumed by Jenkins to visualize clean, readable test summaries.
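For reference, a minimal JUnit-style XML report looks something like this (suite and test names here are made up for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
  <testsuite name="LoginTests" tests="2" failures="1" errors="0" time="3.5">
    <testcase name="valid_login" classname="auth.LoginTests" time="1.2"/>
    <testcase name="invalid_login" classname="auth.LoginTests" time="2.3">
      <failure message="expected 401, got 200"/>
    </testcase>
  </testsuite>
</testsuites>
```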

What is ELK stack?

ELK is a stack consisting of three main components: Elasticsearch, Logstash, and Kibana. Here is a quick summary of each.

Elasticsearch: An open-source search and analytics engine used for analyzing logs and metrics.
Logstash: An open-source tool for centralized logging, ingesting and transforming logs and events.
Kibana: A visualization and exploration tool for reviewing logs and building dashboards.

Why do we need ELK?

The stack is very useful for diagnosing and resolving defects and production issues, and it is also highly valuable for customer insights. In addition, it provides metrics about the health and usage of systems, which gives QE stakeholders transparency into test automation and a powerful way to pinpoint where things are falling short: you can examine the state of the data, adapt, and deliver exactly what your system needs.

To install and configure Elasticsearch and Kibana, refer to
https://www.elastic.co

POST Results to Elasticsearch

Prerequisites

Python Libraries

  • requests
  • junitparser

Mandatory Arguments

  • xmlPath = "Path to your xml report"
  • URL = "API Endpoint of Elasticsearch"
Script name: jUnitResultstoELK.py
#!/usr/bin/env python
import sys
import requests
from junitparser import JUnitXml
from datetime import datetime, timedelta


def suite_total(xml, attr):
    """Sum an attribute ('tests', 'failures', or 'time') across all suites.

    JUnitXml.fromfile may return a whole <testsuites> report or a single
    <testsuite>; iterating over the latter yields test cases, which lack
    these attributes, so fall back to reading the attribute directly.
    """
    try:
        return sum(getattr(suite, attr) for suite in xml)
    except AttributeError:
        return getattr(xml, attr)


if __name__ == '__main__':
    xml_path = sys.argv[1]
    url = sys.argv[2]
    xml = JUnitXml.fromfile(xml_path)

    total_tests = suite_total(xml, 'tests')
    total_fail = suite_total(xml, 'failures')
    total_pass = total_tests - total_fail
    start_time = datetime.now()
    end_time = start_time + timedelta(seconds=suite_total(xml, 'time'))

    data = {
        'total_tests': total_tests,
        'total_pass': total_pass,
        'total_fail': total_fail,
        'start_time': start_time.strftime('%Y-%m-%dT%H:%M:%S.%f'),
        'end_time': end_time.strftime('%Y-%m-%dT%H:%M:%S.%f'),
    }
    try:
        # verify=False skips TLS certificate checks; drop it if your
        # endpoint has a valid certificate. A timeout is set so the
        # Timeout handler below can actually fire.
        r = requests.post(url=url, data=data, verify=False, timeout=30)
        print(r.text)
    except requests.exceptions.Timeout:
        print("Request timed out, please retry")
    except requests.exceptions.TooManyRedirects:
        print("Too many redirects, please retry")
    except requests.exceptions.RequestException as e:
        raise SystemExit(e)

Also, since field names can differ between organizations, you can change the keys in the data dictionary accordingly.
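As a sketch of that customization, the payload could be built by a small helper that accepts optional organization-specific fields. The field names `build_number` and `environment` below are illustrative assumptions, not part of the script above; rename everything to match your own Elasticsearch mapping.

```python
from datetime import datetime, timedelta

def build_payload(total_tests, total_fail, duration_secs,
                  build_number=None, environment=None):
    """Build the document to send to the endpoint.

    Field names are examples only -- rename them to match whatever
    mapping your organization uses.
    """
    start = datetime.now()
    end = start + timedelta(seconds=duration_secs)
    payload = {
        'total_tests': total_tests,
        'total_pass': total_tests - total_fail,
        'total_fail': total_fail,
        'start_time': start.strftime('%Y-%m-%dT%H:%M:%S.%f'),
        'end_time': end.strftime('%Y-%m-%dT%H:%M:%S.%f'),
    }
    # Optional, organization-specific fields (hypothetical examples)
    if build_number is not None:
        payload['build_number'] = build_number
    if environment is not None:
        payload['environment'] = environment
    return payload
```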

How to run?

python jUnitResultstoELK.py "xmlPath" "URL"
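If your endpoint is a Logstash HTTP input rather than Elasticsearch itself (the non-default port 8788 used in the examples hints at this), a minimal Logstash pipeline for that setup might look like the sketch below; the index name `junit-results` is just an example.

```
input {
  http {
    port => 8788
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "junit-results"
  }
}
```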

Integrate with Jenkins

pipeline {
    agent {
        label 'master'
    }
    stages {
        stage('Push Results to Elastic') {
            steps {
                sh 'python jUnitResultstoELK.py "${WORKSPACE}/reports/report.xml" "http://localhost:8788"'
            }
        }
    }
}

Top comments (1)

Oleg Kainov
... no examples of the visualization... no details why such metric (total pass/fail and time) was selected (and not drill-down into tests details, for example).
That's absolutely not helpful, sorry.