Automated Analysis of Test Script failures using Machine Learning

This blog proposes a theoretical model/workflow for a test automation report analysis tool that performs an intelligent, automated analysis of failed scripts and provides a detailed, insightful report, significantly reducing the manual effort involved in this activity.

Problem Statement

There is no denying that analyzing the root cause of automation failures is a time-consuming task. Assume a test suite with 100 test methods that runs daily, with an average of 20 failures per run. Even at a mere 10 minutes to analyze each failure, that costs over 3 hours per day (20 failures × 10 minutes), or around 65 hours per month of working days. There can be multiple causes of failure, such as environmental issues, application issues, data issues, and script issues, each of which needs to be identified and confirmed.

Lack of historical data might hamper us from getting a clearer perspective during manual analysis. So to save testers from all this exhausting redundancy, we propose the idea of using Machine Learning algorithms to automatically and accurately analyze and report the reasons for automation failures.

Automated Report Analysis — Overview

The proposed Automated Report Analysis tool comprises two parts:

  1. Parser (where the machine-learning logic that auto-classifies script failures is implemented).
  2. Portal (a dashboard displaying report graphs, test runs, and other meaningful historical data).
  • The Report Analysis portal provides various analytical views of a series of automation runs (from Jenkins/Bamboo).
  • The tool should be capable of supporting any automation framework as long as the report produced is in XML/HTML form.
  • The automated analysis should provide the historical trend for each script and also map failures to bugs when the failures are caused by existing issues tracked in the TrainingData file.
  • It reduces manual effort on report consolidation, failure analysis, trend analysis, etc. The portal should also allow exporting details to Excel/PDF.

How does this tool work?

  • Jenkins calls the Parser logic.
  • The Parser reads the TrainingData Excel file placed in the Jenkins workspace and stores its contents in the database (a loading sketch follows this list).
  • It uses machine-learning logic for failure classification.
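As an illustration of the loading step, here is a minimal sketch of how the Parser might read the TrainingData.xls keyword mappings. It assumes Apache POI is on the classpath and a hypothetical three-column layout (keyword, failure category, optional JIRA bug ID); neither the class name nor the layout comes from the original design.

```java
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;

public class TrainingDataLoader {

    // Loads keyword -> {failure category, optional JIRA bug ID} mappings
    // from the TrainingData.xls sheet placed in the Jenkins workspace.
    public static Map<String, String[]> load(File xls) throws Exception {
        Map<String, String[]> keywordMap = new LinkedHashMap<>();
        try (Workbook wb = WorkbookFactory.create(xls)) {
            Sheet sheet = wb.getSheetAt(0);
            for (Row row : sheet) {
                if (row.getRowNum() == 0) continue; // skip the header row
                String keyword  = row.getCell(0).getStringCellValue().trim();
                String category = row.getCell(1).getStringCellValue().trim();
                Cell bugCell = row.getCell(2); // JIRA bug ID column may be empty
                String bugId = (bugCell == null) ? "" : bugCell.toString().trim();
                keywordMap.put(keyword.toLowerCase(), new String[] { category, bugId });
            }
        }
        return keywordMap;
    }
}
```

The resulting map is what the classification logic (sketched later in this post) searches when it processes a failure log.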

What does the Parser do?

The Parser processes the automation test results and stores them in the database (a parsing sketch follows the list below).

  • It is packaged as a jar file that needs to be configured as a job on the Jenkins (CI/CD) server.
  • This Jenkins job will be called from the post-build action section of the test suite job in Jenkins.
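To make the parsing step concrete, here is a minimal sketch that walks a JUnit-style XML report and pulls out each failed test method with its failure log. The report format, the command-line path argument, and the class name are all assumptions, and the database write is deliberately omitted:

```java
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

import javax.xml.parsers.DocumentBuilderFactory;
import java.io.File;

public class ReportParser {

    public static void main(String[] args) throws Exception {
        // Parse a JUnit-style XML report produced by the test suite job.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File(args[0])); // path to the report, e.g. TEST-suite.xml
        NodeList testcases = doc.getElementsByTagName("testcase");
        for (int i = 0; i < testcases.getLength(); i++) {
            Element tc = (Element) testcases.item(i);
            NodeList failures = tc.getElementsByTagName("failure");
            if (failures.getLength() == 0) continue; // passed / skipped methods
            String method = tc.getAttribute("name");
            String failureLog = failures.item(0).getTextContent();
            // Next steps (omitted here): classify failureLog against the
            // training data and persist the result in the database.
            System.out.println(method + " failed; log captured ("
                    + failureLog.length() + " chars)");
        }
    }
}
```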

Parser–Jenkins Configuration

  • Create a job in Jenkins named “Report Analysis”
  • Place the “Report_Portal.jar” & “TrainingData.xls” in the “Report Analysis” job’s workspace.
  • Choose “Execute Windows batch command” from the “Add Build Step” drop-down in the “Build” section of the “Report Analysis” job configuration page.
  • Add the command “java -jar Report_Portal.jar” in the “Execute Windows batch command” section.
  • Save the Jenkins job.

About Training Data

For the Report Analysis tool to classify failures automatically, we should train the ML algorithm with a large and varied volume of data about the causes of failures and the existing application bugs capable of causing them. We classify failures into five categories:

1. Environmental Issue
2. Application Issue
3. Script Issue (logic, identifier issues)
4. Data Issue
5. Existing Application Bug

So, in the TrainingData.xls file, we need to provide data with keywords for each of the above failure categories.
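A hypothetical layout for TrainingData.xls might look like this; the keywords, categories, and bug IDs below are purely illustrative:

```
Keyword (from failure log)       Failure Category           JIRA Bug ID
"Connection refused"             Environmental Issue        -
"NoSuchElementException"         Script Issue               -
"HTTP 500"                       Application Issue          -
"no test data found"             Data Issue                 -
"checkout total mismatch"        Existing Application Bug   APP-1234
```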

For example, for failures due to existing bugs, train the tool with the specific keywords indicating that a failure was caused by a particular bug still present in the automation environment, along with its JIRA bug ID. When the Report Analysis Jenkins job runs, the AI logic (the Parser) analyzes the consolidated test suite report (HTML), looks for keywords related to failures caused by existing bugs, and classifies each failure accordingly.

The same applies to the other causes of failure. Essentially, there should be two jobs in Jenkins: the first executes the test suite, and the second executes the Automated Report Analyzer, which consumes the report from the previous job and produces a smarter report with a clear classification of failures.
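The simplest form of this classification logic is direct keyword matching against the loaded training data, as in the sketch below (class and method names are assumptions). As historical data accumulates, the contains() check could be replaced with a trained text classifier over failure logs:

```java
import java.util.Map;

public class FailureClassifier {

    private final Map<String, String[]> keywordMap; // keyword -> {category, JIRA bug ID}

    public FailureClassifier(Map<String, String[]> keywordMap) {
        this.keywordMap = keywordMap;
    }

    // Returns a classification such as "Existing Application Bug (APP-1234)",
    // or "To be Investigated" when no trained keyword matches the log.
    public String classify(String failureLog) {
        String log = failureLog.toLowerCase();
        for (Map.Entry<String, String[]> entry : keywordMap.entrySet()) {
            if (log.contains(entry.getKey())) {
                String category = entry.getValue()[0];
                String bugId = entry.getValue()[1];
                return bugId.isEmpty() ? category : category + " (" + bugId + ")";
            }
        }
        return "To be Investigated"; // falls through to manual analysis
    }
}
```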

Graphical User Interface

For ease of use, we also propose creating a GUI. The portal should capture and display the following details for each failure:

  1. Project Name
  2. Application Name
  3. Environment Name
  4. Jenkins Build Number
  5. Test script / test method name
  6. Failure log
  7. Classification of Failure (options are customizable; here, we use the five types listed above)

All the above details, except point #7, are available in the respective Jenkins job; the parser logic should capture them and display them in the portal. Other features of the proposed portal are listed below:

Latest Report: Displays the report of the most recently executed Jenkins job. Users can select the job from a drop-down. The portal can store up to 15 previous jobs and their reports.

Recent Trends Report: Consolidates and presents a view of the last 5 executions. It groups each test method across those 5 executions and shows its failure classification in each execution. The categories a test method can fall into are Passed, Not Executed or Skipped, Automatically Analyzed (automatically analyzed and failure classified), and To be Investigated (failures that need to be manually investigated).

Our aim with this tool is to progressively bring down the percentage of To be Investigated failures by fine-tuning the ML logic, continuously training and testing it, and keeping the TrainingData file updated.

Run Details: Shows the details of a specific test suite execution: the Job Number, Start Time, End Time, Duration, Total Cases Executed, Passed, Not Executed, and Failure Classifications such as Application Issue, Script Issue, Data Issue, Existing Defect, and To be Investigated. Reports can be filtered and searched by job number, date of execution, failure type, etc. Initially, the last 15 days’ reports or the last 15 jobs (whichever is fewer) are loaded into the portal.
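As a sketch of the data shape behind this view, a record along the following lines could back each row in the portal (the field names are illustrative, not part of the original design):

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.util.Map;

// One suite execution's summary, as the portal might store it.
public record RunDetails(
        int jobNumber,
        LocalDateTime startTime,
        LocalDateTime endTime,
        Duration duration,
        int totalExecuted,
        int passed,
        int notExecuted,
        Map<String, Integer> failureClassifications // e.g. "Script Issue" -> 4
) {}
```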

Each failed test method is categorized into a failure type based on the training provided and the data present in the TrainingData Excel sheet.

Developing this tool will require an investment of time, effort, and further ideation, but its great advantage is that, once implemented, it eliminates the redundant effort spent analyzing failures after every suite execution. It also presents a clear picture of historical trends and the stability of each script. The portal should be framework-, application-, operating-system-, and CI/CD-platform-independent to reduce further effort, envisioned as a one-size-fits-all solution.

Continuous training of the ML algorithm is necessary to make the logic foolproof and to achieve the final goal of completely automating the failure analysis of automation scripts.
