Using Artificial Intelligence for Analysis of Automated Testing Results

In his presentation at QA Challenge Accepted, Maroš Kutschy emphasized the importance of analyzing automated test results, noting the challenges posed by the sheer volume of tests. With around 4,000 tests run nightly and a 5% failure rate, manual analysis is burdensome. To address this, Kutschy introduced ReportPortal, an AI-driven tool that streamlines the analysis process by categorizing failures. Testers can determine whether failures are due to product bugs, automation issues, or environmental factors. The tool aims to minimize human error and help teams focus on addressing new failures promptly.
If you have around 4000 test scenarios running each night and if around 5% of them are failing, you need to analyse around 200 failures each day.
ReportPortal shows the output of the analysis; you can see how many scenarios failed because of product bugs, automation bugs, environment issues and how many failures are still in 'To Investigate' status.
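The numbers and categories above can be illustrated with a minimal sketch. This is not ReportPortal's actual API; the category names and counts below are hypothetical, used only to show the scale of the daily triage work described:

```python
# Illustrative sketch (not ReportPortal's API): tallying one night's
# failed scenarios by triage category, as described in the article.
from collections import Counter

TOTAL_SCENARIOS = 4000
FAILURE_RATE = 0.05  # roughly 5% of scenarios fail each night

# Hypothetical triage outcomes for one night's ~200 failures.
failures = (
    ["product bug"] * 60
    + ["automation bug"] * 80
    + ["environment issue"] * 40
    + ["to investigate"] * 20
)

expected_failures = int(TOTAL_SCENARIOS * FAILURE_RATE)  # 200 per night
counts = Counter(failures)

print(f"Expected failures per night: {expected_failures}")
for category, n in sorted(counts.items()):
    print(f"{category}: {n}")
```

The "to investigate" bucket is the backlog the article refers to: failures the tool has not yet seen a human categorize.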
I am the administrator: I did the proof of concept and the integration, and I resolved any issues. Colleague testers who work in feature teams use it on a daily basis.
When you start using the tool, it knows nothing about the failures, Kutschy said. Testers need to decide if the failure is a product bug, automation bug, or environmental issue.
Read at InfoQ