Analyze test result history
This document explains how to analyze past test result trends across your project.
Overview
Analyzing test result history is crucial for assessing testing quality and ensuring release confidence. It helps teams:
- Detect abnormal spikes or shifts in failure/blocked rates early
- Identify flaky or consistently failing tests to prioritize fixes. See Investigate flaky tests to learn more.
- Compare manual vs. automated execution results
- Trace regressions to specific intervals, releases, or branches
- Provide evidence for release readiness and stakeholder communication
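Flaky-versus-consistent classification can be sketched directly from result history. The snippet below is illustrative only: the `(test_name, status)` record shape and the sample `history` data are assumptions, not the platform's export format.

```python
from collections import defaultdict

# Hypothetical result history: one (test_name, status) pair per recent run.
history = [
    ("Login smoke", "Passed"),
    ("Login smoke", "Passed"),
    ("Checkout flow", "Failed"),
    ("Checkout flow", "Passed"),
    ("Checkout flow", "Failed"),
    ("Search", "Failed"),
    ("Search", "Failed"),
]

def classify(history):
    """Label each test flaky (mixed outcomes), consistently failing, or stable."""
    outcomes = defaultdict(set)
    for name, status in history:
        outcomes[name].add(status)
    labels = {}
    for name, seen in outcomes.items():
        if "Passed" in seen and "Failed" in seen:
            labels[name] = "flaky"        # both outcomes observed -> flaky
        elif seen == {"Failed"}:
            labels[name] = "consistently failing"
        else:
            labels[name] = "stable"
    return labels

labels = classify(history)
```

Tests labeled flaky are candidates for stabilization work; consistently failing tests usually point at a real defect or an outdated test.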
Steps to analyze test result history
In Katalon True Platform, you can access the Test Results Analysis Report through multiple routes:
- Via the Analytics & Trends dashboard: expand the Test Execution Results Trend widget to navigate to the Test Results Analysis Report.
- Via Analytics > Reports > Test Results Analysis Report.
Once you've accessed the report, follow these steps to analyze test result history.
Step 1: Configure scope and intervals
- Choose Project and Time Range. For short windows use daily grouping; for multi-week or release views use weekly grouping.
- Filter by Execution Type (Manual/Automated), Release/Sprint, Tester, Platform, or Configuration to focus the analysis.
This produces a focused dataset for the trend and distribution visuals, making meaningful patterns easier to spot.
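The daily-versus-weekly grouping choice can be sketched as a simple bucketing step. This is a minimal illustration, assuming a list of `(date, status)` records rather than any real export schema:

```python
from collections import defaultdict
from datetime import date

# Hypothetical exported records; the (date, status) shape is an assumption
# for illustration, not the platform's actual export format.
records = [
    (date(2024, 5, 1), "Passed"),
    (date(2024, 5, 1), "Failed"),
    (date(2024, 5, 2), "Passed"),
    (date(2024, 5, 8), "Failed"),
    (date(2024, 5, 8), "Failed"),
]

def failure_rate_by_interval(records, weekly=False):
    """Group results by day (or by ISO year/week) and return the failure rate per bucket."""
    buckets = defaultdict(lambda: [0, 0])  # bucket key -> [failed, total]
    for run_date, status in records:
        # Weekly grouping keys on (ISO year, ISO week); daily keys on the date itself.
        key = run_date.isocalendar()[:2] if weekly else run_date
        buckets[key][1] += 1
        if status in ("Failed", "Error"):
            buckets[key][0] += 1
    return {k: failed / total for k, (failed, total) in buckets.items()}

daily = failure_rate_by_interval(records)                # one bucket per day
weekly = failure_rate_by_interval(records, weekly=True)  # one bucket per ISO week
```

Daily buckets expose short-lived spikes; weekly buckets smooth noise and suit multi-week or release-level views.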
Step 2: Inspect trend & distribution visuals
Once the data is scoped and filtered to address your concern, look for the following patterns and signals:
- Rising failure slope: a steady upward trend in Failed/Error/Skipped results over several intervals (days/weeks) often indicates a regression in the product or a growing set of brittle tests.
- Sudden spikes in Failed/Error/Skipped counts: one-off or burst spikes often point to a new defect, an infrastructure outage, or a test environment misconfiguration.
- High failure concentration in one scope: failures concentrated by Tester, Platform (OS/browser), Release/Branch, or Execution Type (Manual vs. Automated) may require targeted investigation.
- Conflicting Manual/Automated result trends: if automated runs show a rising failure rate while manual runs remain stable (or vice versa), this signals a difference in automation quality and stability.
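The "rising slope" and "sudden spike" signals above can be made concrete with two small helpers. This is one possible sketch (least-squares slope and a z-score spike test), not anything the platform computes for you:

```python
def rising_slope(rates):
    """Least-squares slope of failure rates over equally spaced intervals.
    A clearly positive slope suggests a regression or growing brittleness."""
    n = len(rates)
    mean_x = (n - 1) / 2
    mean_y = sum(rates) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(rates))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def spikes(rates, threshold=2.0):
    """Indices whose rate sits more than `threshold` standard deviations
    above the mean: candidate one-off defects or infrastructure outages."""
    n = len(rates)
    mean = sum(rates) / n
    std = (sum((r - mean) ** 2 for r in rates) / n) ** 0.5
    if std == 0:
        return []  # a flat series has no spikes
    return [i for i, r in enumerate(rates) if (r - mean) / std > threshold]
```

For example, `rising_slope([0.1, 0.2, 0.3])` returns a positive slope, while `spikes` on a mostly flat series flags only the outlier interval.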
Step 3: Drill down to runs and tests
- Click a data point or segment to filter the detail table for that interval.
- In the detail table, inspect each run's composition (counts of Passed/Failed/Skipped/Error) and its duration.
- Use the result badges and the test history link to open the Test Result or Test Run Details page for step-level logs, screenshots, and attachments.
- Export filtered results to CSV or capture a report snapshot to share with stakeholders.
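Once exported, a CSV can be filtered further outside the platform. A minimal sketch, assuming hypothetical column names (`test_name`, `status`, `executed_at`); the real export's columns may differ:

```python
import csv
import io

# Hypothetical CSV export; column names are assumptions for illustration.
export = """test_name,status,executed_at
Login smoke,Passed,2024-05-01
Checkout flow,Failed,2024-05-01
Checkout flow,Failed,2024-05-02
Search,Passed,2024-05-02
"""

def failed_rows(csv_text, start, end):
    """Return rows with a failing status whose ISO date falls in [start, end].
    ISO-8601 dates compare correctly as strings, so no date parsing is needed."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        row for row in reader
        if row["status"] in ("Failed", "Error")
        and start <= row["executed_at"] <= end
    ]

hits = failed_rows(export, "2024-05-01", "2024-05-02")
```

Filtering the export this way makes it easy to hand stakeholders only the failing runs for the interval under discussion.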
Turn trends and distributions into explanations by asking Katalon AI Assistant what the visuals suggest.