Analyze test result history
This document explains how to analyze past test result trends across your project.
Overview
Analyzing test result history is crucial for assessing testing quality and ensuring release confidence. It helps teams:
- Detect abnormal spikes or shifts in failure/blocked rates early
- Identify flaky or consistently failing tests to prioritize fixes. See Investigate flaky tests to learn more.
- Compare manual vs. automated execution results
- Trace regressions to specific intervals, releases, or branches
- Provide evidence for release readiness and stakeholder communication
Steps to analyze test result history
In TestOps, you can access the Test Results Analysis Report through multiple routes:
- Via the Analytics & Trends dashboard: the Test Execution Results Trend widget can be expanded to navigate to the Test Results Analysis Report.
- Via Analytics > Reports > Test Results Analysis Report.
Once you've accessed the report, follow these steps to analyze test result history.
Step 1: Configure scope and intervals
- Choose Project and Time Range. For short windows use daily grouping; for multi-week or release views use weekly grouping.
- Filter by Execution Type (Manual / Automated), Release/Sprint, Tester, Platform/Configuration, or Branch to focus the analysis.
What this leads to: a focused dataset that feeds the trend and distribution visuals, making meaningful deviations easier to spot.
Step 2: Inspect trend & distribution visuals
Once the data is scoped and filtered to your concern, look for the following patterns and signals:
- Rising failure slope: a steady upward trend in Failed / Error / Skipped over several intervals (days/weeks) often indicates a regression in the product or a growing set of brittle tests.
- Sudden spikes in Failed / Error / Skipped counts: one-off or burst spikes often point to a new defect, infra outage, or test environment misconfiguration.
- High failure concentration in a specific scope: failures concentrated by Tester, Platform (OS/browser), Release/Branch, or Execution Type (Manual vs. Automated) may warrant targeted investigation.
- Conflicting automated vs. manual trends: if automated runs show a rising failure rate while manual runs remain stable (or vice versa), this signals differences in test coverage, timing, or environment parity.
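As an illustration only (not the product's built-in logic), the first two signals above can be expressed as simple checks over a series of per-interval failure rates. The thresholds here (`spike_factor`, `rising_len`) are arbitrary assumptions you would tune for your own data:

```python
def flag_signals(failure_rates, spike_factor=2.0, rising_len=3):
    """Flag simple signals in a series of per-interval failure rates (0.0-1.0).

    A "spike" is an interval whose rate exceeds spike_factor times the mean
    of all preceding intervals (arbitrary threshold). A "rising_slope" is
    reported when the last rising_len intervals strictly increase
    (arbitrary window).
    """
    signals = []
    for i in range(1, len(failure_rates)):
        baseline = sum(failure_rates[:i]) / i
        if baseline > 0 and failure_rates[i] > spike_factor * baseline:
            signals.append(("spike", i))
    tail = failure_rates[-rising_len:]
    if len(tail) == rising_len and all(a < b for a, b in zip(tail, tail[1:])):
        signals.append(("rising_slope", len(failure_rates) - 1))
    return signals

# Example: stable rates, a one-off burst, then a steady climb.
rates = [0.05, 0.06, 0.05, 0.30, 0.08, 0.12, 0.18]
print(flag_signals(rates))  # [('spike', 3), ('rising_slope', 6)]
```

This is a deliberately naive sketch; in practice you would read the same signals directly from the report's trend chart and treat any programmatic check as a supplement.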
Step 3: Drill down to runs and tests
- Click a data point or segment to filter the detail table for that interval.
- In the detail table, inspect each run's composition (counts of Passed / Failed / Skipped / Error) and the run duration.
- Use result badges and the test history link to open the Test Result or Test Run Details page for step-level logs, screenshots, and attachments.
- Export filtered results to CSV or capture a report snapshot to share with stakeholders.
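Once you have an exported CSV, a quick local pass can summarize failure rates per interval before sharing. The column names below (`interval`, `status`) and the status values are assumptions — match them to the headers in your actual export:

```python
import csv
import tempfile
from collections import defaultdict

def failure_rate_by_interval(csv_path):
    """Compute (Failed + Error) / total per interval from an exported CSV.

    Assumes columns named 'interval' and 'status' (hypothetical names --
    adjust to the real export schema).
    """
    totals = defaultdict(int)
    failed = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            key = row["interval"]
            totals[key] += 1
            if row["status"].strip().lower() in {"failed", "error"}:
                failed[key] += 1
    return {k: failed[k] / totals[k] for k in sorted(totals)}

# Demo with a small synthetic export file.
sample = (
    "interval,status\n"
    "2024-05-01,Passed\n"
    "2024-05-01,Failed\n"
    "2024-05-02,Passed\n"
    "2024-05-02,Passed\n"
)
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write(sample)
    path = f.name
print(failure_rate_by_interval(path))  # {'2024-05-01': 0.5, '2024-05-02': 0.0}
```

A summary like this pairs well with the report snapshot when communicating release readiness to stakeholders.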