# Test Case Status Analysis Report
This document explains how to use the Test Case Status Analysis report to monitor overall test case execution health and prioritize investigation of failed or unstable test cases.
The Test Case Status Analysis Report provides a high-level overview of test execution health in Katalon TestOps.
It helps QA teams monitor pass/fail trends, identify unstable test cases, and prioritize investigation into failed or errored runs.
## Why Use This Report

This report helps pinpoint flaky automation scripts and highlight regression weaknesses revealed through manual testing. Use it to:
- Evaluate overall testing health: Understand the distribution of passed, failed, error, and skipped test cases.
- Identify risk areas quickly: Detect failed or errored cases that may impact release readiness.
- Prioritize QA efforts: Focus on unstable or blocked cases to improve quality and execution reliability.
- Assess automation effectiveness: Compare pass rates of manual and automated tests to identify efficiency opportunities.
## Explore the Report

1. Open the report: Go to Reports > Test Case Status Analysis.
2. Set the analysis scope:
   - Choose a Date Range or Sprint/Release period.
   - Optionally, filter by Author, Test Type, or Latest Status to refine your view.
3. Review visual summaries:
   - Examine the pie chart to see how many tests passed, failed, errored, or were skipped.
   - Check the pass rate breakdown to compare automated vs. manual performance.
   - Note any imbalance that indicates recurring test or environment issues.
4. Drill into details:
   - Scroll down to the data table for individual test cases.
   - Click a Test Case ID to open its execution history and investigate root causes.
   - Sort by Status or Last Executed to focus on recent or problematic tests.
5. Focus your analysis:
   - Use Latest Status filters to isolate failed or errored tests for triage.
   - Combine filters (e.g., "Failed automated tests by Author A this week") to target high-impact issues.
6. Take action:
   - Review linked test case pages for logs, execution context, and related runs.
   - Assign follow-up or re-run tasks to validate fixes.
   - Use the insights to guide automation stability improvements or test maintenance priorities.
## Report Features
The Test Case Status Analysis Report consolidates multiple dimensions of test result data into an interactive view with three key components:
- Execution Status Distribution: a pie chart summarizing pass/fail/error/skip ratios
- Pass Rate Breakdown: a comparative metric by test type (Automated vs. Manual)
- Test Case Detail Table: a detailed view of individual test cases, with filters for deeper analysis
### Execution Status Distribution

A pie chart showing the proportion of test cases by their latest execution result (Passed, Failed, Error, Skipped). It provides a quick visual summary of overall testing health and highlights where most issues occur.
### Pass Rate by Test Type
A metric block comparing pass rates for Automated vs Manual tests. It helps assess automation reliability and identify where manual intervention is most frequent.
### Test Case Detail Table
A sortable, filterable data table listing key test case attributes such as:
- Test Case ID and Name
- Type (Automated / Manual)
- Executor
- Last Run Time
- Latest Execution Status
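The triage workflow the table supports (filter to failed or errored automated tests, newest first) can be sketched in plain Python. The row fields below mirror the columns listed above but are illustrative names, not the TestOps data schema.

```python
# Hypothetical rows shaped like the detail table columns above.
rows = [
    {"id": "TC-101", "name": "Login", "type": "Automated",
     "executor": "Author A", "last_run": "2024-05-02", "status": "Failed"},
    {"id": "TC-102", "name": "Checkout", "type": "Manual",
     "executor": "Author B", "last_run": "2024-05-01", "status": "Passed"},
    {"id": "TC-103", "name": "Search", "type": "Automated",
     "executor": "Author A", "last_run": "2024-05-03", "status": "Error"},
]

# Isolate failed/errored automated tests, most recent first --
# the same filter-and-sort triage the report performs interactively.
triage = sorted(
    (r for r in rows
     if r["type"] == "Automated" and r["status"] in {"Failed", "Error"}),
    key=lambda r: r["last_run"],
    reverse=True,
)
print([r["id"] for r in triage])  # ['TC-103', 'TC-101']
```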
## Reference
| Term | Definition |
|---|---|
| Pass Rate | Ratio of passed test cases to total executed test cases in the selected scope |
| Latest Execution Status | The most recent outcome recorded for each test case |
| Automation vs Manual | Classification based on whether the test was executed automatically or by a tester |
| Error Status | Indicates execution interruptions due to exceptions, timeouts, or system-level issues |
| Skipped Status | Marked when dependent setup steps or preconditions were not met |
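Since every metric in this report keys off the Latest Execution Status, the reduction from raw runs to one status per test case can be illustrated as follows. The tuple fields are assumptions for the sketch, not the TestOps record format.

```python
def latest_statuses(runs):
    """Reduce raw run records to one latest status per test case.

    `runs` is a list of (test_case_id, executed_at, status) tuples;
    the field names are illustrative, not the TestOps schema.
    """
    latest = {}
    for case_id, executed_at, status in runs:
        # Keep only the most recent run seen for each test case.
        if case_id not in latest or executed_at > latest[case_id][0]:
            latest[case_id] = (executed_at, status)
    return {case_id: status for case_id, (_, status) in latest.items()}

runs = [
    ("TC-1", "2024-05-01T10:00", "Failed"),
    ("TC-1", "2024-05-02T09:30", "Passed"),  # newer run wins
    ("TC-2", "2024-05-01T11:15", "Error"),
]
print(latest_statuses(runs))  # {'TC-1': 'Passed', 'TC-2': 'Error'}
```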
## Metric Calculations

💡 All metrics are calculated based on the latest execution status within the selected date range or sprint.
| Metric | Calculation Method |
|---|---|
| Pass Rate (%) | (Number of Passed Test Cases ÷ Total Executed Test Cases) × 100 |
| Failure Rate (%) | (Number of Failed Test Cases ÷ Total Executed Test Cases) × 100 |
| Error Rate (%) | (Number of Error Test Cases ÷ Total Executed Test Cases) × 100 |
| Skipped Rate (%) | (Number of Skipped Test Cases ÷ Total Executed Test Cases) × 100 |
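The four formulas above follow one pattern: count of a status divided by total executed cases, times 100. A minimal sketch, assuming the input is a list of latest statuses per test case:

```python
from collections import Counter

def status_rates(latest_statuses):
    """Compute pass/failure/error/skipped rates (%) per the table above.

    `latest_statuses` is a list with one latest execution status
    per test case in the selected scope.
    """
    total = len(latest_statuses)
    if total == 0:
        return {}  # no executed test cases in scope
    counts = Counter(latest_statuses)
    return {
        status: round(counts[status] / total * 100, 1)
        for status in ("Passed", "Failed", "Error", "Skipped")
    }

rates = status_rates(["Passed", "Passed", "Failed", "Error", "Passed", "Skipped"])
print(rates)  # {'Passed': 50.0, 'Failed': 16.7, 'Error': 16.7, 'Skipped': 16.7}
```

Note that the four rates always sum to 100% (up to rounding), since every executed test case falls into exactly one status bucket.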