
Test Case Status Analysis Report

This document explains how to use the Test Case Status Analysis report to monitor overall test case execution health and prioritize investigation of failed or unstable test cases.

The Test Case Status Analysis Report provides a high-level overview of test execution health in Katalon TestOps.
It helps QA teams monitor pass/fail trends, identify unstable test cases, and prioritize investigation into failed or errored runs.

Overview of the Test Case Status Analysis Report dashboard in TestOps

Why Use This Report

This report helps pinpoint flaky automation scripts and highlight regression weaknesses revealed through manual testing. Use it to:

  • Evaluate overall testing health: Understand the distribution of passed, failed, error, and skipped test cases.
  • Identify risk areas quickly: Detect failed or errored cases that may impact release readiness.
  • Prioritize QA efforts: Focus on unstable or blocked cases to improve quality and execution reliability.
  • Assess automation effectiveness: Compare pass rates between manual and automated tests to identify efficiency opportunities.

Explore the Report

  1. Open the report: Go to Reports > Test Case Status Analysis.
  2. Set the analysis scope:
    • Choose a Date Range or Sprint/Release period.
    • Optionally, filter by Author, Test Type, or Latest Status to refine your view.
  3. Review visual summaries:
    • Examine the pie chart to see how many tests passed, failed, errored, or were skipped.
    • Check the pass rate breakdown to compare automation vs manual performance.
    • Note any imbalance that may indicate recurring test or environment issues.
  4. Drill into details:
    • Scroll down to the data table for individual test cases.
    • Click a Test Case ID to open its execution history and investigate root causes.
    • Sort by Status or Last Executed to focus on recent or problematic tests.
  5. Focus your analysis:
    • Use Latest Status filters to isolate failed or errored tests for triage.
    • Combine filters (e.g., “Failed automated tests by Author A this week”) to target high-impact issues; a scripted equivalent is sketched after these steps.
  6. Take action:
    • Review linked test case pages for logs, execution context, and related runs.
    • Assign follow-up or re-run tasks to validate fixes.
    • Use the insights to guide automation stability improvements or test maintenance priorities.

Drilling down into a specific test case for execution details
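
If you export the detail table (for example, as CSV), a combined filter like the one in step 5 can be reproduced in a short script. The sketch below uses pandas; the file name and column names (Type, Executor, Latest Execution Status, Last Run Time) are assumptions to adjust to your actual export, not a documented format.

```python
import pandas as pd

# Load an exported copy of the Test Case Detail Table.
# The file name and column names here are assumptions; adjust them
# to match whatever your actual export contains.
df = pd.read_csv("test_case_status.csv", parse_dates=["Last Run Time"])

# Reproduce: "Failed automated tests by Author A this week"
one_week_ago = pd.Timestamp.now() - pd.Timedelta(days=7)
failed_auto = df[
    (df["Latest Execution Status"] == "Failed")
    & (df["Type"] == "Automated")
    & (df["Executor"] == "Author A")
    & (df["Last Run Time"] >= one_week_ago)
]

print(failed_auto[["Test Case ID", "Name", "Last Run Time"]])
```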

Report Features

The Test Case Status Analysis Report consolidates multiple dimensions of test result data into an interactive view with three key components:

  • Execution Status Distribution – a pie chart summarizing pass/fail/error/skip ratios
  • Pass Rate Breakdown – a comparative metric by test type (Automation vs Manual)
  • Test Case Detail Table – a detailed view of individual test cases, with filters for deeper analysis

Execution Status Distribution

A pie chart showing the proportion of test cases by their latest execution result (Passed, Failed, Error, Skipped).

It provides a quick visual summary of overall testing health and highlights where most issues occur.

Pie chart showing the distribution of test case execution result
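
For a scripted cross-check of the same distribution, you can tally the latest status per test case yourself. A minimal sketch using only Python's standard library; the status strings below are sample data for illustration, not values pulled from TestOps.

```python
from collections import Counter

# Latest execution status per test case (sample data for illustration).
latest_statuses = ["Passed", "Passed", "Failed", "Error", "Passed", "Skipped"]

counts = Counter(latest_statuses)
total = sum(counts.values())

# Print each status with its count and share of the total.
for status, count in counts.most_common():
    print(f"{status}: {count} ({count / total:.1%})")
```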

Pass Rate by Test Type

A metric block comparing pass rates for Automated vs Manual tests. It helps assess automation reliability and identify where manual intervention is most frequent.

Pass rate comparison between automated and manual tests
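
The same comparison can be computed from your own records. A minimal sketch, assuming each record carries a test type and its latest status; the sample data is illustrative only.

```python
# Sample records; in practice these would come from an exported report.
records = [
    {"type": "Automated", "status": "Passed"},
    {"type": "Automated", "status": "Failed"},
    {"type": "Automated", "status": "Passed"},
    {"type": "Manual", "status": "Passed"},
    {"type": "Manual", "status": "Error"},
]

# Pass rate per test type: passed cases divided by executed cases.
for test_type in ("Automated", "Manual"):
    subset = [r for r in records if r["type"] == test_type]
    passed = sum(r["status"] == "Passed" for r in subset)
    print(f"{test_type}: {passed / len(subset):.0%} pass rate ({passed}/{len(subset)})")
```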

Test Case Detail Table

A sortable, filterable data table listing key test case attributes such as:

  • Test Case ID and Name
  • Type (Automated / Manual)
  • Executor
  • Last Run Time
  • Latest Execution Status

Detailed list of individual test cases with execution outcomes

Reference

| Term | Definition |
| --- | --- |
| Pass Rate | Ratio of passed test cases to total executed test cases in the selected scope |
| Latest Execution Status | The most recent outcome recorded for each test case |
| Automation vs Manual | Classification based on whether the test was executed automatically or by a tester |
| Error Status | Indicates execution interruptions due to exceptions, timeouts, or system-level issues |
| Skipped Status | Marked when dependent setup steps or preconditions were not met |

Metric Calculations

💡 All metrics are calculated based on the latest execution status within the selected date range or sprint.

| Metric | Calculation Method |
| --- | --- |
| Pass Rate (%) | (Number of Passed Test Cases ÷ Total Executed Test Cases) × 100 |
| Failure Rate (%) | (Number of Failed Test Cases ÷ Total Executed Test Cases) × 100 |
| Error Rate (%) | (Number of Error Test Cases ÷ Total Executed Test Cases) × 100 |
| Skipped Rate (%) | (Number of Skipped Test Cases ÷ Total Executed Test Cases) × 100 |
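
Expressed as code, all four rates follow the same pattern shown in the table above. A minimal sketch with illustrative counts:

```python
def rate(count: int, total: int) -> float:
    """Percentage of test cases with a given status."""
    return count / total * 100 if total else 0.0

# Illustrative counts of latest execution statuses in the selected scope.
passed, failed, error, skipped = 42, 5, 2, 1
total = passed + failed + error + skipped

print(f"Pass Rate:    {rate(passed, total):.1f}%")   # 84.0%
print(f"Failure Rate: {rate(failed, total):.1f}%")   # 10.0%
print(f"Error Rate:   {rate(error, total):.1f}%")    # 4.0%
print(f"Skipped Rate: {rate(skipped, total):.1f}%")  # 2.0%
```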