Test Suite and Test Suite Collection Reports

Test Suite Report

Starting from version 6.3.0, reports can be viewed directly inside each Test Suite page.

On the Test Suite page, click the Result tab to view its details:

where:

Component Description
Test Cases Table List of executed test cases.
Summary Summary information about the execution environment.
Execution Settings Settings of the executed browsers/devices.
Execution Environment Other information about the system that performed the execution.
Test Cases List

The summary information for all iterations executed in the test suite is displayed here. Each execution of a test case with a test data row counts as one iteration.

Use the provided filters to choose which iterations are displayed:

Filter Description
Passed Show only iterations that passed.
Failed Show only iterations that failed.
Error Show only iterations that have errors.
Incomplete Show only incomplete iterations.

Select an iteration in the Test Cases Table and click Show Test Case Details to view its execution logs.

If qTest and JIRA are configured in project settings, you can submit data to those systems. Refer to Enable qTest Integration and Configure JIRA Integration for more details.

Test Suite Summary

This section gives the summary information of the test suite:

where:

Field Description
Test Suite ID The ID of the executed test suite in Katalon Studio.
Hostname / OS / Platform The environment where the test suite was executed.
Start / End / Elapse Execution start/end date time and duration.
Total TC Total number of test cases, along with their execution statuses.

Test Logs Details

This section shows all information regarding the iteration selected in the Test Cases Table section.

Test Log Tab

Details of all executed steps and their statuses are displayed in this tab, where:

Component Description
Log Information Information about the test step selected in the Test Case's Log section:
  • The Name of the test step (the name of the keyword used in the test step)
  • Execution Start/End date time and duration
  • The Description of the test step
  • Any system Message raised when the test step was executed
Log Image

The screenshot taken from the application under test. It is captured in either of the following situations:

Use the provided filters to choose which log entries are displayed:

Filter Description
Info Show messages logged for information/reference.
Passed Show steps that executed successfully.
Failed Show steps that failed to execute.
Error Show steps that have errors.
Incomplete Show steps left incomplete due to other factors such as wrong syntax, power shortage, or a disconnected network.
Warning Show steps that have a warning status.
Not Run Show skipped steps.

If JIRA is configured in project settings, you can submit a ticket to this system. Refer to Configure JIRA Integration for more details.

Screenshots are taken for failed steps; hover the mouse cursor over the attachment icon to review them.

Information Tab

Users can find the summary information of the test case in this tab.

where:

Field Description
Test Case ID The ID of the executed test case in Katalon Studio.
Start / End / Elapse Execution start/end date time and duration.
Description The description of the test case.
Message Any system message raised when this iteration was executed.

Integration Tab

The information regarding qTest Integration of this iteration is displayed in this tab.

where:

Field Description
Test Log ID The ID of the integrated qTest Test Run.
Test Run Alias The alias of the integrated qTest Test Run.
Attachment Indicates whether the execution log and report are zipped and sent to qTest as an attachment.

Test Suite Collection Report

Starting in version 6.3.0, reports can be viewed directly inside each Test Suite Collection page.

Test Suite Collection Reports are only available for Katalon Studio Enterprise users.

On the Test Suite Collection page, click the Result tab to view its details:

where:

Field Description
ID The ID of the executed test suite in Katalon Studio.
Environment The environment on which the test suite was executed.
Status Whether the execution has completed.
Failed Tests / Total The total number of test cases in the test suite and the number of failed test cases, if any.
Test Suite Details Click this link to open the detailed report of the test suite.

Report History

Report History is only available for Katalon Studio Enterprise users.

Once a test suite/test suite collection finishes its execution, a historical report will be automatically generated and stored in Reports.

For example, a historical report entry is generated under Reports for each Test Suite and each Test Suite Collection.

Reports are named using the convention YYYYMMDD_HHmmss, the date and time when the test suite or test suite collection started its execution.
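As an illustration, a folder name in this convention can be generated or parsed with the JDK's SimpleDateFormat in a Groovy script (the timestamp below is made up for the example):

```groovy
import java.text.SimpleDateFormat

// Report folders are named after the execution start time, e.g. "20190715_143025"
def fmt = new SimpleDateFormat('yyyyMMdd_HHmmss')

// Parse a folder name back into a Date object
def startedAt = fmt.parse('20190715_143025')

// Formatting that Date reproduces the original folder name
println fmt.format(startedAt)
```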

Export reports to other formats

To share results, users can export test suite reports to other formats such as HTML, CSV, PDF, and JUnit.

For Katalon Studio versions 6.1.5 to 6.3.3, install the Basic Report plugin to use this feature.

Starting in version 7.0, Katalon Studio automatically generates a JUnit report for both Test Suites and Test Suite Collections.
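For reference, a JUnit-format report follows the common testsuite/testcase XML shape. The suite name, test case names, and counts below are illustrative, not taken from an actual Katalon report:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative JUnit-style report; names and values are hypothetical -->
<testsuite name="TS_RegressionTest" tests="3" failures="1" errors="0" time="42.7">
  <testcase name="TC_Login" time="12.3"/>
  <testcase name="TC_Checkout" time="18.1">
    <failure message="Element not found">Unable to find element 'btnCheckout'</failure>
  </testcase>
  <testcase name="TC_Logout" time="12.3"/>
</testsuite>
```

Because this is standard JUnit XML, CI servers such as Jenkins can consume these reports directly.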

Automatically generate reports

In Project > Settings > Plugins > Report, select the formats of reports that will be automatically generated after each Test Suite execution.

Execute a Test Suite and observe the Log Viewer after the execution completes. The generated reports match the formats configured above.

You can view the generated reports in <project_folder>\Reports\<execution_folder> after the test execution finishes.

Manually export reports

Open the Result view of a Test Suite or a Test Suite Collection > in the top right corner, select Export report > choose a format to export.

For Test Suite Collection, you can export to HTML format only.

Get generated reports location at runtime

To retrieve the location of the generated reports at runtime, use the sample code below:

import com.kms.katalon.core.configuration.RunConfiguration

// Print the folder where reports for the current execution are generated
println RunConfiguration.getReportFolder()

You can also retrieve other information through the RunConfiguration class. For more details, refer to this documentation: RunConfiguration

Video Capturing

  • K-Lite Codec is recommended for playing Katalon Studio test execution videos.
  • Video capturing is supported at the Test Suite level.
  • All browsers are supported except Remote, Headless, Kobiton, and Custom. For remote or headless browsers, it is recommended to use Katalium Server to view captured sessions.
  • Recording parallel executions is not supported yet.

Debugging can be time-consuming and challenging for many automation testers. Katalon Studio helps address this by letting users capture test executions on video. Users can enable the video capturing feature in Project Settings.

Follow the steps below to work with the Katalon Studio video capturing feature:

  1. After creating a test suite in Katalon Studio, select Project > Settings to open the Project Settings dialog box. Navigate to the Report section.

  2. Check the "Enable Video Recorder during execution" option. By default, Katalon Studio captures only Failed test cases. However, users can choose to capture only Passed test cases, only Failed test cases, or both.

    Video settings can be specified according to user preferences. Katalon Studio recommends the AVI (.avi) format and low quality to save disk space: the higher the video quality, the bigger the file.

  • Video format: AVI (.avi) or MOV (.mov)

  • Video quality: Low, Medium, or High

  3. After executing the test suite, navigate to the Result tab to view the test cases table with each test case's video attached. Click the play icon in the Video column to play the video. Test step descriptions are embedded as subtitles.

By watching how the automated test was executed, the testing team can identify exactly where the test failed. Thus, time and resources are managed more efficiently and effectively.
