
Generate API tests with Katalon Studio's AI

This document introduces AI-powered API test generation and report viewing in Katalon Studio.

API test generation and report viewing with AI

By uploading an OpenAPI specification file, Katalon’s AI agent automatically generates tests, executes them immediately, and provides a results report—without creating or storing any test artifacts in your project.

  1. Import your OpenAPI spec file.

    Import your OpenAPI document to create an API Collection folder in the Object Repository.

    This feature currently supports header-based authorization only. If your project uses other authentication types (NTLM, Digest, etc.), those mechanisms will not be included in the generated tests.
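
    For reference, header-based authorization is typically declared in an OpenAPI document as an `apiKey` scheme with `in: header`. A minimal sketch (the scheme and header names below are illustrative, not required values):

    ```yaml
    # OpenAPI 3.0 fragment: header-based API key, the auth type this feature supports
    components:
      securitySchemes:
        ApiKeyAuth:          # arbitrary scheme name
          type: apiKey
          in: header         # header-based: supported
          name: X-API-Key    # header that carries the key
    security:
      - ApiKeyAuth: []
    ```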

  2. Double-click the API Collection folder and select the Generate Test tab.

  3. Select the test types (positive, negative, edge, and security testing), API paths, and HTTP methods to test. These options are retrieved from your specification file. If you don't select any API paths before generating, Katalon Studio automatically uses all paths to generate the tests.

[Screenshot: Generate Test tab populated from the OpenAPI file]
  4. Click Generate Test. Katalon Studio generates tests with AI from the OpenAPI spec file, then executes them immediately. A progress bar shows the status; click Hide to let Katalon Studio continue working in the background.
[Screenshot: AI generating and executing API tests]
  5. Once done, a report summary appears:
[Screenshot: API report summary inside Katalon Studio]
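
The path and method options offered in step 3 come straight from the `paths` object of your spec. A minimal sketch (not Katalon's implementation) of how those path/method pairs can be enumerated from an OpenAPI document:

```python
# Sketch only: enumerate the API paths and HTTP methods an OpenAPI spec
# exposes -- the same options the Generate Test tab lists for selection.
import json

HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}

def list_operations(spec: dict) -> list[tuple[str, str]]:
    """Return (METHOD, path) pairs found in an OpenAPI spec dict."""
    ops = []
    for path, item in spec.get("paths", {}).items():
        for method in item:
            # Skip non-operation keys such as "parameters" or "summary"
            if method.lower() in HTTP_METHODS:
                ops.append((method.upper(), path))
    return ops

# Minimal spec with two hypothetical endpoints:
spec = json.loads("""
{
  "openapi": "3.0.0",
  "info": {"title": "Demo", "version": "1.0.0"},
  "paths": {
    "/pets": {"get": {}, "post": {}},
    "/pets/{id}": {"get": {}}
  }
}
""")

print(list_operations(spec))
# -> [('GET', '/pets'), ('POST', '/pets'), ('GET', '/pets/{id}')]
```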

You can also view the full HTML report, which includes more details on scores, compliance analysis, errors, and logs.

[Screenshot: detailed API HTML report]

After reviewing everything, you can save the test generation configuration to rerun with the same settings later:

[Screenshot: saving the AI-generated API test configuration]

This is useful for validating fixes after API updates and comparing results across multiple iterations without reconfiguring the test setup each time.

[Screenshot: rerun results of AI-generated API tests]