StudioAssist Preferences
- Katalon Studio Enterprise (KSE) 10.2.0.
You can configure StudioAssist to further improve your experience.
In Katalon Studio, go to Katalon Studio Enterprise > Settings > Katalon and select StudioAssist. Depending on your Account configuration, you will see a slight difference in the dialog.
AI service configuration
When AI services and Katalon AI are enabled for your Account, you can freely choose between using:
| AI Provider | Model used (default) | Configuration details |
|---|---|---|
| Katalon AI Service | gpt-4.1-mini | Built-in; no configuration needed. |
| Personal OpenAI | gpt-4o-mini | Selectable via the KSE configuration window. |
| Azure OpenAI | User-specified deployment | Requires specifying the deployment name in the configuration. |
| Gemini | gemini-2.5-flash | URL points to the latest supported version of the Google Generative Language API. |
| OpenAI-Compatible Provider | gpt-4.1-mini | API key passed via the Authorization HTTP header. |

Click the tabs below to find more information about each AI service.
- Katalon AI
- Preset AI Service
- Personal AI keys
When selected, there is no need to set up a key for the Katalon AI Service.
You can, however, configure StudioAssist to auto-tag AI-generated test cases (e.g., API test cases or code generation) with default or custom tags, include project context such as the Object Repository and Custom Keywords, and enable follow-up question suggestions in the chat for a more guided AI experience.

If your Admin has enabled the use of the organization’s AI key, the AI provider is shown at the top of the dialog as: "AI provider name – managed by your organization". For example:

With this setting, you can configure the token limit and model for the AI service. You cannot switch between services (Katalon AI, OpenAI, Azure, Gemini, or OpenAI compatible provider).
If AI features are disabled, you can instead use your personal OpenAI key, Azure OpenAI API key, Gemini API key, or a personal API key from an OpenAI-compatible provider.
Provide the configuration for the service provider of your choice:
- Use personal OpenAI key: Provide the following information:
- Secret key: To get your secret key, refer to the provider's instructions: Where do I find my secret key?
- Max completion token: The maximum number of tokens the model can return in its response. The default value is 16000. To learn more about token limits, refer to the OpenAI rate limits documentation: OpenAI Token Limits.
- Organization ID (optional): The organization ID on OpenAI is the unique identifier for your organization which can be used in API requests.
- Model: The OpenAI model you want to use. If not changed, the gpt-4o-mini model is used by default.
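As a rough sketch of how these fields map onto an OpenAI chat-completion request (this is illustrative, not Katalon's internal code; the secret key and organization ID are placeholders):

```python
import json

# Values from the StudioAssist configuration dialog (placeholders, not real credentials).
secret_key = "sk-..."            # Secret key
organization_id = "org-..."      # Organization ID (optional)
model = "gpt-4o-mini"            # Model (default)
max_completion_tokens = 16000    # Max completion token (default)

# Headers and body for a request to https://api.openai.com/v1/chat/completions.
headers = {
    "Authorization": f"Bearer {secret_key}",
    "Content-Type": "application/json",
}
if organization_id:
    headers["OpenAI-Organization"] = organization_id

body = {
    "model": model,
    "max_completion_tokens": max_completion_tokens,
    "messages": [{"role": "user", "content": "Generate a test case outline."}],
}

print(json.dumps(body, indent=2))
```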
- Use personal Azure OpenAI API key: Provide the following information:
- Base URL: The base URL for your Azure OpenAI resource, in the following format: https://{your-resource-name}.openai.azure.com
- Deployment name: Azure OpenAI uses the deployment name to call the model. Enter the deployment name of your choice, and make sure that the deployed model supports chat completion.
- API key: To get your Azure OpenAI key, refer to this article: How to get Azure OpenAI Keys and Endpoint.
- Max completion token: The maximum number of tokens the model can return in its response. The default value is 16000.
- API version: The API version is selected for you by default.
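For reference, Azure OpenAI requests are shaped differently from OpenAI's: the deployment name becomes part of the URL path, and the key is sent in an api-key header. A minimal sketch, with a placeholder resource name, deployment name, and API version:

```python
base_url = "https://my-resource.openai.azure.com"  # Base URL (placeholder resource name)
deployment_name = "my-chat-deployment"             # Deployment name (placeholder)
api_key = "<azure-openai-key>"                     # API key (placeholder)
api_version = "2024-06-01"                         # API version (example value)

# Azure OpenAI routes chat-completion calls through the deployment name in the path.
endpoint = (
    f"{base_url}/openai/deployments/{deployment_name}"
    f"/chat/completions?api-version={api_version}"
)
headers = {"api-key": api_key, "Content-Type": "application/json"}

print(endpoint)
```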
- Use Gemini API key: Provide the following information:
- Base URL: The base URL used to connect to the Gemini API service. This URL should point to the correct version of the Google Generative Language API.
- API key: Your Gemini key. Create a key for free in Google AI Studio.
- Model: The Gemini model you want to use. If not changed, the gemini-2.5-flash model is used by default.
- Max completion token: The maximum number of tokens the model can return in its response.
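For reference, the Gemini API identifies the model in the URL path and accepts the key as a query parameter. A minimal sketch of how these settings combine (the endpoint shape follows the public Google Generative Language API; the key is a placeholder):

```python
base_url = "https://generativelanguage.googleapis.com/v1beta"  # Base URL
model = "gemini-2.5-flash"                                     # Model (default)
api_key = "<gemini-api-key>"                                   # API key from Google AI Studio

# Gemini puts the model name in the path and takes the key as a query parameter.
endpoint = f"{base_url}/models/{model}:generateContent?key={api_key}"

body = {
    "contents": [{"parts": [{"text": "Explain this test step."}]}],
    "generationConfig": {"maxOutputTokens": 16000},  # Max completion token
}

print(endpoint)
```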
- Use OpenAI-compatible provider: Provide the following information:
- Base URL: The API endpoint for your OpenAI-compatible service.
- API key: Your API key.
- API key header name: The name of the HTTP header in which the API key is passed (commonly Authorization). This allows support for providers with different header naming conventions.
- Model: The model you want to use. If not changed, the gpt-4.1-mini model is used by default.
- Max completion token: The maximum number of tokens the model can return in its response.
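The configurable header name is what lets one request shape serve providers that expect the key somewhere other than the Authorization header. A minimal sketch, assuming a hypothetical provider URL and placeholder key:

```python
base_url = "https://llm.example.com/v1"   # Base URL (hypothetical provider)
api_key = "<provider-key>"                # API key (placeholder)
api_key_header_name = "Authorization"     # Or e.g. "x-api-key" for other providers
model = "gpt-4.1-mini"                    # Model (default)

# Providers using the Authorization header conventionally expect a Bearer prefix;
# others typically take the raw key in their custom header.
value = f"Bearer {api_key}" if api_key_header_name == "Authorization" else api_key
headers = {api_key_header_name: value, "Content-Type": "application/json"}
body = {"model": model, "messages": [{"role": "user", "content": "Hello"}]}

print(headers)
```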
StudioAssist Preferences options
Refer to the items below for the list and descriptions of StudioAssist Preferences options.
Append tags for test cases that use AI generation capabilities
- API Test Case Generation - Check this option to automatically tag AI-generated API test cases with the default tag (API_Test_Generation) or a custom tag name of your choice. When enabled, StudioAssist adds an AI tag (default or custom) to each API test case it generates.
- StudioAssist Code Generation - Check this option to automatically tag AI-generated test automation scripts from structured user prompts with the default tag (GenAI) or a custom tag name of your choice.
Test cases with AI-generated tags are highlighted in purple.
Auto-include project context information
To improve the scripts generated by StudioAssist, you can enable the Object Repository information. This feature automatically uses the list of all test object IDs in the project as context.
This helps StudioAssist provide more tailored responses while reducing the effort of specifying the exact object paths or test objects you want to include in your test.
Auto-suggest follow-up questions in the chat
When enabled, StudioAssist automatically suggests follow-up questions after providing a successful answer. This gives users greater control over their chat experience, whether they prefer guided suggestions or a more minimal interface.

Customize engineering prompts with Prompt Library
Starting from version 10.2.3, you can customize your engineering prompts using Prompt Library to provide more context and improve the accuracy of StudioAssist responses.
- Click Katalon Studio Enterprise on the main navigation and select Settings to open the Preferences dialog.
- Select Katalon > StudioAssist > Prompt Library.
- Configure your Prompt Library. Click on the prompt type you want to customize. Edit this text directly to include more context or specific instructions about how you want the AI to respond.

- Chat instruction: Used in the StudioAssist chat window. This controls how StudioAssist responds to your general questions or guidance requests. Add more context about your application under test (AUT) or focus area, so you don’t have to repeat this every time you chat.
- Generate code: Used in the script editor. Add style guide details or coding preferences so the generated code better fits your project.
- Explain code: Used in the script editor. Specify whether you want detailed technical explanations or a high-level summary, depending on your needs.
- Use ${userSelection} for Generate code and Explain code. This variable represents the specific piece of text or code you have highlighted (selected) in the script editor.
- Customized prompts are not applied to the Katalon AI service.
- Click Apply or Apply and Close to save.
You can now use StudioAssist with your customized prompts.
If the generated output does not meet your expectations, simply open the Prompt Library and select Revert Original to revert an individual prompt, or click Restore to Defaults to restore all prompts.
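The ${userSelection} variable behaves like ordinary template substitution: your highlighted code is spliced into the prompt before it is sent to the model. As an illustration only (Python's string.Template happens to use the same ${...} syntax; the prompt wording and selected line are invented, not Katalon's actual prompts):

```python
from string import Template

# A customized "Explain code" prompt, as it might appear in the Prompt Library
# (the wording here is an invented example).
explain_prompt = Template(
    "Explain the following Groovy test script step by step, "
    "at a high level suitable for a new team member:\n${userSelection}"
)

# The editor selection is substituted in before the prompt is sent to the model.
selected_code = "WebUI.click(findTestObject('Page_Login/btn_Submit'))"
final_prompt = explain_prompt.substitute(userSelection=selected_code)

print(final_prompt)
```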