Testing Frameworks Integration
This documentation page assumes that the two preliminary steps (scan and runtime instrumentation of the application under test) are already completed.
When your integration is successful, you will see:

- In the Coverage Dashboard, the Test Stage name is associated with your application entry, detailing coverage and test results (passed, failed, skipped).
- In the Test Optimization pages, for supported frameworks, you can see the potential and/or actual savings of your tests (including details on the recommendation reasons) for this specific application/branch/test stage.
Integration for Supported Testing Frameworks
Built-in Integrations with other Tricentis Solutions
Framework name | Documentation link | Test Optimization Support |
---|---|---|
Tricentis Tosca | | (Starting from upcoming Tosca 2025.1) |
Tricentis Testim | | |
Integrations with the most common Testing Frameworks
This section provides a tabulated summary of the integration support offered by SeaLights for various testing frameworks, categorized by the underlying technology and linked to their respective documentation pages or sections. For more detailed guidance on these framework integration processes, follow the documentation links provided in the table or refer to the related sections within the SeaLights support documentation.
Framework name | Underlying Technology | Documentation link | Test Optimization |
---|---|---|---|
Cucumber JS | JavaScript | | |
Cypress | JavaScript | | |
Playwright JS | JavaScript | | |
TestCafe | JavaScript | | |
Jest | JavaScript | | |
Mocha | JavaScript | | |
Karma | JavaScript | | |
AVA | JavaScript | | |
Protractor | JavaScript | | |
Robot | Python | | |
Pytest | Python | Running Tests with unittest, unittest2, pytest, nose or behave | Running tests with pytest |
Behave | Python | Running Tests with unittest, unittest2, pytest, nose or behave | Running tests with behave |
Cucumber Java | Java | | |
JUnit, TestNG | Java | | |
SoapUI | Java | | |
JMeter | Java | | |
xUnit | .NET | | |
NUnit | .NET | | |
MSTest, VSTest | .NET | | |
Go Test | Go | | |
Ginkgo | Go | | |
Godog, Testify, GoConvey | Go | | |
Integrations maintained by Third-Parties
Framework name | Underlying Technology | Documentation link | Test Optimization |
---|---|---|---|
Katalon Studio | Java | | |
Ginger by Amdocs | .NET | | |
Generic Integration for Unsupported Frameworks
SeaLights Public API
The SeaLights Public API provides integration capabilities with unsupported testing frameworks by allowing users to manage and optimize their test executions. Below are the detailed steps and API calls required for integrating Test Impact Analysis (TIA) using the SeaLights Public API:
1. Creating a Test Session: To begin, create a test session using the dedicated API call.
2. Getting Test Recommendations: To optimize the testing process, retrieve the list of recommended tests to run and use it to dynamically reduce the set of tests executed.
3. Running Your Tests Based on Recommendations: Run the recommended tests, assuming your application under test is properly instrumented with SeaLights.
4. Reporting Test Events: Once the tests are complete, report the test results, including the start and end timestamps of each test.
5. Closing a Test Session: Finally, close the test session (DELETE request) to complete the testing cycle and trigger the processing of all the related metadata (coverage and test results).
All the relevant details of the API calls are described in our dedicated section of the portal: Tricentis SeaLights Public API
Agents' legacy commands
When using the legacy approach of a SeaLights Agent to run your tests, you can still capture and report coverage and test-results data for your applications under test. Follow these steps:

1. Start Session: This command initializes a Test Stage Run for your build, which links all related test and coverage data. It is critical for associating results from different test stages.
2. Run Your Tests: Execute your tests, assuming your application under test is properly instrumented with SeaLights.
3. Upload Results: After executing your tests, upload the test results (passed, failed, skipped).
4. End Session: Conclude the session to finalize data collection. This ensures that the session is properly closed and all related data (coverage and test results) is processed.
This approach doesn’t support SeaLights Test Optimization and will not allow you to skip irrelevant tests in a seamless way. If you need that capability, please refer to the Public API section above.
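The start → run → upload → end ordering of the legacy flow can be modeled as a small state machine. This is a minimal illustrative sketch, not the agent's actual implementation; the class and method names are hypothetical, and the real per-technology agent commands are covered by the documentation links below.

```python
# Minimal sketch of the legacy agent session lifecycle described above.
# Names here are illustrative; they do not correspond to real agent CLIs.


class TestStageRun:
    """Tracks the start -> run/upload -> end ordering of a legacy session."""

    def __init__(self, build, stage):
        self.build = build
        self.stage = stage
        self.results = []        # (test_name, status) pairs
        self.open = False        # session started but not yet ended
        self.processed = False   # data processed after End Session

    def start(self):
        # "Start Session": initialize the Test Stage Run for this build.
        self.open = True
        return self

    def upload(self, name, status):
        # "Upload Results": only valid while the session is open.
        assert self.open, "session must be started before uploading results"
        assert status in {"passed", "failed", "skipped"}
        self.results.append((name, status))

    def end(self):
        # "End Session": close the session and mark its data as processed.
        assert self.open, "cannot end a session that was never started"
        self.open = False
        self.processed = True
        return self.results
```

The assertions encode the ordering constraint the steps imply: results cannot be uploaded before Start Session, and coverage/test data is only processed once End Session runs.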
Technology | Documentation link |
---|---|
Java | If you're using Maven or Gradle to trigger a testing framework that isn't directly supported by us but uses a supported test runner (like JUnit or TestNG) under the hood, please refer to the relevant integration guides above or contact Support for assistance. |
JavaScript | |
.NET | If you're using a testing framework that isn't directly supported by us but relies on a supported test runner (like xUnit, NUnit, or MSTest) under the hood, please refer to the relevant integration guides above or contact Support for assistance. |
Python | |
Go | |