Test Manager user guide
- Log in to Test Manager.
- Open a project.
- Perform a dry run. Open a performance scenario and select Dry run.
Tip: A dry run executes each load group with a single robot to validate automation stability and detect infrastructure misconfigurations. The dry run calculates the required resources before full execution.
- Run a full execution. Open a performance scenario for which you already performed a dry run, then select Full execution. The execution screen opens automatically.
- Monitor the dashboard in real time and check the execution status. The progress bar displays four sequential phases.

- Loading test configuration - The system validates the scenario setup and loads the configuration details (test cases, load groups, thresholds, and data sources).
- Provisioning resources - The required execution resources are allocated.
  - For cloud robots, this means provisioning serverless robots and consuming Platform Units.
  - For on-premises robots, this means verifying that the correct machines and runtimes are available.
- Preparing virtual users - Virtual users are initialized based on the defined load group settings, which includes connecting robots, assigning test cases, and preparing the execution environment.
- Full execution - The actual performance test runs according to the configured load profile (ramp-up, peak, and ramp-down; a sketch of such a profile appears after this list). Real-time monitoring of metrics (response times, error rates, infrastructure usage) becomes available at this stage.
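The ramp-up, peak, and ramp-down phases can be reasoned about as a simple piecewise function of time. The following Python sketch is purely illustrative: the durations, the user count, and the linear ramps are hypothetical assumptions, not values or behavior read from Test Manager.

```python
# Illustrative sketch: active virtual users over time for a
# ramp-up / peak / ramp-down load profile. All durations (seconds)
# and counts below are hypothetical example values.

def active_virtual_users(t: float, ramp_up: float = 60.0,
                         peak: float = 300.0, ramp_down: float = 60.0,
                         max_users: int = 50) -> int:
    """Return the number of active virtual users at second t."""
    if t < 0:
        return 0
    if t < ramp_up:                      # users are added linearly
        return round(max_users * t / ramp_up)
    if t < ramp_up + peak:               # full load is sustained
        return max_users
    end = ramp_up + peak + ramp_down
    if t < end:                          # users are released linearly
        return round(max_users * (end - t) / ramp_down)
    return 0

# Example: halfway through ramp-up, half the users are active.
print(active_virtual_users(30))   # -> 25
print(active_virtual_users(200))  # -> 50 (peak phase)
```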
- Consult the execution overview. The dashboard shows a summary of the performance test execution (a small aggregation sketch follows the list below).

- Load groups: Active load groups currently executing in parallel.
- Virtual users: Currently active virtual users for the entire scenario.
- Errors: Errors that have occurred so far in the run (HTTP and automation errors) across all groups.
- Average response time: Average and maximum detected response times across all groups.
- Graph: Load profile with a visual representation of the progress.
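If you script against exported run data, these overview values reduce to simple aggregations. A minimal sketch, assuming a hypothetical list of per-request records; the field names (group, response_ms, error) are illustrative stand-ins, not a Test Manager export format.

```python
# Minimal aggregation sketch over hypothetical per-request records.
records = [
    {"group": "checkout", "response_ms": 840,  "error": False},
    {"group": "checkout", "response_ms": 1210, "error": True},
    {"group": "search",   "response_ms": 430,  "error": False},
]

groups = {r["group"] for r in records}           # distinct load groups
errors = sum(r["error"] for r in records)        # error count so far
times = [r["response_ms"] for r in records]

print(f"Load groups: {len(groups)}")                               # 2
print(f"Errors so far: {errors}")                                  # 1
print(f"Average response time: {sum(times) / len(times):.0f} ms")  # 827 ms
print(f"Maximum response time: {max(times)} ms")                   # 1210 ms
```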
- Consult the metrics. The histogram represents the overall average response time for the currently selected load group. You can resize and move the highlighted bar to zoom into a specific time range. Several charts are also provided.

- The Load Profile chart section shows how many virtual users were active at a given time. This reflects the configured ramp-up, peak, and ramp-down phases.
- The HTTP Response Time (ms) chart section tracks the average response time of HTTP requests over the selected period. Compare it against thresholds (e.g., 1,000 ms) to see where performance degrades.
- The HTTP Errors chart section displays the percentage of HTTP-level errors (e.g., 404, 503). This helps identify whether server or network issues are causing instability.
- The Automation Step Duration (ms) chart section measures how long individual automation steps take to execute. Spikes may indicate inefficiencies or issues in the automation design.
- The Automation Errors (%) chart section shows the percentage of automation-level errors (e.g., failed selectors, exceptions). This helps differentiate system errors from automation issues.
- The Infrastructure – Executing Robots CPU (%) chart section monitors CPU usage of the robots executing the load. High or sustained CPU usage can indicate a resource bottleneck.
- The Infrastructure – Executing Robots Memory (%) chart section tracks memory consumption of executing robots. This is useful for spotting memory leaks or excessive usage over time.
- Percentile metrics such as P50, P90, and P95 are shown to help you understand the distribution of response times and identify outliers that may impact user experience (see the sketch after this list). They are available for the HTTP Response Time, HTTP Errors, Automation Step Duration, and Automation Errors metrics.
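Percentiles describe the shape of the distribution rather than the mean: a P90 of 1,200 ms means 90% of requests completed within 1,200 ms. Below is a minimal sketch of how such values can be derived from raw samples; the sample data is hypothetical, and the 1,000 ms threshold is the illustrative figure mentioned above.

```python
# Percentile sketch over hypothetical response-time samples (ms).
from statistics import quantiles

samples = [220, 340, 410, 480, 520, 610, 750, 930, 1150, 2400]

# quantiles(n=100) yields the 1st..99th percentile cut points.
cuts = quantiles(samples, n=100)
p50, p90, p95 = cuts[49], cuts[89], cuts[94]
print(f"P50={p50:.0f} ms  P90={p90:.0f} ms  P95={p95:.0f} ms")

# Flag degradation against an example threshold of 1,000 ms.
threshold_ms = 1000
breaches = [s for s in samples if s > threshold_ms]
print(f"{len(breaches)} of {len(samples)} samples exceed {threshold_ms} ms")
```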
- Monitor issues during execution. Check the application log and the severity levels on the right side of the execution screen. For API performance testing, you can consult the execution progress and results in graph format and in metric format. The following information is displayed: API levels, average, minimum, and maximum. The log severity levels are listed below, followed by a small triage sketch.

- Info – general information, such as resource allocation
- Warning – threshold breaches or potential risk conditions
- Error – automation or HTTP failures (e.g., request timeouts, selector errors)
- Fatal – severe execution failures that prevent the test from continuing
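For long runs it can be quicker to triage log entries in code than to scroll the panel. A minimal sketch, assuming a hypothetical list of (severity, message) tuples; this layout is an illustrative assumption, not a documented Test Manager export format.

```python
# Severity triage sketch over hypothetical exported log entries.
from collections import Counter

log = [
    ("Info",    "Allocated 10 serverless robots"),
    ("Warning", "P90 response time above 1000 ms threshold"),
    ("Error",   "HTTP 503 from /api/checkout"),
    ("Fatal",   "Execution aborted: no robots available"),
]

# Count entries per severity level.
counts = Counter(severity for severity, _ in log)
print(counts)

# Surface only the entries that need immediate attention.
for severity, message in log:
    if severity in ("Error", "Fatal"):
        print(f"[{severity}] {message}")
```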