- Getting started
- Project management
- Documents
- Working with Change Impact Analysis
- Importing Orchestrator test sets
- Creating test sets
- Adding test cases to a test set
- Assigning default users in test set execution
- Enabling activity coverage
- Enabling Healing Agent
- Configuring test sets for specific execution folders and robots
- Overriding parameters
- Cloning test sets
- Exporting test sets
- Applying filters and views
- Creating automated tests
- Executing performance scenarios
- Known limitations for performance testing
- Best practices for performance testing
- Troubleshooting performance testing
- Accessibility testing for Test Cloud
- Searching with Autopilot
- Project operations and utilities
- Test Manager settings
- ALM tool integration
- API integration
- Troubleshooting

Test Manager user guide
- Ensure test cases are robust, data-stable, and free from flakiness before scaling.
- Use a gradual ramp-up to simulate realistic traffic, and avoid unrealistically long or short peak phases.
- Prepare parametrized datasets (via Data Fabric) to avoid duplicate inputs that could skew results.
- Ensure there are sufficient infrastructure resources.
- Validate tests locally before publishing.
- Use the latest package versions.
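As a rough illustration of the parametrized-dataset idea, each virtual user can be handed its own unique input rows so that no two VUs ever submit the same data. This is a generic Python sketch with a hypothetical schema (`username`, `order_id`), not the Data Fabric API:

```python
import itertools

def build_vu_datasets(num_vus: int, iterations: int) -> dict:
    """Generate a unique input row per VU per iteration (hypothetical schema)."""
    counter = itertools.count(1)
    datasets = {}
    for vu in range(num_vus):
        datasets[vu] = [
            {"username": f"loadtest_user_{next(counter):05d}",
             "order_id": f"ORD-{vu:03d}-{i:04d}"}
            for i in range(iterations)
        ]
    return datasets

data = build_vu_datasets(num_vus=3, iterations=2)
# Every row across all VUs is distinct, so duplicate inputs cannot skew results.
all_rows = [tuple(r.values()) for rows in data.values() for r in rows]
assert len(all_rows) == len(set(all_rows))
```

Because each VU iterates over its own slice of the dataset, duplicate-input effects (cached responses, unique-constraint errors) are avoided by construction.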
The following recommendations expand on the general best practices for performance testing, specifically for browser automation scenarios. Performance testing requires running multiple virtual users (VUs) simultaneously on the same machine, which introduces constraints that do not exist in single-user execution.
- Avoid local file system operations.
Any step that reads from or writes to a local file relies on Windows file handles. Windows locks a file handle when it is opened by one process, blocking all other processes from accessing the same file. This causes failures when multiple VUs run concurrently.
Common examples:
- Reading test data from Excel or CSV — Multiple VUs cannot open the same file simultaneously. Use Data Fabric instead to serve test data concurrently without file handle contention.
- Writing screenshots into Word documents — Evidence capture is typically only relevant for functional testing and should not be part of a performance test. In a load test, hundreds of VUs run in loops — each iteration would generate its own documents, quickly producing an unmanageable volume of artifacts.
- Any other local file interaction — Configuration files, log files, intermediate data stores — all are subject to file handle locking.
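The file-handle problem can be illustrated with a small toy model, assuming exclusive locking semantics similar to Windows: once one VU holds a handle to a path, every other VU's attempt to open the same path fails until the handle is released. This is a simplified sketch, not real Win32 behavior:

```python
import threading

class ExclusiveFileRegistry:
    """Toy model of exclusive file-handle locking: a path opened by one
    process cannot be opened by another until the handle is released."""
    def __init__(self):
        self._lock = threading.Lock()
        self._open_paths = set()

    def open(self, path: str) -> bool:
        with self._lock:
            if path in self._open_paths:
                return False          # handle already held by another VU
            self._open_paths.add(path)
            return True

    def close(self, path: str) -> None:
        with self._lock:
            self._open_paths.discard(path)

registry = ExclusiveFileRegistry()

# Two VUs reading the same Excel file: the second open fails.
assert registry.open(r"C:\data\testdata.xlsx") is True
assert registry.open(r"C:\data\testdata.xlsx") is False
```

Serving each VU its rows from a shared data service instead of a local file removes this contention entirely, because no VU ever holds a file handle.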
- Prefer the Chromium API; avoid Hardware Events and Computer Vision.
| Approach | Description | Multi-VU compatibility |
| --- | --- | --- |
| Chromium API (simulated DOM events) | Triggers events directly on DOM elements within the browser instance via selectors. No real input occurs at OS level; the browser handles the interaction internally. | Excellent: independent per instance; works on background windows. |
| Hardware Events | Generates real mouse/key input at OS level. The OS delivers it to the currently active (foreground) window. | Poor: input goes to whichever window is in the foreground, not necessarily the intended browser instance. |
| Computer Vision | Locates elements by visual pattern matching on screen. | Not viable: background browser instances are invisible to image recognition. |

- Always default to Chromium API.
Chromium API operates directly on the DOM and works regardless of whether the browser window is in the foreground, minimized, or hidden.
- Avoid Hardware Events.
Hardware events are delivered by the OS to the active window. With multiple VUs, a hardware event intended for one browser instance is sent to whichever window is currently in the foreground, so hardware events are unsuitable when multiple automations run in parallel on the same machine.
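The difference between the two input models can be sketched as a toy simulation (names like `Desktop`, `hardware_click`, and `dom_click` are invented for illustration and are not product APIs): OS-level input always lands on the foreground window, while a DOM event is dispatched inside a specific browser instance regardless of focus.

```python
class Desktop:
    """Toy model: hardware input goes to the foreground window;
    DOM events are dispatched to a specific browser instance."""
    def __init__(self, num_browsers: int):
        self.clicks = {i: [] for i in range(num_browsers)}
        self.foreground = 0

    def hardware_click(self, intended: int, button: str) -> None:
        # The OS has no notion of which VU intended the click.
        self.clicks[self.foreground].append(button)

    def dom_click(self, intended: int, button: str) -> None:
        # Handled inside the target browser, foreground or not.
        self.clicks[intended].append(button)

desk = Desktop(num_browsers=2)
desk.foreground = 0
desk.hardware_click(intended=1, button="Submit")  # lands on browser 0!
desk.dom_click(intended=1, button="Submit")       # lands on browser 1
assert desk.clicks[0] == ["Submit"]
assert desk.clicks[1] == ["Submit"]
```

The mis-delivered hardware click is exactly the failure mode seen when two or more VUs share one desktop session.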
- Avoid Computer Vision.
Computer Vision cannot interact with background windows and is therefore incompatible with multi-VU execution. Incorrect or ambiguous selectors are the most common reason a framework falls back to computer vision. Ensure selectors are correct and validated during development. Strict selectors (IDs, data-test attributes, ARIA roles) work well, particularly for applications like SAP Fiori. Fuzzy selectors are also compatible with performance testing as long as they reliably match the intended element without triggering a computer vision fallback.
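Why ambiguity triggers a fallback can be shown with a minimal attribute-matching sketch (a generic model, not the product's selector engine): a strict selector such as a unique `id` resolves to exactly one element, while an ambiguous one matches several, which is the point where a framework might resort to computer vision.

```python
def match(selector: dict, elements: list) -> list:
    """Return elements whose attributes contain all selector key/value pairs."""
    return [e for e in elements
            if all(e.get(k) == v for k, v in selector.items())]

page = [
    {"tag": "button", "id": "submit-order", "text": "Submit"},
    {"tag": "button", "text": "Submit"},
]

# Strict selector (unique id): exactly one match, no fallback needed.
assert len(match({"id": "submit-order"}, page)) == 1

# Ambiguous selector: multiple matches; a framework may fall back to
# computer vision here, which breaks on background windows.
assert len(match({"tag": "button", "text": "Submit"}, page)) == 2
```

Validating during development that every selector resolves to a single element keeps the automation on the DOM path under load.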
- Dry Run recommendation
When you add a test case to a performance scenario for the first time, the tool prompts you to perform a dry run. A dry run executes multiple instances of the test case on the same machine to confirm there are no file locks, input conflicts, or selector issues before scaling to full load.
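Conceptually, a dry run is just the test case executed N times concurrently with any failures collected for review. The harness below is an illustrative sketch (the `dry_run` helper is hypothetical, not a product API):

```python
from concurrent.futures import ThreadPoolExecutor

def dry_run(test_case, instances: int = 3) -> list:
    """Run the same test case concurrently on one machine and collect
    errors, surfacing file locks or input conflicts before full load."""
    errors = []

    def wrapped(vu: int) -> None:
        try:
            test_case(vu)
        except Exception as exc:
            errors.append((vu, exc))   # list.append is thread-safe

    with ThreadPoolExecutor(max_workers=instances) as pool:
        list(pool.map(wrapped, range(instances)))
    return errors

# A VU-safe test case (no shared files or foreground input) reports no errors.
assert dry_run(lambda vu: None) == []
```

If the dry run reports errors from any instance, fix the shared-resource or selector issue before scheduling the full scenario.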