- Getting started
- Project management
- Documents
- Working with Change Impact Analysis
- Create test cases
- Assigning test cases to requirements
- Cloning test cases
- Exporting test cases
- Linking test cases in Studio to Test Manager
- Delete test cases
- Manual test cases
- Importing manual test cases
- Document test cases with Task Capture
- Parameters
- Enabling governance at project level
- Disabling governance at project level
- Enabling governance at test-case level
- Disabling governance at test-case level
- Managing approvers for governed test cases
- Managing governed test cases in the In Work state
- Managing governed test cases in the In Review state
- Managing governed objects in the Signed state
- Managing comments for governed test cases
- Applying filters and views
- Importing Orchestrator test sets
- Creating test sets
- Adding test cases to a test set
- Assigning default users in test set execution
- Enabling activity coverage
- Enabling Healing Agent
- Configuring test sets for specific execution folders and robots
- Overriding parameters
- Cloning test sets
- Exporting test sets
- Applying filters and views
- Accessibility testing for Test Cloud
- Searching with Autopilot
- Project operations and utilities
- Test Manager settings
- ALM tool integration
- API integration
- Troubleshooting

Test Manager user guide
To evaluate requirements:
- Write clear, complete requirements with measurable acceptance criteria.
- Use focused evaluations (for example, security or performance).
- Attach supporting files, such as guidelines or specs.
- Use the Prompt Library to standardize evaluations.
To generate manual tests:
- Ensure requirements include full user flows and expected outcomes.
- Add supporting files (mockups, diagrams) for context.
- Tailor instructions for specific scenarios.
- Reuse prompts from the Prompt Library.
To convert text into code:
- Specify the language and goal clearly (for example, "Refactor this C# method").
- Keep prompts short and direct.
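As a hypothetical illustration of the kind of transformation a short, direct prompt such as "Refactor this method to remove duplication" might request (the method and its names are invented for this example, and Python stands in for whichever language you specify):

```python
# Before: the clean-up-and-format logic is repeated for each field.
def describe_user_before(name: str, email: str) -> str:
    label = "Name: " + name.strip().title()
    contact = "Email: " + email.strip().lower()
    return label + ", " + contact

# After: the shared clean-up step is factored into one helper.
def _clean(value: str, transform) -> str:
    return transform(value.strip())

def describe_user(name: str, email: str) -> str:
    label = "Name: " + _clean(name, str.title)
    contact = "Email: " + _clean(email, str.lower)
    return label + ", " + contact

print(describe_user("  ada lovelace ", " ADA@EXAMPLE.COM "))
# → Name: Ada Lovelace, Email: ada@example.com
```

Stating both the goal ("remove duplication") and the language in the prompt keeps the result this focused; a vague prompt tends to produce broader, less predictable rewrites.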
To automate manual tests:
- Use a consistent object repository and naming conventions.
- Write manual steps using terms that match UiPath activity names.
To generate test data:
- Use prompts to define value ranges, patterns, or combinations.
- Reference existing arguments in your workflows.
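To make "value ranges, patterns, or combinations" concrete, here is a minimal sketch of the kind of data such a prompt might describe; the argument names (amount, currency) are invented for the example and would normally match existing arguments in your workflow:

```python
from itertools import product

# Hypothetical sketch: a prompt like "combine amounts from 0 to 100
# in steps of 50 with each supported currency code" defines one
# value range and one enumerated set, then asks for all combinations.
amounts = range(0, 101, 50)      # value range: 0, 50, 100
currencies = ["USD", "EUR"]      # enumerated pattern

test_data = [
    {"amount": amount, "currency": currency}
    for amount, currency in product(amounts, currencies)
]

print(len(test_data))   # 3 amounts x 2 currencies = 6 rows
print(test_data[0])     # {'amount': 0, 'currency': 'USD'}
```

Phrasing the prompt in terms of ranges and sets like this makes the expected output verifiable: you can predict exactly how many rows should come back and what each row should contain.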