Test Manager user guide
Autopilot™ for Testers best practices
Autopilot™ for Testers helps automate and enhance tasks across your testing lifecycle. Follow these best practices to improve accuracy and usability.
Requirement evaluation
To evaluate requirements:
- Write clear, complete requirements with measurable acceptance criteria.
- Use focused evaluations (for example: security or performance).
- Attach supporting files, such as guidelines or specs.
- Use the Prompt Library to standardize evaluations.
Manual test generation
To generate manual tests:
- Ensure requirements include full user flows and expected outcomes.
- Add supporting files (mockups, diagrams) for context.
- Tailor instructions for specific scenarios.
- Reuse prompts from the Prompt Library.
Converting text into code
To convert text into code:
- Specify the language and goal clearly (for example: “Refactor this C# method”).
- Keep prompts short and direct.
Automating manual tests
To automate manual tests:
- Use a consistent object repository and naming conventions.
- Write manual steps using terms that match UiPath activity names.
Generating test data
To generate test data:
- Use prompts to define value ranges, patterns, or combinations.
- Reference existing arguments in your workflows.
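A "combinations" prompt asks for every pairing of the value sets you define. The structure of that output can be sketched in a few lines of Python; the field names and candidate values below are hypothetical, not part of any Autopilot API:

```python
from itertools import product

# Hypothetical input fields and candidate values for a login-form test.
usernames = ["valid_user", "unknown_user", ""]
passwords = ["Correct1!", "wrong", ""]
locales = ["en-US", "de-DE"]

# Build every combination of the defined value sets, i.e. the same
# row structure a "combinations" test-data prompt would describe.
test_data = [
    {"username": u, "password": p, "locale": loc}
    for u, p, loc in product(usernames, passwords, locales)
]

print(len(test_data))  # 3 * 3 * 2 = 18 rows
```

Defining ranges and value sets explicitly, as above, is what makes a test-data prompt reproducible instead of open-ended.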
Analyzing test results
To analyze test results:
- Run insights on a large volume of test results for better accuracy.
- Focus on:
  - Common Errors: frequent failure clusters.
  - Error Patterns: recurring test failure themes.
  - Recommendations: actionable fixes to stabilize tests.
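The grouping behind a Common Errors view amounts to counting how often each failure message recurs. A minimal sketch, with hypothetical result records in place of a real Test Manager export:

```python
from collections import Counter

# Hypothetical test-result records; in practice these would be read
# from exported results rather than hard-coded.
results = [
    {"test": "TC-01", "status": "failed", "error": "Selector not found: btnSubmit"},
    {"test": "TC-02", "status": "failed", "error": "Timeout waiting for element"},
    {"test": "TC-03", "status": "failed", "error": "Selector not found: btnSubmit"},
    {"test": "TC-04", "status": "passed", "error": None},
    {"test": "TC-05", "status": "failed", "error": "Selector not found: btnSubmit"},
]

# Count how often each error message occurs across failed runs:
# the most frequent clusters are the ones worth stabilizing first.
common_errors = Counter(r["error"] for r in results if r["status"] == "failed")

for message, count in common_errors.most_common():
    print(f"{count}x {message}")
```

This also shows why volume matters: with only a handful of results, every message looks unique, and no cluster stands out.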