Test Automation Best Practices
To make the most out of your testing efforts, consider the following test automation best practices, grouped by application testing and RPA testing.
Application testing
- Test cases should be independent of each other. One test case should not depend on another test case’s run.
- A test case should have one specific purpose only. Each test workflow should contain only one verification.
- Every feature should have a unit test. If exceptions are allowed, create a separate test for each exception.
- In a Given-When-Then test case structure, if the Given part is getting too extensive and unmanageable, try to redefine the test case. It might need more granularity or refactoring (see the sketch after this list).
- Maintain the test cases and update them after any change request.
- Consider establishing a Test Management logic to have a single way of defining test cases.
- To increase reusability between individual test projects, as well as between test and RPA projects, try to use libraries and object repository, whenever possible.
- Include the tests in the CI/CD pipeline.
- Functional tests that are part of your CI pipeline should run as quickly as possible, so as not to delay your build. Therefore, try to execute these tests in parallel on as many robots as possible.
- Activity names should reflect the action taken. For non-obvious behaviors, consider using annotations on your activities.
- Consider using detailed logging and exception handling to debug the process and avoid false negative results.
- Plan for recovery or retry for errors at different stages to avoid failed results.
- Consider having a folder structure dedicated to testing and using the same test case naming convention across your projects.
- Use assets for variables that are likely to change and be used many times.
- For scenarios where an application’s state must be validated before proceeding with certain steps in a process, consider applying validation measures. These measures can include using extra activities that wait for the desired application state before other interactions (hardcoded delays are not considered good practice).
- Consider using the simulate click/type or send window messages input methods, whenever possible.
- Do not delete, move, or rename test cases outside of Studio. Perform these actions in Studio only. Use Import Test Cases if test cases from another project need to be referenced.
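These structural guidelines are not specific to UiPath workflows. As a minimal illustrative sketch (plain Python with the standard unittest module, using a hypothetical InvoiceApp as the system under test, not UiPath code), the following shows a Given-When-Then layout with exactly one verification per test case and no dependency between test cases:

```python
import unittest


class InvoiceApp:
    """Hypothetical system under test, standing in for the application
    or workflow that a UiPath test case would drive."""

    def __init__(self):
        self.invoices = []

    def add_invoice(self, amount):
        self.invoices.append(amount)

    def total(self):
        return sum(self.invoices)


class InvoiceTotalTests(unittest.TestCase):
    def setUp(self):
        # "Given": every test builds its own fresh state, so no test
        # depends on another test case's run.
        self.app = InvoiceApp()
        self.app.add_invoice(100)

    def test_total_includes_new_invoice(self):
        # "When": a single action...
        self.app.add_invoice(50)
        # "Then": ...followed by a single verification.
        self.assertEqual(self.app.total(), 150)

    def test_total_of_initial_state(self):
        # A different purpose gets its own test case.
        self.assertEqual(self.app.total(), 100)


if __name__ == "__main__":
    unittest.main()
```

Because each test builds its own state, the tests can run in any order, individually or in parallel.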
RPA testing
- Test cases should be independent of each other. One test case should not depend on another test case’s run.
- Consider creating small workflows that tackle the smallest number of actions possible. This makes them easier to understand and to unit test.
- A test case should have one specific purpose only. Each test workflow should contain only one verification.
- Every feature should have a unit test. If exceptions are allowed, create a separate test for each exception.
- To increase reusability between individual test projects, as well as between test and RPA projects, try to use libraries and object repository, whenever possible.
- In a Given-When-Then test case structure, if the Given part is getting too extensive and unmanageable, try to redefine the test case. It might need more granularity or refactoring. Modularity is the key to good unit testing, and writing tests can act as feedback or code review on the development.
- Use mocking whenever there are complex steps, irrelevant to the test case’s purpose, that can be replaced (see the sketch after this list).
- Consider establishing a Test Management logic to have a single way of defining test cases.
- Maintain the test cases and update them after any change request.
- Include the tests in the CI/CD pipeline.
- Run your test cases whenever you commit a change to your RPA to make sure you did not introduce a bug.
- Prepare an RPA test set that can be run by IT in a pre-production environment whenever they plan to roll out an environment change (like a Windows update), so that you can catch potential issues before they hit production.
- Activity names should reflect the action taken. For non-obvious behaviors, consider using annotations on your activities.
- Plan for recovery or retry for errors at different stages to avoid failed results.
- Consider having a folder structure dedicated to testing and using the same test case naming convention across your projects.
- Use assets for variables that are likely to change and be used many times.
- For scenarios where an application’s state must be validated before proceeding with certain steps in a process, consider applying validation measures. These measures can include using extra activities that wait for the desired application state before other interactions (hardcoded delays are not considered good practice).
- Consider using the simulate click/type or send window messages input methods, whenever possible.
- Do not delete, move, or rename test cases outside of Studio. Perform these actions in Studio only. Use Import Test Cases if test cases from another project need to be referenced.
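The mocking guideline can be illustrated the same way. In this minimal Python sketch (the process_order function and its payment gateway are hypothetical stand-ins, not part of UiPath), the gateway call is complex and irrelevant to the discount logic being verified, so it is replaced with a mock from the standard unittest.mock module:

```python
from unittest import TestCase, main
from unittest.mock import Mock


def process_order(amount, gateway):
    """Hypothetical process under test: applies a bulk discount and
    charges the result through an external payment gateway."""
    if amount > 100:
        amount *= 0.9          # apply a 10% bulk discount
    gateway.charge(amount)     # external step, mocked in the test
    return amount


class DiscountTests(TestCase):
    def test_bulk_discount_applied(self):
        # "Given" a mocked gateway instead of the real, slow dependency.
        gateway = Mock()
        # "When" the order is processed...
        process_order(200, gateway)
        # "Then" one verification: the discounted amount is charged.
        gateway.charge.assert_called_once_with(180.0)


if __name__ == "__main__":
    main()
```

Replacing the irrelevant dependency keeps the test fast and focused on its single purpose.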
The following table shows how to perform RPA testing of your attended automations, based on specific testing scenarios.
| Who performs the testing | Testing procedure | Capturing test results | License requirements |
|---|---|---|---|
| • Manual testers • UAT or business SMEs | | The results, including screenshots and documents, are available to the UAT tester in Test Manager. | The UAT tester requires an Attended Automation license. You can manage your licenses based on your deployment type, following one of two scenarios: • Publish the automation packages to your Orchestrator production environment that has an allocated Attended Automation license for the UAT tester. • Allocate a subset of Attended User licenses to your Orchestrator non-production environment. |
| | | | Testing runtimes |