Agents user guide
Configuring simulations for agent tools
Tool simulations enable agents to test behavior using synthetic input and output instead of real API calls, supporting safe, fast, and reproducible development. Defined at design time, simulations use example JSON inputs and outputs to mimic realistic interactions while preserving guardrails and execution constraints.
For each tool, you can choose one of two modes:
- Simulate the tool call – The agent uses synthetic input/output generated by the simulation engine; no real API calls are made.
- Call the real tool – The agent makes live API requests as configured.
This setting is defined at design time and persists automatically into evaluation-time runs.
To configure simulation settings for a tool:
- Add the tool to your agent definition.
- Select the tool node in the agent canvas.
- In the Properties panel, locate the Simulation settings section.
- Enter example input and output values in JSON format. These examples help the simulation LLM mimic realistic tool interactions by conforming to your tool's expected schema and logic.
- (Optional) Add multiple examples to improve simulation variety, as well as the accuracy and relevance of simulated outputs during test runs.
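For instance, for a hypothetical weather-lookup tool, the example input and output might look like the following. The field names and the wrapper keys here are purely illustrative, not part of any specific tool schema:

```json
{
  "exampleInput": { "city": "Amsterdam", "units": "metric" },
  "exampleOutput": { "temperatureC": 18, "condition": "partly cloudy" }
}
```

The closer these examples match your tool's real schema and typical values, the more realistic the simulated responses will be.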
You can also set up simulations when configuring evaluations. For details, refer to Configuring simulations in evaluations.
Simulated tool runs keep existing tool guardrails and execution constraints, just like real runs. This ensures:
- Simulated behavior reflects real-world safety conditions.
- Agents cannot bypass policy enforcement or invocation restrictions during testing.
Simulation runs are transparently labeled and easy to inspect, with the following built-in behaviors:
- Trace labeling: Each simulation is clearly marked in both the streaming execution view and persistent trace logs.
- Unique metadata: Every simulated trace includes metadata that identifies it as simulated, enabling:
  - Easy debugging
  - Reliable filtering
  - Accurate run classification
This ensures you can differentiate simulated from real behavior, trace agent logic, and reproduce results consistently.
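As an illustration, the metadata attached to a simulated trace might carry a flag along these lines. This shape is a hypothetical sketch for intuition, not the actual trace schema:

```json
{
  "traceId": "sample-trace-001",
  "toolName": "GetWeather",
  "simulated": true
}
```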
You can easily add simulated runs to evaluation sets, helping expand test coverage without calling real endpoints. Key characteristics of this feature include:
- Simulated runs are clearly marked as simulated within evaluation sets.
- Their corresponding traces retain the simulation tag throughout.
- Evaluation sets can contain a mix of real and simulated runs, supporting comparative testing and broader scenario validation.
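Because simulated traces retain identifying metadata, exported run data can be split programmatically. A minimal sketch in Python, assuming each exported trace is a dictionary with a hypothetical `simulated` boolean field (the actual export format and field names may differ):

```python
# Sketch: split an exported evaluation set into real and simulated runs.
# The "simulated" field name is an assumption for illustration; check the
# actual trace metadata produced by your platform.

def split_runs(traces):
    """Return (real_runs, simulated_runs) from a list of trace dicts."""
    simulated = [t for t in traces if t.get("simulated", False)]
    real = [t for t in traces if not t.get("simulated", False)]
    return real, simulated

# Example: a mixed evaluation set with both kinds of runs.
traces = [
    {"traceId": "run-1", "simulated": True},
    {"traceId": "run-2", "simulated": False},
    {"traceId": "run-3"},  # missing flag -> treated as a real run
]
real, simulated = split_runs(traces)
print(len(real), len(simulated))  # prints "2 1"
```

A comparison like this makes it easy to evaluate simulated and real runs side by side within the same evaluation set.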
You can also configure simulations when initiating test runs using the Test configuration panel:
- Finalize your agent definition and evaluation setup.
- Select the Test on cloud dropdown.
- Select Test configuration.
- In the Test configuration panel, select the Project arguments tab. Here you can:
  - Generate input data for your test run.
  - Enable simulation mode for selected tools and/or escalations.
  - Provide input/output examples or simulation instructions that define how the tools should behave.
- Start the test to simulate the agent trajectory using the provided configuration.
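Simulation instructions are typically short natural-language descriptions of how the simulated tool should behave. A hypothetical example for a customer-lookup tool, with illustrative field names and wording:

```json
{
  "tool": "LookupCustomer",
  "simulationInstructions": "Return a valid customer record for known IDs; return a not-found error for unknown IDs.",
  "exampleInput": { "customerId": "C-1042" },
  "exampleOutput": { "name": "Jane Doe", "status": "active" }
}
```

Combining instructions with concrete examples helps the simulation engine produce responses that cover both success and failure paths.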
Simulated runs initiated this way:
- Are clearly labeled in streaming and persistent trace logs.
- Retain metadata to distinguish them from real runs.
- Keep the same tool guardrails and validation logic as real tool executions.
- Are fully traceable, helping you debug with confidence and avoid test confusion.