Agents user guide

Last updated Jul 29, 2025

Configuring simulations for agent tools

Note: This feature is available in preview.

Tool simulations let you test agent behavior using synthetic tool input and output instead of real API calls, supporting safe, fast, and reproducible development. Defined at design time, simulations use example JSON inputs and outputs to mimic realistic tool interactions while preserving guardrails and execution constraints.

How to set up tool simulations

When adding a tool to your agent, you can choose how the agent interacts with it:
  • Simulate the tool call – The agent uses synthetic input/output generated by the simulation engine (no real API calls).
  • Call the real tool – The agent makes live API requests as configured.

This setting is defined at design time and carries over automatically to evaluation runs.

To configure simulation settings for a tool:

  1. Add the tool to your agent definition.
  2. Select the tool node in the agent canvas.
  3. In the Properties panel, locate the Simulation settings section.
  4. Enter example input and output values in JSON format. These examples help the simulation LLM mimic realistic tool interactions by conforming to your tool's expected schema and logic.
  5. (Optional) Add multiple examples to improve simulation variety and the accuracy and relevance of simulated outputs during test runs (see the sample pair after these steps).
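
For illustration, a minimal example pair for a hypothetical weather-lookup tool might look like the following (the tool, field names, and values are assumptions for this sketch, not part of the product):

Example input:
{
  "city": "Berlin",
  "unit": "celsius"
}

Example output:
{
  "temperature": 18.5,
  "unit": "celsius",
  "conditions": "partly cloudy"
}

The closer your examples match the tool's real schema and logic, the more realistic the outputs the simulation LLM can generate for inputs it has not seen.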

You can also set up simulations when configuring evaluations. For details, refer to Configuring simulations in evaluations.

Figure 1. Adding an example input for a tool simulation

Guardrails and execution behavior in simulation mode

Simulated tool runs respect the same tool guardrails and execution constraints as real runs. This ensures:

  • Simulated behavior reflects real-world safety conditions.
  • Agents cannot bypass policy enforcement or invocation restrictions during testing.

Simulation runs are transparently labeled and easy to inspect, with the following built-in behaviors:

  • Trace labeling: Each simulation is clearly marked in both the streaming execution view and persistent trace logs.
  • Unique metadata: Every simulated trace includes metadata that identifies it as simulated, enabling:
    • Easy debugging
    • Reliable filtering
    • Accurate run classification
This ensures you can differentiate simulated vs. real behavior, trace agent logic, and reproduce results consistently.
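
As a purely illustrative sketch, a simulated trace entry might carry metadata along these lines (the field names below are assumptions, not UiPath's actual trace schema; comments are added for readability and are not valid JSON):

{
  "toolName": "weather_lookup",   // hypothetical tool from the earlier example
  "isSimulated": true,            // marks the trace as simulated for filtering
  "source": "simulation-engine"   // distinguishes synthetic output from a live call
}

Filtering on a flag like this is what makes it possible to separate simulated runs from real ones when classifying results.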

You can easily add simulated runs to evaluation sets, helping expand test coverage without calling real endpoints. Key characteristics of this feature include:

  • Simulated runs are clearly marked as simulated within evaluation sets.
  • Their corresponding traces retain the simulation tag throughout.
  • Evaluation sets can contain a mix of real and simulated runs, supporting comparative testing and broader scenario validation.

Test configuration with simulations

You can also configure simulations when initiating test runs using the Test configuration panel.

To enable simulations in test configuration:
  1. Finalize your agent definition and evaluation setup.
  2. Select the Test on cloud dropdown.
  3. Select Test configuration.
  4. In the Test configuration panel, select the Project arguments tab. Here you can:
    • Generate input data for your test run.
    • Enable simulation mode for selected tools or escalations.
    • Provide input/output examples or simulation instructions that define how the tools should behave (see the sample instruction after the figure below).
  5. Start the test to simulate the agent trajectory using the provided configuration.
Figure 2. Configuring tool simulations in the Test configuration window
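
For example, a simulation instruction for the hypothetical weather-lookup tool above might read as follows (the wording is an assumption for this sketch; it is a free-text description of the expected behavior):

  "Return realistic temperatures between -10 and 35 degrees Celsius for European cities. For a city the tool would not recognize, return an error object with a message field instead of a temperature."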

Regardless of where simulations are initiated (evaluations or test configurations), all simulated runs:
  • Are clearly labeled in streaming and persistent trace logs.
  • Retain metadata to distinguish them from real runs.
  • Keep the same tool guardrails and validation logic as real tool executions.
  • Are fully traceable, so you can debug with confidence and avoid confusing simulated behavior with real behavior.
