agents
latest
false
  • Getting started
    • About agents
    • Agents workspaces
    • Governance
    • Data residency and supported models
    • Limitations
  • Prerequisites
  • Building agents in Studio Web
    • Building an agent in Studio Web
    • Best practices
    • Prompts
    • Contexts
    • Escalations and Agent Memory
    • Evaluations
    • Traces
    • Agent score – Preview
  • Building agents in Agent Builder
UiPath logo, featuring letters U and I in white

Agents user guide

Last updated Apr 30, 2025

Best practices

This section explains how to design a robust agent that has all of the necessary control, action, and governance required to automate a business process.

Start from existing workflows

Developing effective LLM agents requires a strategic approach that builds upon existing organizational workflows. Start by conducting a thorough audit of current automation processes, to identify repetitive, rule-based tasks that are prime candidates for agentic transformation. This approach minimizes risk and allows for gradual skill-building in agent design.

The initial phase of agent building involves a detailed workflow mapping, where you document each step of existing processes. Take note of decision points, input dependencies, and expected outputs. Look for workflows with clear, structured steps and well-defined success criteria. Examples might include customer support ticket routing, preliminary sales qualification, or standard compliance checking.

The agent design should be specifically focused on the set of tasks in existing automations. That way, the agent can easily be plugged into the agentic activity to run within that workflow.

Look for processes with:

  • Repetitive, rule-based decision points
  • Clear input and output parameters
  • Predictable task sequences

Core principle: Agents are not workflows

  • Consider agents as components of a workflow, not as a replacement for structured automation.
  • Overloading an agent with too many responsibilities leads to reduced accuracy, increased complexity, and maintainability issues.
  • The best practice is to minimize agent responsibilities by focusing on decision-making tasks rather than multi-step processing.

Decision framework: agent versus workflow

Use the following decision matrix to determine if a function should be handled by an agent or a workflow:
Table 1. When to use an agent versus when to use a workflow
CriteriaAgentWorkflow
Decision-making (route, classify, summarize)yesno
Structured data processing (extracting values from a contract)noyes
Multi-step automation (UI interactions, API calls)noyes
Unstructured data reasoning (interpreting ambiguous user input)yesno
Repetitive, rule-based actions (data validation, transformation)noyes
Domain-specific knowledge application yesno

Use English to build and train your agent

Use English as the standard language for building and training agents. LLMs may struggle with non-English characters—for example, Japanese and Chinese—when executing actions such as function calling. Additional language support depends on improvements in underlying LLMs.

Build small, specialized agentic tasks

By starting small, you create a controlled environment for learning agent behavior, understanding prompt engineering nuances, and establishing evaluation frameworks.

By breaking complex workflows into small, specialized agent tasks, you can:

  • Create plug-and-play agent configurations.
  • Rapidly adapt to changing business requirements.
  • Minimize implementation risks.
  • Scale your intelligent automation incrementally.

Provide clear names and descriptions for your tools

When defining tools for an AI agent, use descriptive, concise names that follow these guidelines:

  • Use lowercase, alphanumeric characters (a-z, 0-9)
  • Do not use spaces or special characters
  • The name should directly reflect the tool's function
  • Example tool names:
    • web_search for internet queries
    • code_interpreter for running code
    • document_analysis for parsing documents
    • data_visualization for creating charts

For details, go to the Tools section.

Involve humans to help the agent learn

Bring humans into the agentic loop to help review, approve, and validate the agent's output. You achieve this through escalations. This is crucial to help institute best practices around controlled agency, and to ensure the agent is operating according to plan from the provided instructions.

Use escalations to help inform agent memory. Agents learn from their interactions with tools and humans to help calibrate their plan and ongoing run execution. Each escalation is stored in agent memory, configurable from the escalations panel, and referred to before tools are called to help inform the function call and prevent similar escalations in the future. Agent Memory has two different scopes: design time, while testing an agent, and runtime, based on the specific published version of the agent in a deployed process.

For details, go to the Contexts and Escalations and Agent Memory sections.

Create well-structured instructions and iterate on them

Develop comprehensive instruction sets for AI agents, including clear role definitions, task breakdowns, and reasoning methodologies. Systematically iterate on these instructions, adjusting components to discover the minimal set that consistently produces high-quality, reliable agent behaviors.

For details, see .

Review traces and trace logs

Traces give a comprehensive view of the agent's run and what happened at each step of its loop. Traces provide a good way for you to review your agent's output, assess its plan, and iterate on its structure (such as instructions, tools, context used).

Trace logs are critical diagnostic tools for AI agents, offering:

  • Detailed step-by-step execution breakdown
  • Visibility into decision-making processes
  • Identification of potential failure points or inefficiencies

Regular trace review is essential because:

  • Agents evolve with changing requirements.
  • Unexpected behaviors can emerge over time.
  • Performance optimization requires continuous analysis.
  • Tool effectiveness may degrade or become obsolete.

For details, go to the Traces section.

Evaluate your agent

Create diverse evaluation sets

Agent evaluation requires extensive, representative datasets that challenge the system across multiple dimensions. These evaluation sets should simulate real-world complexity, incorporating variations in:

  • Input complexity
  • Contextual nuances
  • Domain-specific challenges
  • Potential edge cases and failure scenarios

Effective dataset development involves:

  • Consulting domain experts
  • Analyzing historical interaction logs
  • Systematically generating synthetic test cases
  • Incorporating adversarial examples
  • Ensuring statistical diversity

Evaluate multiple characteristics of your agent

Agent evaluation extends beyond simple accuracy measurements. Develop holistic evaluations that consider:

  • Accuracy and factual correctness
  • Reasoning transparency
  • Response creativity
  • Contextual relevance

For details, go to the Evaluations section.

Test your agent

Move beyond isolated agent testing by embedding evaluation processes within broader automation contexts. This means creating an agent and including it in an automation workflow using the Run Agent activity. This approach ensures agents perform reliably when interconnected with other systems.

Testing strategies should include:

  • End-to-end workflow simulations
  • Integration point stress testing
  • Cross-system communication validation
  • Performance under variable load conditions
  • Failure mode and recovery mechanism assessment

For details, see the Running the agent section.

Was this page helpful?

Get The Help You Need
Learning RPA - Automation Courses
UiPath Community Forum
Uipath Logo White
Trust and Security
© 2005-2025 UiPath. All rights reserved.