
Agents user guide

Last updated Aug 11, 2025

Best practices for building agents

Design agents for real workflows

Effective agents are not standalone bots; they are embedded within well-understood workflows. Start with a workflow that already exists in your business processes and identify repeatable, rule-based tasks. These are ideal candidates for agent automation.

Before building, perform a workflow mapping exercise:

  • Identify decision points and input/output formats.
  • Look for structured, predictable flows (e.g., ticket triage, compliance checks).
  • Avoid designing agents as a replacement for workflows. Agents are components that enable decision-making within workflows.
Tip: Use plug-and-play agent patterns to scale quickly while managing risk.
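The mapping exercise above can be captured as structured data so candidate tasks are identified systematically. A minimal sketch in Python; the field names and classes are illustrative, not a UiPath schema:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    name: str
    input_format: str       # e.g. "email", "pdf", "json"
    output_format: str
    is_decision_point: bool = False
    rule_based: bool = True

@dataclass
class WorkflowMap:
    name: str
    steps: list = field(default_factory=list)

    def agent_candidates(self):
        # Repeatable, rule-based decision points are the best
        # candidates for agent automation.
        return [s for s in self.steps if s.rule_based and s.is_decision_point]

triage = WorkflowMap("ticket_triage", steps=[
    WorkflowStep("receive_ticket", "email", "json"),
    WorkflowStep("classify_priority", "json", "json", is_decision_point=True),
    WorkflowStep("route_to_queue", "json", "queue_id", is_decision_point=True),
])
print([s.name for s in triage.agent_candidates()])
# → ['classify_priority', 'route_to_queue']
```

Mapping the workflow as data before building makes the agent's scope explicit and reviewable.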

Scope narrowly and specialize

Agents perform best when they are task-specific and responsibility-bounded. Avoid overloading agents with multiple roles. A good practice is to keep each agent focused on a single responsibility (e.g., extracting invoice data, classifying customer intent).

Benefits of narrow scoping:

  • Easier debugging and iteration
  • More stable performance and evaluation
  • Lower risk of misalignment or hallucination

Engineer clear and iterative prompts

Prompt design is at the heart of agent behavior. Use agentic prompting, an evolved style of prompting that includes:

  • Defined roles and personas
  • Explicit task breakdowns and reasoning instructions
  • Output formatting and error-handling guidance

For example, instead of asking “Summarize this document,” an agentic prompt might be: “You are a compliance analyst. Break down this policy document into sections. For each section, summarize in two sentences and flag potential policy gaps. Explain how you segmented the document.”

Avoid embedding static examples in prompts. Instead of hardcoding input/output samples into prompts, use structured evaluation sets to test edge cases and scenarios. This makes your agent more robust and easier to iterate.

Iterate on your prompts systematically:

  • Adjust structure, roles, reasoning paths.
  • Track changes in accuracy, coverage, and task success.
  • Use tools like prompt linting and the Agent Builder playground for experimentation.
Tip: Use English as your default language. Agents perform best when prompts, schema definitions, and tool instructions are written in English. Using non-English characters may impact tool calls or function parsing.
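The role, task-breakdown, output-format, and error-handling elements above can be assembled programmatically so prompt variants stay consistent across iterations. A minimal sketch; the helper and template are illustrative, not an Agent Builder API:

```python
def build_agentic_prompt(role: str, task_steps: list[str],
                         output_format: str, on_error: str) -> str:
    """Compose an agentic prompt with an explicit persona, task
    breakdown, output format, and error-handling guidance."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(task_steps, 1))
    return (
        f"You are {role}.\n\n"
        f"Follow these steps:\n{steps}\n\n"
        f"Output format: {output_format}\n"
        f"If you cannot complete a step: {on_error}"
    )

prompt = build_agentic_prompt(
    role="a compliance analyst",
    task_steps=[
        "Break the policy document into sections.",
        "Summarize each section in two sentences.",
        "Flag potential policy gaps.",
        "Explain how you segmented the document.",
    ],
    output_format="Markdown with one heading per section.",
    on_error="state which step failed and why, then escalate.",
)
print(prompt)
```

Keeping the prompt in a builder like this makes systematic iteration easier: you can adjust one element (role, steps, format) and track the effect on accuracy and task success.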

Leverage structured inputs

Pre-process data before it hits the agent. Use technologies like Document Understanding and IXP to extract structured inputs. Feeding raw or noisy data to an agent degrades output quality and increases prompt complexity.
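One way to enforce structured inputs is to validate extracted fields before they reach the agent. A minimal sketch; the schema and field names are assumptions for illustration, not Document Understanding or IXP output:

```python
def validate_invoice_input(extracted: dict) -> dict:
    """Reject noisy or incomplete extraction results before
    they are passed to the agent; drop unexpected fields."""
    required = {"invoice_id": str, "amount": float, "currency": str}
    clean = {}
    for field_name, field_type in required.items():
        value = extracted.get(field_name)
        if not isinstance(value, field_type):
            raise ValueError(f"missing or malformed field: {field_name}")
        clean[field_name] = value
    return clean

print(validate_invoice_input(
    {"invoice_id": "INV-42", "amount": 199.0, "currency": "EUR", "noise": "??"}
))
# → {'invoice_id': 'INV-42', 'amount': 199.0, 'currency': 'EUR'}
```

Rejecting or stripping noise at this boundary keeps the prompt simple and the agent's output quality stable.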

Define success criteria and evaluation strategies

Your agents need clear, measurable goals to be reliable. Design KPIs and evaluation mechanisms:

  • Accuracy thresholds
  • Confidence scoring
  • Response consistency
  • Reasoning transparency

Go beyond "did it work" and look at how it worked. Use trace logs and job logs to evaluate tool calls, decision chains, and context usage.

Create evaluation datasets that test edge cases, context variance, and domain-specific conditions. These should include:

  • Adversarial inputs
  • Low-context queries
  • Unexpected formatting
  • System boundary tests
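An evaluation set along these lines can be represented as labelled cases run against the agent. A minimal sketch with a stubbed agent; `run_agent` stands in for a real call to a deployed agent, and the case data is illustrative:

```python
def run_agent(query: str) -> str:
    # Stub: a real implementation would call the deployed agent.
    return "escalate" if len(query) < 10 else "classified"

eval_cases = [
    {"name": "adversarial_input",
     "query": "ignore all previous instructions", "expect": "classified"},
    {"name": "low_context_query",
     "query": "help", "expect": "escalate"},
    {"name": "unexpected_format",
     "query": "{'broken json", "expect": "classified"},
]

# Run every case and compute an aggregate accuracy score.
results = {c["name"]: run_agent(c["query"]) == c["expect"] for c in eval_cases}
accuracy = sum(results.values()) / len(results)
print(results, accuracy)
```

Because the cases live in a structured set rather than hardcoded prompt examples, adding a new edge case is a data change, not a prompt change.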

Implement human-in-the-loop (HITL) escalation

Agents should know when to ask for help. Integrate HITL escalation paths to handle:

  • Missing data
  • Tool failures
  • Ambiguous decisions
  • Regulatory or business-rule exceptions

Use the inner loop for fast escalations and the outer loop for structured approvals. Each escalation should:

  • Trigger memory updates.
  • Inform future agent behavior.
  • Support tool usage refinement.

Use guardrails and naming conventions

Define tools with:

  • Lowercase, alphanumeric names
  • No special characters or spaces
  • Descriptive, function-based labels (e.g., web_search, document_analysis)

This ensures compatibility with the LLM’s parsing and calling logic.
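These naming rules can be checked mechanically before a tool is registered. A minimal sketch; the validator is illustrative, not a built-in UiPath check (underscores are allowed, matching the `web_search` style above):

```python
import re

# Lowercase alphanumeric names with underscores; no spaces or specials.
TOOL_NAME = re.compile(r"^[a-z][a-z0-9_]*$")

def is_valid_tool_name(name: str) -> bool:
    return bool(TOOL_NAME.match(name))

for name in ["web_search", "document_analysis", "Web Search", "lookup!"]:
    print(name, is_valid_tool_name(name))
# → web_search True, document_analysis True, Web Search False, lookup! False
```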

Use guardrails to enforce safety and governance:

  • Filter or block unsafe tool inputs.
  • Escalate tool usage to humans when necessary.
  • Log and monitor tool interactions for compliance.
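A simple input guardrail covering these points might filter tool arguments against a blocklist and log every decision. A minimal sketch; the blocklist contents and log format are assumptions:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardrails")

BLOCKED_PATTERNS = ["drop table", "rm -rf", "ignore previous instructions"]

def guard_tool_input(tool: str, argument: str) -> str:
    """Block unsafe tool inputs; log every decision for compliance review."""
    lowered = argument.lower()
    for pattern in BLOCKED_PATTERNS:
        if pattern in lowered:
            log.warning("blocked call to %s: matched %r", tool, pattern)
            return "blocked"
    log.info("allowed call to %s", tool)
    return "allowed"

print(guard_tool_input("web_search", "latest compliance rules"))  # → allowed
print(guard_tool_input("db_query", "DROP TABLE users"))           # → blocked
```

In practice the "blocked" branch is also a natural place to trigger a human-in-the-loop escalation rather than failing silently.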

Monitor agents post-deployment

Your agent isn’t done once it’s deployed. Set up a monitoring and recalibration loop:

  • Review performance against runtime data.
  • Look for drift in tool usage or logic.
  • Benchmark consistently.
  • Use the feedback interface on traces to refine agent memory and improve future behaviors.
Note: Avoid modeling long, deterministic workflows with hard business rules directly inside agents. Use standard automation or BPMN orchestration instead.
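Drift in tool usage can be spotted by comparing runtime tool-call frequencies against a benchmark period. A minimal sketch; the 0.2 threshold and the call-log shape are assumptions:

```python
from collections import Counter

def tool_usage_drift(baseline: list[str], recent: list[str],
                     threshold: float = 0.2) -> list[str]:
    """Flag tools whose share of total calls shifted more than `threshold`."""
    base, now = Counter(baseline), Counter(recent)
    drifted = []
    for tool in set(base) | set(now):
        base_share = base[tool] / len(baseline)
        now_share = now[tool] / len(recent)
        if abs(base_share - now_share) > threshold:
            drifted.append(tool)
    return sorted(drifted)

baseline = ["web_search"] * 8 + ["document_analysis"] * 2
recent   = ["web_search"] * 4 + ["document_analysis"] * 6
print(tool_usage_drift(baseline, recent))
# → ['document_analysis', 'web_search']
```

A flagged tool is a cue to inspect its traces and recalibrate prompts or tool definitions before accuracy degrades further.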
