- Getting started
- UiPath Agents in Studio Web
- About UiPath Agents
- Licensing
- Building an agent in Studio Web
- Agents vs. workflows
- Best practices for building agents
- Best practices for context engineering
- Best practices for publishing and deploying agents
- Prompts
- Contexts
- Escalations and Agent Memory
- Evaluations
- Agent traces
- Agent score
- Managing UiPath agents
- UiPath Agents in Agent Builder
- UiPath Coded agents

Agents user guide
Effective agents are not standalone bots, they are embedded within well-understood workflows. Start with a workflow that already exists in your business processes and identify repeatable, rule-based tasks. These are ideal candidates for agent automation.
Before building, perform a workflow mapping exercise:
- Identify decision points and input/output formats.
- Look for structured, predictable flows (e.g., ticket triage, compliance checks).
- Avoid designing agents as a replacement for workflows. Agents are components that enable decision-making within workflows.
Agents perform best when they are task-specific and responsibility-bounded. Avoid overloading agents with multiple roles. A good practice is to keep each agent focused on a single responsibility (e.g., extracting invoice data, classifying customer intent).
Benefits of narrow scoping:
- Easier debugging and iteration
- More stable performance and evaluation
- Lower risk of misalignment or hallucination
Prompt design is at the heart of agent behavior. Use agentic prompting, an evolved style of prompting that includes:
- Defined roles and personas
- Explicit task breakdowns and reasoning instructions
- Output formatting and error-handling guidance
For example, instead of asking “Summarize this document,” an agentic prompt might be: ""You are a compliance analyst. Break down this policy document into sections. For each section, summarize in two sentences and flag potential policy gaps. Explain how you segmented the document."
Avoid embedding static examples in prompts. Instead of hardcoding input/output samples into prompts, use structured evaluation sets to test edge cases and scenarios. This makes your agent more robust and easier to iterate.
Iterate on your prompts systematically:
- Adjust structure, roles, reasoning paths.
- Track changes in accuracy, coverage, and task success.
- Use tools like prompt linting and the Agent Builder playground for experimentation.
Pre-process data before it hits the agent. Use technologies like Document Understanding and IXP to extract structured inputs. Feeding raw or noisy data to an agent degrades output quality and increases prompt complexity.
Your agents need clear, measurable goals to be reliable. Design KPIs and evaluation mechanisms:
- Accuracy thresholds
- Confidence scoring
- Response consistency
- Reasoning transparency
Go beyond "did it work" and look at how it worked. Use trace logs and job logs to evaluate tool calls, decision chains, and context usage.
Create evaluation datasets that test edge cases, context variance, and domain-specific conditions. These should include:
- Adversarial inputs
- Low-context queries
- Unexpected formatting
- System boundary tests
Agents should know when to ask for help. Integrate HITL escalation paths to handle:
- Missing data
- Tool failures
- Ambiguous decisions
- Regulatory or business-rule exceptions
Use the inner loop for fast escalations and the outer loop for structured approvals. Each escalation should:
- Trigger memory updates.
- Inform future agent behavior.
- Support tool usage refinement.
Define tools with:
- Lowercase, alphanumeric names
- No special characters or spaces
- Descriptive, function-based labels
(e.g.,
web_search
,document_analysis
)
This ensures compatibility with the LLM’s parsing and calling logic.
Use guardrails to enforce safety and governance:
- Filter or block unsafe tool inputs.
- Escalate tool usage to humans when necessary.
- Log and monitor tool interactions for compliance.
Your agent isn’t done once it’s deployed. Set up a monitoring and recalibration loop:
- Review performance against runtime data.
- Look for drift in tool usage or logic.
- Benchmark consistently.
- Use the feedback interface on traces to refine agent memory and improve future behaviors.
- Best practices for building agents
- Design agents for real workflows
- Scope narrowly and specialize
- Engineer clear and iterative prompts
- Leverage structured inputs
- Define success criteria and evaluation strategies
- Implement human-in-the-loop (HITL) escalation
- Use guardrails and naming conventions
- Monitor agents post-deployment