- Getting started
- UiPath Agents in Studio Web
- UiPath Agents in Agent Builder
- UiPath Coded agents

Agents user guide
Conversational agents are a new class of UiPath agents designed to engage in dynamic, multi-turn, and real-time dialogues with users. Unlike autonomous agents that respond to a single prompt, conversational agents interpret and react to a continuous stream of user messages. They manage conversation context, tool execution, human escalations, and memory, enabling a richer, more adaptive automation experience. Think of them as intelligent digital assistants that understand context and handle ambiguity naturally.
Conversational agents are particularly useful for scenarios that require:
- Ongoing clarification or back-and-forth exchange
- Personalized guidance based on user intent
- Seamless human fallback when confidence is low
Feature | Conversational agent | Autonomous agent |
---|---|---|
Interaction model | Multi-turn, back-and-forth dialogue | Single-turn, task execution based on an initial prompt |
Primary use case | Real-time user support and assistance, interactive information gathering | Executing a task from a defined prompt |
User input | Continuous user chat messages | Single structured prompt |
Core strength | Maintaining conversation and handling ambiguity | Executing a plan across tools |
When to use conversational agents
Use conversational agents when your automation scenario involves real-time, context-aware interaction. These agents are best suited for:
- Self-service experiences for customers or employees, such as helpdesk support or onboarding assistants.
- Interactive guidance through multi-step processes, forms, or decision trees.
- Contextual conversations where users may ask follow-up questions or provide information incrementally.
- Natural language interfaces for applications, systems, or knowledge bases, allowing users to query information conversationally.
Use autonomous agents instead when the task can be fully described in a single prompt with all required inputs provided upfront. Ideal examples include:
- Structured document processing (e.g., extracting data from invoices or contracts)
- Automated report generation based on predefined logic
- Summarization or transformation tasks with clear, one-shot requirements
With several chat experiences available, it's important to know which one to use and when.
Conversational Agents vs. Autopilot for Everyone:
- Working together: These two experiences work side-by-side. Conversational agents are not a replacement for Autopilot for Everyone.
- Different purposes: Think of Autopilot for Everyone as UiPath's general-purpose agent, optimized for productivity tasks and interacting with the UiPath platform. Conversational agents are specialists that you build for a specific use case (e.g., an HR policy assistant).
- Access: You can access your specialized conversational agents directly from within Autopilot for Everyone, making it a central hub for all your conversational needs.
Conversational Agents vs. specialized Autopilots:
- Overlap: Both are designed for use-case-specific purposes.
- Our recommendation: We recommend building with conversational agents. They provide a much richer and more robust design-time experience for building, testing, and refining your use-case-specific agent.
- Key difference: Conversational agents do not currently support file uploads or local desktop automation, whereas specialized autopilots do.
During the public preview, conversational agent executions do not consume Platform or Agent units. However, if the agent uses a paid service, such as a DeepRAG tool, it consumes the necessary units to make the call to that tool.
Official licensing details for conversational agents will become available with the 2025.10 general availability.
When designing a conversational agent, consider the following best practices:
- Start with a clear persona: Define the agent’s tone and scope (e.g., “You are a friendly HR assistant...”).
- Design for unpredictability: Users may provide incomplete or incorrect information. Handle ambiguity gracefully.
- Guide tool use: Ensure tool descriptions clearly state when and how to use them.
- Iterate with evaluations: Create test cases for both happy and unhappy paths. Update your agent logic accordingly.
Building a conversational agent follows a structured lifecycle that includes design, testing, deployment, and monitoring. The key steps are:
- Design the agent: Use Studio Web to define the agent’s system prompt, configure available tools, add context grounding, and set up escalation workflows.
- Test and evaluate: Use the built-in Debug chat to test multi-turn interactions. Add real or simulated conversations to evaluation sets to validate behavior and performance.
- Publish and deploy: Publish the agent as a solution package to Orchestrator. Ensure the solution folder includes both a serverless robot and an unattended robot for execution.
- Access and manage: Interact with the agent through Instance Management. Monitor runtime behavior, review trace logs, and iterate based on feedback.
You can create conversational agents using the same low-code designer in Studio Web as autonomous agents, with key differences tailored for real-time, multi-turn dialogue.
Create the agent
To get started:
- Go to studio.uipath.com.
- Select the Create New button, then select Agent.
- Select the Conversational agent type.
Optionally, describe your agent to Autopilot to generate a starter configuration.
Configure the system prompt
The system prompt defines the agent's persona, goals, behavioral constraints, and tool/escalation logic. Use it to instruct the agent on how to:
- Greet users.
- Handle unknown queries.
- Escalate issues or call tools.
- Maintain a consistent tone and style.
Conversational agents do not use user prompts or Data Manager inputs/outputs. All input is collected live during the conversation.
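As an illustration, a starter system prompt for a hypothetical HR assistant might look like the following. The wording and the tool name are examples, not a template shipped with the product:

```
You are a friendly HR assistant for Acme Corp.
- Greet new users by introducing yourself and listing the topics you can help with.
- If a question falls outside HR policy, say so and suggest contacting the HR team.
- Before calling the leave-balance tool, confirm the employee ID with the user.
- If you are still unsure of the user's intent after two clarifying questions, escalate to a human.
- Keep a warm, concise, professional tone in every reply.
```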
Configure tools
Tools allow agents to take action during a conversation, such as executing automations, running processes, or calling APIs. Supported tool types include: RPA workflows, API workflows, activities, and other agents (except conversational agents).
Use tool-level guardrails to enforce runtime policies. Guardrails apply during both test and runtime, and are visible in trace logs. For details, refer to Guardrails.
Configure contexts
Add Context Grounding indexes to give your agent access to specific knowledge sources. The agent can query these indexes to provide informed, citation-backed responses. For details, refer to Contexts.
Configure escalations and agent memory
Conversational agents support escalation workflows and agent memory to improve decision-making:
- Escalations allow the agent to hand off conversations to a human via Action Center when confidence is low or user intent is unclear. Escalations run synchronously, which means the agent pauses all further interaction until the escalation is resolved.
- Agent Memory enables the agent to remember and reuse previously resolved escalations, reducing redundancy and improving efficiency.
For details, refer to Escalations and Agent Memory.
Evaluate and test the agent
Evaluations help ensure your conversational agent behaves reliably across varied dialogue paths. The process is similar to evaluating an autonomous agent but is adapted for dialogue.
Use the Output panel to simulate real conversations. Select Test on cloud to run the agent in a chat-like environment, and interact with your agent using natural language.
See real-time execution logs
- Agent LLM calls and responses
- Tool calls, with arguments and final outputs
Add test cases
You can add test cases directly from the Output panel by selecting Add to evaluation set after a test run. An evaluation test is created for the conversation with:
- Conversation history: A record of the preceding turns in the dialogue.
- Current user message: The user's latest message in the conversation.
- Expected agent response.
This allows you to test how well the agent maintains context and handles follow-up questions, which is essential for a good conversational experience.
For each evaluation test, you can use the conversation builder interface to edit the conversation history and the current user prompt. A similar interface lets you define and refine the expected agent response, ensuring accurate test validation.
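Conceptually, each evaluation test bundles the three elements above. The following Python sketch shows such a test case as a plain dictionary; the field names are illustrative only, not the actual UiPath evaluation schema:

```python
# Illustrative structure only -- not the actual UiPath evaluation schema.
evaluation_test = {
    "conversation_history": [
        {"role": "user", "content": "How many vacation days do I have left?"},
        {"role": "agent", "content": "You have 12 vacation days remaining this year."},
    ],
    "current_user_message": "Can I carry them over to next year?",
    "expected_agent_response": "Up to 5 unused vacation days can be carried over.",
}

def prior_turns(test: dict) -> int:
    """Count the earlier turns the agent must keep in context for this test."""
    return len(test["conversation_history"])

print(prior_turns(evaluation_test))
```

Editing the `conversation_history` list corresponds to using the conversation builder interface to adjust the preceding turns, while `current_user_message` and `expected_agent_response` map to the other two fields described above.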
Accessing conversational agents
After you publish and deploy a conversational agent, you can interact with it through Instance Management, located in the Agents section of Automation Cloud.
Embedding conversational agents in UiPath Apps
You can also embed a conversational agent directly into a UiPath App using the IFrame component.
- Create and publish: First, ensure your conversational agent has been created and published.
- Add IFrame: Open your App in Studio and add an IFrame component to your page.
- Configure the URL: Set the IFrame's Source property to a URL constructed with the following format and parameters:
"https://<cloud_env>.uipath.com/<organization>/<tenant>/autopilotforeveryone_/conversational-agents/?agentId=<agent_id>&mode=embedded&title=<title>&welcomeTitle=<welcome_title>&welcomeDescription=<welcome_description>&suggestions=<suggestions>"
See the following table for details.
- Publish app: Publish your App. The agent is now embedded and ready to use!
Parameter | Required | Description |
---|---|---|
agentId | Yes | The Release ID of the published agent. To find it, navigate to Agents > Conversational agents, click "Chat now" on your agent, and copy the ID from the URL. |
mode | No | The display mode of the chat component. Set to embedded when using the IFrame component. The default is fullscreen. |
title | No | The title displayed in the chat component's header. Defaults to the agent's name. |
welcomeTitle | No | A title for the first-run welcome screen. Defaults to an empty string. |
welcomeDescription | No | A description for the first-run welcome screen. Defaults to an empty string. |
suggestions | No | An array of first-run suggested prompts for the user. Defaults to an empty array []. |
showHistory | No | A boolean (true or false) to control the visibility of the chat history panel. Defaults to true. |
Capability | Description |
---|---|
User prompt | User prompts are not required: These agents do not rely on predefined prompts to collect input. Instead, they consume messages in real time and respond accordingly, one turn at a time. |
Data manager | Data Manager is currently disabled. Since outputs are emitted dynamically by the conversational agent throughout the conversation, there is no need to configure output arguments. The ability to configure inputs, which would be high-level parameters to initialize a conversation, will be available in a future release. |
File uploads | You cannot upload files (e.g., PDFs, images) to the agent during a conversation. The ability to upload files will be available in a future release. |
Local desktop automation | The agent cannot run automations that interact with your local desktop (e.g., via Assistant). |
Personal connections | Tools cannot be run using your personal Integration Service connections. Only Shared connections can be used currently. |
Tool confirmation | The agent does not ask for confirmation before executing a tool. |
Voice interaction | You can only interact with the agent via text commands. Push-to-talk and two-way voice interaction will be available in preview in a future release. |
Agent health score | The Agent Score feature for performance evaluation is not yet available. |
Instance management | Advanced observability features for monitoring agent performance are not yet available. |
User feedback | You cannot provide feedback (e.g., thumbs up/down) on agent responses. |
SDKs | Headless and UI SDKs for embedding agents into third-party external applications are not yet available. |
Third-party integrations | Conversational agent access through surfaces like Slack, Microsoft Teams, or MSFT Copilot is not yet available. |
Licensing | Official licensing details will be finalized for the general availability release. |
Why are there so many Orchestrator jobs for conversational agents?
Each time a user starts a conversation with a conversational agent, a new Orchestrator job is created to handle that session. These jobs:
- Remain active for up to eight hours of inactivity.
- Automatically terminate if no further user input is received within that window.
- Help minimize start-up time by keeping the agent session active and responsive during ongoing conversations.
This behavior ensures the agent is always ready to respond without delay, while also optimizing resource usage during idle periods.
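The idle-timeout behavior described above can be sketched as a simple check. The eight-hour window comes from this page; the helper function itself is illustrative, not part of any UiPath API:

```python
from datetime import datetime, timedelta

# Sessions terminate after eight hours without user input (per the behavior above).
IDLE_WINDOW = timedelta(hours=8)

def session_expired(last_user_input: datetime, now: datetime) -> bool:
    """Return True when the Orchestrator job for this session should terminate."""
    return now - last_user_input >= IDLE_WINDOW

now = datetime(2025, 1, 1, 18, 0)
print(session_expired(datetime(2025, 1, 1, 9, 0), now))   # 9 hours idle
print(session_expired(datetime(2025, 1, 1, 12, 0), now))  # 6 hours idle
```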
Why won’t my conversational agent start from Instance Management?
If your conversational agent fails to start in Instance Management, check the following requirements:
- Make sure you have assigned both a serverless robot and an unattended robot to the solution folder that contains your agent.
- Ensure your tenant has sufficient Robot Units available to support runtime execution.
Without these resources, the agent won’t initialize or run when triggered from Instance Management.