
Agents user guide

Last updated Sep 25, 2025

Conversational agents

Note: Conversational agents are available in public preview.

About UiPath conversational agents

Conversational agents are a new class of UiPath agents designed to engage in dynamic, multi-turn, and real-time dialogues with users. Unlike autonomous agents that respond to a single prompt, conversational agents interpret and react to a continuous stream of user messages. They manage conversation context, tool execution, human escalations, and memory, enabling a richer, more adaptive automation experience. Think of them as intelligent digital assistants that understand context and handle ambiguity naturally.

Conversational agents are particularly useful for scenarios that require:

  • Ongoing clarification or back-and-forth exchange
  • Personalized guidance based on user intent
  • Seamless human fallback when confidence is low
Table 1. Key differences from autonomous agents

Feature | Conversational agent | Autonomous agent
Interaction model | Multi-turn, back-and-forth dialogue | Single-turn task execution based on an initial prompt
Primary use case | Real-time user support and assistance, interactive information gathering | Executing a task from a defined prompt
User input | Continuous user chat messages | Single structured prompt
Core strength | Maintaining conversation and handling ambiguity | Executing a plan across tools

When to use conversational agents

Use conversational agents when your automation scenario involves real-time, context-aware interaction. These agents are best suited for:

  • Self-service experiences for customers or employees, such as helpdesk support or onboarding assistants.
  • Interactive guidance through multi-step processes, forms, or decision trees.
  • Contextual conversations where users may ask follow-up questions or provide information incrementally.
  • Natural language interfaces for applications, systems, or knowledge bases, allowing users to query information conversationally.

Use autonomous agents instead when the task can be fully described in a single prompt with all required inputs provided upfront. Ideal examples include:

  • Structured document processing (e.g., extracting data from invoices or contracts)
  • Automated report generation based on predefined logic
  • Summarization or transformation tasks with clear, one-shot requirements
How do conversational agents relate to Autopilot for Everyone?

With several chat experiences available, it's important to know which one to use and when.

Conversational Agents vs. Autopilot for Everyone:

  • Working together: These two experiences work side-by-side. Conversational agents are not a replacement for Autopilot for Everyone.
  • Different purposes: Think of Autopilot for Everyone as UiPath's general-purpose agent, optimized for productivity tasks and interacting with the UiPath platform. Conversational agents are specialists that you build for a specific use case (e.g., an HR policy assistant).
  • Access: You can access your specialized conversational agents directly from within Autopilot for Everyone, making it a central hub for all your conversational needs.

Conversational Agents vs. specialized Autopilots:

  • Overlap: Both are designed for use-case-specific purposes.

  • Our recommendation: We recommend building with conversational agents. They provide a much richer and more robust design-time experience for building, testing, and refining your use-case-specific agent.

  • Key difference: Conversational agents do not currently support file uploads or local desktop automation, whereas specialized autopilots do.

Licensing

During the public preview, conversational agent executions do not consume Platform or Agent units. However, if the agent uses a paid service, such as a DeepRAG tool, it consumes the necessary units to make the call to that tool.

Official licensing details for conversational agents will become available with the 2025.10 general availability.

Note: Starting a conversation with a conversational agent triggers an Orchestrator process dedicated to that chat. This process always appears as Running so it can respond to your messages immediately. However, it only consumes resources when you actually send a message. While idle and waiting, it does not use any resources.

Best practices

When designing a conversational agent, consider the following best practices:

  1. Start with a clear persona: Define the agent’s tone and scope (e.g., “You are a friendly HR assistant...”).
  2. Design for unpredictability: Users may provide incomplete or incorrect information. Handle ambiguity gracefully.
  3. Guide tool use: Ensure tool descriptions clearly state when and how to use them.
  4. Iterate with evaluations: Create test cases for both happy and unhappy paths. Update your agent logic accordingly.

Getting started with conversational agents

Building a conversational agent follows a structured lifecycle that includes design, testing, deployment, and monitoring. The key steps are:

  1. Design the agent: Use Studio Web to define the agent’s system prompt, configure available tools, add context grounding, and set up escalation workflows.
  2. Test and evaluate: Use the built-in Debug chat to test multi-turn interactions. Add real or simulated conversations to evaluation sets to validate behavior and performance.
  3. Publish and deploy: Publish the agent as a solution package to Orchestrator. Ensure the solution folder includes both a serverless robot and an unattended robot for execution.
  4. Access and manage: Interact with the agent through Instance Management. Monitor runtime behavior, review trace logs, and iterate based on feedback.

Building a conversational agent

You can create conversational agents using the same low-code designer in Studio Web as autonomous agents, with key differences tailored for real-time, multi-turn dialogue.

Create the agent

To get started:

  1. Go to studio.uipath.com.
  2. Select the Create New button, then select Agent.
  3. Select the Conversational agent type.

    Optionally, describe your agent to Autopilot to generate a starter configuration.

Figure 1. Creating a new conversational agent

Configure the system prompt

The system prompt defines the agent's persona, goals, behavioral constraints, and tool/escalation logic. Use it to instruct the agent on how to:

  • Greet users.
  • Handle unknown queries.
  • Escalate issues or call tools.
  • Maintain a consistent tone and style.
Tip: Autopilot can help generate an effective starting prompt based on your use case.

Conversational agents do not use user prompts or Data Manager inputs/outputs. All input is collected live during the conversation.
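For illustration, a minimal system prompt for a hypothetical HR policy assistant might look like the following. The company name and the specific rules are illustrative, not a required template:

```
You are a friendly HR policy assistant for Contoso.
- Greet users warmly and ask how you can help.
- Answer questions using the HR policy knowledge index, and cite the source document.
- If a question is outside the scope of HR policy, say so and suggest contacting the HR team.
- If you are unsure, or the user requests an exception to policy, escalate to a human via Action Center.
- Keep a professional, concise tone.
```

Note how the prompt covers all four areas listed above: greeting, unknown queries, escalation, and tone.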

Configure tools

Tools allow agents to take action during a conversation, such as executing automations, running processes, or calling APIs. Supported tool types include: RPA workflows, API workflows, activities, and other agents (except conversational agents).

Tip: For workflows that perform API calls exclusively, we recommend using API workflows for best performance in real-time chats.

Use tool-level guardrails to enforce runtime policies. Guardrails apply during both test and runtime, and are visible in trace logs. For details, refer to Guardrails.

Configure contexts

Add Context Grounding indexes to give your agent access to specific knowledge sources. The agent can query these indexes to provide informed, citation-backed responses. For details, refer to Contexts.

Configure escalations and agent memory

Conversational agents support escalation workflows and agent memory to improve decision-making:

  • Escalations allow the agent to hand off conversations to a human via Action Center when confidence is low or user intent is unclear. Escalations run synchronously, which means the agent pauses all further interaction until the escalation is resolved.
  • Agent Memory enables the agent to remember and reuse previously resolved escalations, reducing redundancy and improving efficiency.

For details, refer to Escalations and Agent Memory.

Evaluate and test the agent

Evaluations help ensure your conversational agent behaves reliably across varied dialogue paths. The process is similar to evaluating an autonomous agent but is adapted for dialogue.

Use the Output panel to simulate real conversations. Select Test on cloud to run the agent in a chat-like environment, and interact with your agent using natural language.

See real-time execution logs

On the right-hand side of the chat, expand the full execution trace, which provides real-time tracing of the agent execution. It shows details such as:
  • Agent LLM calls and responses
  • Tool calls, with arguments and final outputs

Add test cases

You can add test cases directly from the Output panel by selecting Add to evaluation set after a test run. An evaluation test is created for the conversation with:

  • Conversation history: A record of the preceding turns in the dialogue.
  • Current user message: The user's latest message in the conversation.
  • Expected agent response.

This allows you to test how well the agent maintains context and handles follow-up questions, which is essential for a good conversational experience.
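Conceptually, each evaluation test captures the three elements listed above. The following sketch uses hypothetical field names and sample HR content purely for illustration; the actual format stored by Studio Web is internal and may differ:

```
{
  "conversationHistory": [
    { "role": "user", "content": "What is the vacation policy?" },
    { "role": "agent", "content": "Full-time employees receive 25 days of paid vacation per year." }
  ],
  "currentUserMessage": "Does that include public holidays?",
  "expectedAgentResponse": "No, public holidays are granted in addition to the 25 vacation days."
}
```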

Figure 2. Creating evaluation sets

For each evaluation test, you can use the conversation builder interface to edit the conversation history and the current user prompt. A similar interface lets you define and refine the expected agent response, ensuring accurate test validation.

Figure 3. The conversation builder window while editing an evaluation test

Figure 4. You can import a conversation from the Debug Chat by selecting Add to evaluation set

Accessing conversational agents

After you publish and deploy a conversational agent, you can interact with it through Instance Management, located in the Agents section of Automation Cloud.

Figure 5. Agents Instance Management

Embedding conversational agents in UiPath Apps

You can also embed a conversational agent directly into a UiPath App using the IFrame component.

  1. Create and publish: First, ensure your conversational agent has been created and published.
  2. Add IFrame: Open your App in Studio and add an IFrame component to your page.
  3. Configure the URL: Set the IFrame's Source property to a URL constructed with the following format and parameters: "https://<cloud_env>.uipath.com/<organization>/<tenant>/autopilotforeveryone_/conversational-agents/?agentId=<agent_id>&mode=embedded&title=<title>&welcomeTitle=<welcome_title>&welcomeDescription=<welcome_description>&suggestions=<suggestions>"

    See the following table for details.

  4. Publish app: Publish your App. The agent is now embedded and ready to use!
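The URL construction in step 3 can be sketched as follows. This is an illustrative helper, not part of any UiPath SDK; `buildAgentEmbedUrl` and the sample values are assumptions, and you must substitute the cloud environment, organization, tenant, and agent ID from your own environment:

```typescript
// Hypothetical helper that assembles the embed URL in the format documented above.
// URLSearchParams handles percent-encoding of spaces and special characters.
function buildAgentEmbedUrl(opts: {
  cloudEnv: string;       // e.g. "cloud"
  organization: string;
  tenant: string;
  agentId: string;        // the Release ID of the published agent
  mode?: "embedded" | "fullscreen";
  title?: string;
  suggestions?: string[]; // serialized as a JSON array of strings
}): string {
  const base =
    `https://${opts.cloudEnv}.uipath.com/${opts.organization}/${opts.tenant}` +
    `/autopilotforeveryone_/conversational-agents/`;
  const params = new URLSearchParams({ agentId: opts.agentId });
  if (opts.mode) params.set("mode", opts.mode);
  if (opts.title) params.set("title", opts.title);
  if (opts.suggestions) params.set("suggestions", JSON.stringify(opts.suggestions));
  return `${base}?${params.toString()}`;
}
```

The remaining parameters (welcomeTitle, welcomeDescription, showHistory) follow the same pattern.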
Table 2. URL parameters

Parameter | Required | Description
agentId | Yes | The Release ID of the published agent. To find it, navigate to Agents > Conversational agents, select Chat now on your agent, and copy the ID from the URL.
mode | No | Set to embedded to optimize as a right-rail experience within the IFrame, or to fullscreen to optimize as a full-screen experience within the IFrame. The default is fullscreen.
title | No | The title displayed in the chat component's header. Defaults to the agent's name.
welcomeTitle | No | A title for the first-run welcome screen. Defaults to an empty string.
welcomeDescription | No | A description for the first-run welcome screen. Defaults to an empty string.
suggestions | No | An array of first-run suggested prompts for the user. Defaults to an empty array ([]). See the note below for quoting rules.
showHistory | No | A boolean (true or false) that controls the visibility of the chat history panel. Defaults to true.

Note:

  • To test directly in the browser, wrap each suggestion in single double quotes. Example: ["Hi, what can you do", "Hello, how are you"]
  • For Apps embedding, double the inner double quotes to escape them; otherwise UiPath Apps throws a validation error. Example: [""Hi, what can you do"", ""Hello, how are you""]

Limitations

Conversational agents are currently in public preview. We are actively working on adding new capabilities. Please be aware of the following limitations.
Capability | Description
User prompt | User prompts are not required: these agents do not rely on predefined prompts to collect input. Instead, they consume messages in real time and respond accordingly, one turn at a time.
Data manager | Data Manager is currently disabled. Because outputs are emitted dynamically by the conversational agent throughout the conversation, there is no need to configure output arguments. The ability to configure inputs (high-level parameters to initialize a conversation) will be available in a future release.
File uploads | You cannot upload files (e.g., PDFs, images) to the agent during a conversation. The ability to upload files will be available in a future release.
Local desktop automation | The agent cannot run automations that interact with your local desktop (e.g., via Assistant).
Personal connections | Tools cannot be run using your personal Integration Service connections. Only shared connections can currently be used.
Tool confirmation | The agent does not ask for confirmation before executing a tool.
Voice interaction | You can only interact with the agent via text. Push-to-talk and two-way voice interaction will be available in preview in a future release.
Agent health score | The Agent Score feature for performance evaluation is not yet available.
Instance management | Advanced observability features for monitoring agent performance are not yet available.
User feedback | You cannot provide feedback (e.g., thumbs up/down) on agent responses.
SDKs | Headless and UI SDKs for embedding agents into third-party external applications are not yet available.
Third-party integrations | Conversational agent access through surfaces like Slack, Microsoft Teams, or Microsoft Copilot is not yet available.
Licensing | Official licensing details will be finalized for the general availability release.

FAQs

Why are there so many Orchestrator jobs for conversational agents?

Each time a user starts a conversation with a conversational agent, a new Orchestrator job is created to handle that session. These jobs:

  • Remain active for up to eight hours of inactivity.

  • Automatically terminate if no further user input is received within that window.

  • Help minimize start-up time by keeping the agent session active and responsive during ongoing conversations.

This behavior ensures the agent is always ready to respond without delay, while also optimizing resource usage during idle periods.

Why won’t my conversational agent start from Instance Management?

If your conversational agent fails to start in Instance Management, check the following requirements:

  • Make sure you have assigned both a serverless robot and an unattended robot to the solution folder that contains your agent.

  • Ensure your tenant has sufficient Robot Units available to support runtime execution.

Without these resources, the agent won’t initialize or run when triggered from Instance Management.
