
Agents user guide

Last updated Aug 25, 2025

Conversational agents

Note: Conversational agents are available in public preview.

About UiPath conversational agents

Conversational agents are a new class of UiPath agents designed to engage in dynamic, multi-turn, and real-time dialogues with users. Unlike autonomous agents that respond to a single prompt, conversational agents interpret and react to a continuous stream of user messages. They manage conversation context, tool execution, human escalations, and memory, enabling a richer, more adaptive automation experience. Think of them as intelligent digital assistants that understand context and handle ambiguity naturally.
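
Conceptually, this interaction model is a loop that carries conversation state from one turn to the next. The sketch below is a minimal illustration of that loop, not UiPath's implementation; get_user_message, llm_respond, run_tool, and escalate_to_human are hypothetical placeholders for pieces the platform manages for you.

```python
# Minimal sketch of a conversational agent loop (illustrative only; not UiPath's implementation).
# get_user_message, llm_respond, run_tool, and escalate_to_human are hypothetical placeholders.

def handle_conversation(get_user_message, llm_respond, run_tool, escalate_to_human):
    history = []  # conversation context carried across turns
    while True:
        message = get_user_message()          # the agent reacts to a continuous stream of messages
        if message is None:                   # user ended the conversation
            break
        history.append({"role": "user", "content": message})

        decision = llm_respond(history)       # LLM decides: answer, call a tool, or escalate
        if decision["type"] == "tool_call":
            result = run_tool(decision["tool"], decision["arguments"])
            history.append({"role": "tool", "content": result})
            decision = llm_respond(history)   # let the LLM incorporate the tool output
        elif decision["type"] == "escalation":
            # the conversation pauses until a human resolves the escalation
            decision = {"type": "answer", "content": escalate_to_human(history)}

        history.append({"role": "assistant", "content": decision["content"]})
        yield decision["content"]             # reply for this turn; the loop continues with context intact
```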

Conversational agents are particularly useful for scenarios that require:

  • Ongoing clarification or back-and-forth exchange
  • Personalized guidance based on user intent
  • Seamless human fallback when confidence is low

Table 1. Key differences from autonomous agents

Feature           | Conversational agent                                                      | Autonomous agent
Interaction model | Multi-turn, back-and-forth dialogue                                       | Single-turn task execution based on an initial prompt
Primary use case  | Real-time user support and assistance, interactive information gathering  | Executing a task from a defined prompt
User input        | Continuous user chat messages                                             | Single structured prompt
Core strength     | Maintaining conversation and handling ambiguity                           | Executing a plan across tools

When to use conversational agents

Use conversational agents when your automation scenario involves real-time, context-aware interaction. These agents are best suited for:

  • Self-service experiences for customers or employees, such as helpdesk support or onboarding assistants.
  • Interactive guidance through multi-step processes, forms, or decision trees.
  • Contextual conversations where users may ask follow-up questions or provide information incrementally.
  • Natural language interfaces for applications, systems, or knowledge bases, allowing users to query information conversationally.

Use autonomous agents instead when the task can be fully described in a single prompt with all required inputs provided upfront. Ideal examples include:

  • Structured document processing (e.g., extracting data from invoices or contracts)
  • Automated report generation based on predefined logic
  • Summarization or transformation tasks with clear, one-shot requirements

Licensing

During the public preview, conversational agent executions do not consume Platform or Agentic units. However, if the agent uses a paid service, such as a DeepRAG tool, it consumes the necessary units to make the call to that tool.

Official licensing details for conversational agents will become available with the 2025.10 general availability release.

Note: Starting a conversation with a conversational agent triggers an Orchestrator process dedicated to that chat. This process always appears as Running so it can respond to your messages immediately. However, it only consumes resources when you actually send a message. While idle and waiting, it does not use any resources.

Best practices

When designing a conversational agent, consider the following best practices:

  1. Start with a clear persona: Define the agent’s tone and scope (e.g., “You are a friendly HR assistant...”).
  2. Design for unpredictability: Users may provide incomplete or incorrect information. Handle ambiguity gracefully.
  3. Guide tool use: Ensure tool descriptions clearly state when and how to use them.
  4. Iterate with evaluations: Create test cases for both happy and unhappy paths. Update your agent logic accordingly.

Getting started with conversational agents

Building a conversational agent follows a structured lifecycle that includes design, testing, deployment, and monitoring. The key steps are:

  1. Design the agent: Use Studio Web to define the agent’s system prompt, configure available tools, add context grounding, and set up escalation workflows.
  2. Test and evaluate: Use the built-in Debug chat to test multi-turn interactions. Add real or simulated conversations to evaluation sets to validate behavior and performance.
  3. Publish and deploy: Publish the agent as a solution package to Orchestrator. Ensure the solution folder includes both a serverless robot and an unattended robot for execution.
  4. Access and manage: Interact with the agent through Instance Management. Monitor runtime behavior, review trace logs, and iterate based on feedback.

Building a conversational agent

You can create conversational agents in the same low-code Studio Web designer used for autonomous agents, with key differences tailored to real-time, multi-turn dialogue.

Create the agent

To get started:

  1. Go to studio.uipath.com.
  2. Select the Create New button, then select Agent.
  3. Select the Conversational agent type.

    Optionally, describe your agent to Autopilot to generate a starter configuration.

Figure 1. Creating a new conversational agent

Configure the system prompt

The system prompt defines the agent's persona, goals, behavioral constraints, and tool/escalation logic. Use it to instruct the agent on how to:

  • Greet users.
  • Handle unknown queries.
  • Escalate issues or call tools.
  • Maintain a consistent tone and style.
Tip: Autopilot can help generate an effective starting prompt based on your use case.

Conversational agents do not use user prompts or Data Manager inputs/outputs. All input is collected live during the conversation.
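
For illustration only (not a prescribed template), a system prompt for a helpdesk-style agent might combine these elements as follows; the scope, tool reference, and escalation rules are invented for the example:

```text
You are a friendly IT helpdesk assistant for internal employees.
- Greet users politely and keep responses concise and professional.
- Answer questions about password resets, VPN access, and software requests using the
  knowledge sources provided to you.
- If you are not confident in an answer, say so and ask a clarifying question instead of guessing.
- Use the "Reset password" tool only after confirming the employee's ID.
- If the user reports a security incident, or the issue remains unresolved after two attempts,
  escalate to a human agent.
```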

Configure tools

Tools allow agents to take action during a conversation, such as executing automations, running processes, or calling APIs. Supported tool types include: RPA workflows, API workflows, activities, and other agents (except conversational agents).
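
For example, a clearly described tool might be configured along these lines; the tool, its description, and its argument are invented for illustration and do not represent a specific UiPath schema:

```text
Tool name:    Get order status
Tool type:    API workflow
Description:  Looks up the current status of a customer order. Use this tool whenever the user
              asks where an order is or when it will arrive. Requires an order number; if the
              user has not provided one, ask for it before calling the tool.
Argument:     orderNumber (string, required)
```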

Tip: For workflows that perform API calls exclusively, we recommend using API workflows for best performance in real-time chats.

Use tool-level guardrails to enforce runtime policies. Guardrails apply during both test and runtime, and are visible in trace logs. For details, refer to Guardrails.

Configure contexts

Add Context Grounding indexes to give your agent access to specific knowledge sources. The agent can query these indexes to provide informed, citation-backed responses. For details, refer to Contexts.

Configure escalations and agent memory

Conversational agents support escalation workflows and agent memory to improve decision-making:

  • Escalations allow the agent to hand off conversations to a human via Action Center when confidence is low or user intent is unclear. Conversations run synchronously, which means the agent pauses all further interaction until the escalation is resolved.
  • Agent Memory enables the agent to remember and reuse previously resolved escalations, reducing redundancy and improving efficiency.

For details, refer to Escalations and Agent Memory.

Evaluate and test the agent

Evaluations help ensure your conversational agent behaves reliably across varied dialogue paths. The process is similar to evaluating an autonomous agent but is adapted for dialogue.

Use the Output panel to simulate real conversations. Select Test on cloud to run the agent in a chat-like environment, and interact with your agent using natural language.

See real-time execution logs

On the right-hand side of the chat, expand the full execution trace, which provides real-time tracing of the agent execution. It shows details such as:
  • Agent LLM calls and responses
  • Tool calls, with arguments and final outputs

Add test cases

You can add test cases directly from the Output panel by selecting Add to evaluation set after a test run. An evaluation test is created for the conversation with:

  • Conversation history: A record of the preceding turns in the dialogue.
  • Current user message: The user's latest message in the conversation.
  • Expected agent response: The response the agent should return for that message.

This allows you to test how well the agent maintains context and handles follow-up questions, which is essential for a good conversational experience.
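
For example, a single evaluation test might look like the following; the dialogue is invented for illustration, and the three fields mirror the structure listed above:

```text
Conversation history:
  User:  Hi, I can't connect to the VPN.
  Agent: Sorry to hear that. Are you connecting from home or from an office?
Current user message:
  From home, on my company laptop.
Expected agent response:
  The agent asks whether the VPN client has been restarted and offers to open a helpdesk
  ticket if the problem persists.
```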

Figure 2. Creating evaluation sets

For each evaluation test, you can use the conversation builder interface to edit the conversation history and the current user prompt. A similar interface lets you define and refine the expected agent response, ensuring accurate test validation.

Figure 3. The conversation builder window while editing an evaluation test

Figure 4. You can import a conversation from the Debug Chat by selecting Add to evaluation set

Accessing conversational agents

After you publish and deploy a conversational agent, you can interact with it through Instance Management, located in the Agents section of Automation Cloud.

Figure 5. Agents Instance Management

FAQs

Why don’t conversational agents use Data Manager arguments or user prompts?

Conversational agents are designed for live, multi-turn interactions, where user input is provided incrementally during the conversation. As a result:

  • User prompts are not required: These agents do not rely on predefined prompts to collect input. Instead, they consume messages in real time and respond accordingly, one turn at a time.
  • Data Manager is currently disabled: Outputs are emitted dynamically by the conversational agent throughout the conversation, so there is no need to configure output arguments. The ability to configure inputs, which would be high-level parameters to initialize a conversation, will be added in a future release.

Why are there so many Orchestrator jobs for conversational agents?

Each time a user starts a conversation with a conversational agent, a new Orchestrator job is created to handle that session. These jobs:

  • Remain active during periods of inactivity for up to eight hours.
  • Automatically terminate if no further user input is received within that window.
  • Help minimize start-up time by keeping the agent session active and responsive during ongoing conversations.

This behavior ensures the agent is always ready to respond without delay, while also optimizing resource usage during idle periods.
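
If you want to inspect these jobs programmatically, the standard Orchestrator OData Jobs endpoint can list them. The following sketch assumes you already have an access token and the ID of the solution folder; the URL, token, and folder ID are placeholders, and you should verify field names and filters against your Orchestrator version:

```python
# Sketch: listing running jobs through the Orchestrator OData API to see active
# conversational-agent sessions. The URL, token, and folder ID below are placeholders.
import requests

ORCHESTRATOR_URL = "https://cloud.uipath.com/<org>/<tenant>/orchestrator_"  # placeholder
ACCESS_TOKEN = "<bearer token>"          # placeholder: obtain via your identity/OAuth flow
FOLDER_ID = "<solution folder id>"       # placeholder: numeric ID of the solution folder

response = requests.get(
    f"{ORCHESTRATOR_URL}/odata/Jobs",
    params={"$filter": "State eq 'Running'", "$top": 50},
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        # Scopes the query to the folder that hosts the conversational agent.
        "X-UIPATH-OrganizationUnitId": FOLDER_ID,
    },
    timeout=30,
)
response.raise_for_status()
for job in response.json().get("value", []):
    print(job.get("ReleaseName"), job.get("State"), job.get("StartTime"))
```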

How can I use conversational agents outside of Instance Management?

Beyond Instance Management, conversational agents will also become accessible in:

  • UiPath Apps: Conversational agents will be embeddable in your applications using the iFrame component in a future release.
  • Autopilot for Everyone: Conversational agents will be supported as part of a future Autopilot for Everyone release.

Why won’t my conversational agent start from Instance Management?

If your conversational agent fails to start in Instance Management, check the following requirements:

  • Make sure you have assigned both a serverless robot and an unattended robot to the solution folder that contains your agent.
  • Ensure your tenant has sufficient Robot Units available to support runtime execution.

Without these resources, the agent won’t initialize or run when triggered from Instance Management.
