Automation Cloud admin guide
This feature is available with the following licensing plans:
- Unified Pricing: Enterprise Platform, Standard Platform, Basic Platform, App Test Platform Enterprise, App Test Platform Standard
- Flex: Advanced Platform, Flex Standard Platform
The LLM configurations tab allows you to integrate your existing AI subscriptions while maintaining the governance framework provided by UiPath. You can:
- Replace UiPath LLM subscription: Replace UiPath-managed subscriptions with your own, provided they match the same model family and version already supported by the UiPath product. This allows for seamless swapping of UiPath-managed models with your subscribed models.
- Add your own LLM: Use any LLM that meets the product's compatibility criteria. To ensure smooth integration, your chosen LLM must pass a series of tests initiated through a probe call before it can be used within the UiPath ecosystem.
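For illustration, the probe call mentioned in the previous list is conceptually similar to the sketch below: a minimal chat completion request sent to your endpoint, here assuming an OpenAI V1-compatible API. The endpoint URL, API key, and model name are placeholders, and the validation UiPath actually performs is more extensive than this single call.

```python
# A minimal sketch of a compatibility probe, assuming an OpenAI
# V1-compatible endpoint. URL, key, and model name are placeholders;
# the real validation runs more tests than this single call.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-llm-endpoint.example.com/v1",  # your deployment
    api_key="YOUR_API_KEY",
)

try:
    response = client.chat.completions.create(
        model="your-model-name",
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=8,
    )
    print("Probe succeeded:", response.choices[0].message.content)
except Exception as exc:
    print("Probe failed; the model cannot be added:", exc)
```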
Configuring LLMs preserves most of the governance benefits of the AI Trust Layer, including policy enforcement via Automation Ops and detailed audit logs. However, model governance policies are specifically designed for UiPath-managed LLMs. This means that if you disable a particular model through an AI Trust Layer policy, the restriction only applies to the UiPath-managed version of that model. Your own configured models of the same type remain unaffected.
When leveraging the option to use your own LLM or subscription, keep the following points in mind:
- Compatibility requirements: Your chosen LLM or subscription must align with the model family and version currently supported by the UiPath product.
- Setup: Make sure you properly configure and maintain all required LLMs in the custom setup. If any component is missing, outdated, or incorrectly configured, your custom setup may cease to function. In such cases, the system automatically reverts to a UiPath-managed LLM to ensure continuity of service, unless UiPath LLMs are turned off through an Automation Ops policy (see the sketch after this list).
- Cost-saving: If your custom LLM setup is complete, correct, and meets all necessary requirements, you may be eligible for a Reduced Consumption Rate.
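To illustrate the fallback rule in the Setup point above, the routing decision can be pictured as follows. This is a sketch with hypothetical names, not UiPath's implementation:

```python
# Illustrative sketch of the documented fallback behavior.
# All class and field names here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CustomSetup:
    model: str
    configured_correctly: bool  # complete, current, and correctly configured

@dataclass
class OpsPolicy:
    uipath_llms_enabled: bool
    default_uipath_model: str = "uipath-managed-model"

def resolve_model(custom: Optional[CustomSetup], policy: OpsPolicy) -> str:
    """Return the model that actually serves a request."""
    if custom is not None and custom.configured_correctly:
        return custom.model                 # custom setup handles the call
    if policy.uipath_llms_enabled:
        return policy.default_uipath_model  # automatic fallback for continuity
    raise RuntimeError(
        "Custom setup is invalid and UiPath-managed LLMs are "
        "disabled by Automation Ops policy."
    )

# A broken custom setup falls back to the UiPath-managed model:
print(resolve_model(CustomSetup("my-own-gpt-4o", False), OpsPolicy(True)))
```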
LLM connections rely on Integration Service to establish the connection to your own models. You can create connections to the following providers:
- Azure OpenAI
- OpenAI
- Amazon Bedrock
- Google Vertex
- OpenAI V1 Compliant LLM – Use this option to connect to any LLM provider whose API follows the OpenAI V1 standard. For details, refer to the OpenAI V1 Compliant LLM connector documentation.
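To make "OpenAI V1 compliant" concrete, the provider must accept chat completion requests shaped like the sketch below at a `/v1/chat/completions` route. The URL, key, and model name are placeholders:

```python
# Hedged sketch of the OpenAI V1 chat completions wire format that a
# compliant provider must accept. URL, key, and model are placeholders.
import requests

response = requests.post(
    "https://your-llm-provider.example.com/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "your-model-name",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```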
To set up a new connection, follow the connection creation steps in Integration Service, selecting your LLM provider as the connector.
You can perform the following actions on your existing connections:
- Check status – Verify the status of your Integration Service connection. This action ensures that the connection is active and functioning correctly.
- Edit – Modify any parameters of your existing connection.
- Disable – Temporarily suspend the connection. When disabled, the connection remains visible in your list but doesn't route any calls. You can re-enable the connection when needed.
- Delete – Permanently remove the connection from your system. This action disables the connection and removes it from your list.
Each product supports specific large language models (LLMs) and versions. Use the table below to identify the supported models and versions for your product.
You can connect your own LLM using one of the following providers: Amazon Web Services, Google Vertex, Microsoft Azure OpenAI, or OpenAI V1 Compliant. Follow the steps outlined in the previous section to create a connection.
| Product | Feature | LLM | Version |
|---|---|---|---|
| Agents¹ | Design, Evaluate & Deploy | Anthropic | anthropic.claude-3-5-sonnet-20240620-v1:0, anthropic.claude-3-5-sonnet-20241022-v2:0, anthropic.claude-3-7-sonnet-20250219-v1:0, anthropic.claude-3-haiku-20240307-v1:0 |
| | | OpenAI | gpt-4o-2024-05-13, gpt-4o-2024-08-06, gpt-4o-2024-11-20, gpt-4o-mini-2025-04-14, gpt-4o-mini-2024-07-18 |
| Autopilot for everyone | Chat | Anthropic | anthropic.claude-3-5-sonnet-20240620-v1:0, anthropic.claude-3-7-sonnet-20250219-v1:0 |
| | | OpenAI | gpt-4o-mini-2024-07-18 |
| GenAI Activities | Build, Test & Deploy | Anthropic | anthropic.claude-3-5-sonnet-20240620-v1:0, anthropic.claude-3-5-sonnet-20241022-v2:0, anthropic.claude-3-7-sonnet-20250219-v1:0, anthropic.claude-sonnet-4-20250514-v1:0, anthropic.claude-sonnet-4-5-20250929-v1:0, anthropic.claude-haiku-4-5-20251001-v1:0 |
| | | Gemini | gemini-2.0-flash-001, gemini-2.5-pro, gemini-2.5-flash |
| | | OpenAI | gpt-5-2025-08-07, gpt-5-mini-2025-08-07, gpt-5-nano-2025-08-07, gpt-5.1-2025-11-13, gpt-4o-2024-11-20, gpt-4o-mini-2024-07-18 |
| Healing Agent | Workflow Recovery | Gemini | gemini-2.5-flash |
| | | OpenAI | gpt-4o-2024-08-06 |
| UI Automation | ScreenPlay | Anthropic | anthropic.claude-sonnet-4-5-20250929-v1:0 |
| | | Gemini | gemini-2.5-flash |
| | | OpenAI | gpt-4.1-mini-2025-04-14, gpt-4.1-2025-04-14, gpt-5-2025-08-07, gpt-5-mini-2025-08-07, computer-use-preview-2025-03-11 |
| Test Manager | Autopilot (generate and manage tests) | Anthropic | anthropic.claude-3-7-sonnet-20250219-v1:0 (to be replaced with anthropic.claude-4.5-sonnet in March 2026) |
| | | Gemini | gemini-2.5-pro, gemini-2.5-flash |
| | | OpenAI | gpt-4o-2024-11-20 |
¹ For Agents, your custom model must also meet the following requirements:
- Tool (function) calling – Your model must be able to call tools or functions during execution.
- Disabling parallel tool calls – If supported by your LLM provider, the model should offer the option to disable parallel tool calls.
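For example, with an OpenAI-style API both requirements map onto the `tools` and `parallel_tool_calls` parameters of a chat completion request. The tool definition below is hypothetical:

```python
# Hedged sketch: how the two requirements above surface in an
# OpenAI-style chat completion call. The tool itself is hypothetical.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical tool
        "description": "Look up the status of an order by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-2024-11-20",
    messages=[{"role": "user", "content": "Where is order 42?"}],
    tools=tools,                # requirement 1: the model can call tools
    parallel_tool_calls=False,  # requirement 2: parallel calls disabled
)
print(response.choices[0].message.tool_calls)
```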
Note: When using custom models, the system cannot determine the model’s true token capacity. Agents default to a 4096 token limit, even if the underlying model supports a higher value. This behavior is intentional, as UiPath cannot infer token limits for customer-defined deployments.
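In practice, this means output-length expectations for custom models should be planned around the 4096-token assumption regardless of the model's real context window. A minimal illustration (not UiPath code):

```python
# Illustrative only: a caller-side view of the documented default.
# The platform assumes 4096 output tokens for custom models, so larger
# requests are effectively clamped.
DEFAULT_CUSTOM_MODEL_TOKEN_LIMIT = 4096

def effective_max_tokens(requested: int) -> int:
    """Clamp a requested output-token budget to the platform default."""
    return min(requested, DEFAULT_CUSTOM_MODEL_TOKEN_LIMIT)

print(effective_max_tokens(16000))  # -> 4096 for a custom model
```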