The LLM configurations feature is available with the following licensing plans:
- Unified Pricing: Enterprise Platform, Standard Platform, Basic Platform, App Test Platform Enterprise, App Test Platform Standard.
- Flex: Advanced Platform, Flex Standard Platform.
The LLM configurations tab allows you to integrate your existing AI subscriptions while maintaining the governance framework provided by UiPath. You can:
- Replace UiPath LLM subscription: Replace UiPath-managed subscriptions with your own, provided they match the same model family and version already supported by the UiPath product. This allows for seamless swapping of UiPath-managed models with your subscribed models.
- Add your own LLM: Use any LLM that meets the product's compatibility criteria. To ensure smooth integration, your chosen LLM must pass a series of tests initiated through a probe call before it can be used within the UiPath ecosystem.
Configuring LLMs preserves most of the governance benefits of the AI Trust Layer, including policy enforcement via Automation Ops and detailed audit logs. However, model governance policies are specifically designed for UiPath-managed LLMs. This means that if you disable a particular model through an AI Trust Layer policy, the restriction only applies to the UiPath-managed version of that model. Your own configured models of the same type remain unaffected.
When leveraging the option to use your own LLM or subscription, keep the following points in mind:
- Compatibility requirements: Your chosen LLM or subscription must align with the model family and version currently supported by the UiPath product.
- Setup: Make sure you properly configure and maintain all required LLMs in the custom setup. If any component is missing, outdated, or incorrectly configured, your custom setup may cease to function. In such cases, the system will automatically revert to a UiPath-managed LLM to ensure continuity of service, unless UiPath LLMs are turned off through an Automation Ops policy.
- Cost-saving: If your custom LLM setup is complete, correct, and meets all necessary requirements, you may be eligible for a Reduced Consumption Rate.
You can connect your own LLM through one of the following providers:
- Azure OpenAI
- OpenAI
- Amazon Bedrock
- Google Vertex
- OpenAI V1 Compliant LLM – Use this option to connect to any LLM provider whose API follows the OpenAI V1 standard. For details, refer to the OpenAI V1 Compliant LLM connector documentation.
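As a concrete illustration of the OpenAI V1 standard, the sketch below sends a chat-completions request of the shape such a provider must accept. The base URL, API key, and model name are hypothetical placeholders; only the /v1/chat/completions path and payload structure come from the standard:

```python
import requests

# Hypothetical endpoint and key: substitute your provider's values.
BASE_URL = "https://llm.example.com/v1"  # any OpenAI V1-compliant gateway
API_KEY = "YOUR_API_KEY"

# The OpenAI V1 chat-completions contract: POST /v1/chat/completions
# with a JSON body containing "model" and a "messages" array.
response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "my-model",  # the LLM identifier you configure
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```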
To set up a new connection, follow these steps:
1. In Integration Service, create a connection to your LLM provider.
   - Choose the folder where the connection will be stored.
   - Complete the authentication as required by the selected connector.
2. Navigate to Admin > AI Trust Layer > LLM configurations.
   - Select the Tenant.
   - Select Add configuration.
   - Choose the Product (for example, Agents) and Feature (for example, Design, Evaluate & Deploy).
   - Select the Connections Folder.
3. Configure the model.
In the Model Configuration section, fill in the following fields:
- LLM Name – This field supports two configuration options, depending on your use case:
  - Select a model from the list – Choosing a model from the predefined list replaces the UiPath-managed LLM subscription with your own subscription for that same model. This scenario is referred to as Bring Your Own Subscription (BYOS).
  - Add custom alias – Entering a custom name allows you to configure a model that is not included in the predefined list of recommended models for that product. This scenario is referred to as Bring Your Own Model (BYOM).
  Note: The Add a custom alias option is available only for products that support custom models. For example, it does not appear for Autopilot for Everyone, which supports only a limited set of predefined models.
- API Type – The API endpoint supported by the LLM (for example, OpenAI Chat Completions). This must match the endpoint exposed by your provider.
- Connector – The Integration Service connector (for example, Microsoft Azure OpenAI, Amazon Bedrock).
- Connection – The specific Integration Service connection created earlier. If no connection is available, create one in Integration Service.
- LLM identifier – The model identifier exactly as it appears in your LLM subscription (see the sketch after this list):
- For Azure-hosted models: enter the model deployment name/identifier.
- For AWS Bedrock cross-region inference: enter the inference profile ID.
- For other providers: use the model name as defined in your subscription.
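The Azure case trips people up most often, because the deployment name can differ from the underlying model name. Here is a minimal sketch of the distinction using the openai Python SDK; the endpoint, key, and deployment name are hypothetical:

```python
from openai import AzureOpenAI  # pip install openai

# Hypothetical values: replace with your Azure OpenAI resource details.
client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",
    api_key="YOUR_AZURE_API_KEY",
    api_version="2024-06-01",
)

# On Azure, "model" takes your *deployment name* -- the value you would
# enter as the LLM identifier -- not the model's public name.
response = client.chat.completions.create(
    model="my-gpt4o-deployment",  # hypothetical deployment name
    messages=[{"role": "user", "content": "ping"}],
)

# The response reports which underlying model the deployment serves.
print(response.model)  # for example, "gpt-4o-2024-08-06"
```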
4. Validate and save.
- Select Test configuration to verify that the endpoint is reachable.
- The platform validates connectivity only; ensuring the correct model is configured remains your responsibility (see the sketch after these steps).
- If the validation is successful, select Save to activate the configuration.
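Because Test configuration verifies reachability rather than model identity, a quick manual probe can confirm the right model is being served. A minimal sketch, assuming an OpenAI V1-compliant endpoint with a hypothetical base URL and key:

```python
from openai import OpenAI  # pip install openai

# Hypothetical base URL and key: any OpenAI V1-compliant provider works here.
client = OpenAI(base_url="https://llm.example.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="my-model",  # the LLM identifier from your configuration
    messages=[{"role": "user", "content": "Reply with the word OK."}],
    max_tokens=10,
)

# Confirm the provider actually served the model you intended to configure.
print("Served by:", response.model)
print("Reply:", response.choices[0].message.content)
```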
You can perform the following actions on your existing connections:
- Check status – Verify the status of your Integration Service connection. This action ensures that the connection is active and functioning correctly.
- Edit – Modify any parameters of your existing connection.
- Disable – Temporarily suspend the connection. When disabled, the connection remains visible in your list but doesn't route any calls. You can re-enable the connection when needed.
- Delete – Permanently remove the connection from your system. This action disables the connection and removes it from your list.
Each product supports specific large language models (LLMs) and versions. Use the table below to identify the supported models and versions for your product.
You can connect your own LLM using one of the following providers: Amazon Web Services, Google Vertex, Microsoft Azure OpenAI, or OpenAI V1 Compliant. Follow the steps outlined in the previous section to create a connection.
| Product | Feature | LLM | Version |
|---|---|---|---|
| Agents¹ | Design, Evaluate & Deploy | Anthropic | anthropic.claude-3.5-sonnet-20240620-v1:0<br>anthropic.claude-3.5-sonnet-20241022-v2:0<br>anthropic.claude-3.7-sonnet-20250219-v1:0<br>anthropic.claude-3-haiku-20240307-v1:0 |
| | | OpenAI | gpt-4o-2024-05-13<br>gpt-4o-2024-08-06<br>gpt-4o-2024-11-20<br>gpt-4o-mini-2025-04-14<br>gpt-4o-mini-2024-07-18 |
| Autopilot for everyone | Chat | Anthropic | anthropic.claude-3.5-sonnet-20240620-v1:0<br>anthropic.claude-3.7-sonnet-20250219-v1:0 |
| | | OpenAI | gpt-4o-mini-2024-07-18 |
| Coded agents | Call LLM | Anthropic | anthropic.claude-sonnet-4-20250514-v1:0<br>anthropic.claude-sonnet-4-5-20250929-v1:0<br>anthropic.claude-haiku-4-5-20251001-v1:0 |
| | | Gemini | gemini-2.5-flash<br>gemini-2.5-pro |
| | | OpenAI | gpt-4.1-2025-04-14<br>gpt-4.1-mini-2025-04-14<br>gpt-5-2025-08-07<br>gpt-5-mini-2025-08-07<br>gpt-5.1-2025-11-13 |
| GenAI Activities | Build, Test & Deploy | Anthropic | anthropic.claude-3-5-sonnet-20240620-v1:0<br>anthropic.claude-3-5-sonnet-20241022-v2:0<br>anthropic.claude-3-7-sonnet-20250219-v1:0<br>anthropic.claude-sonnet-4-20250514-v1:0<br>anthropic.claude-sonnet-4-5-20250929-v1:0<br>anthropic.claude-haiku-4-5-20251001-v1:0 |
| | | Gemini | gemini-2.0-flash-001<br>gemini-2.5-pro<br>gemini-2.5-flash |
| | | OpenAI | gpt-5-2025-08-07<br>gpt-5-mini-2025-08-07<br>gpt-5-nano-2025-08-07<br>gpt-5.1-2025-11-13<br>gpt-4o-2024-11-20<br>gpt-4o-mini-2024-07-18 |
| Healing Agent | Workflow Recovery | Gemini | gemini-2.5-flash |
| | | OpenAI | gpt-4o-2024-08-06 |
| UI Automation | ScreenPlay | Anthropic | anthropic.claude-sonnet-4-5-20250929-v1:0 |
| | | Gemini | gemini-2.5-flash |
| | | OpenAI | gpt-4.1-mini-2025-04-14<br>gpt-4.1-2025-04-14<br>gpt-5-2025-08-07<br>gpt-5-mini-2025-08-07<br>computer-use-preview-2025-03-11 |
| | Semantic selectors | Gemini | gemini-2.5-flash |
| Test Manager | Autopilot | Anthropic | anthropic.claude-3.7-sonnet-20250219-v1:0 (to be replaced with anthropic.claude-4.5-sonnet in March 2026) |
| | | Gemini | gemini-2.5-pro<br>gemini-2.5-flash |
| | | OpenAI | gpt-4o-2024-11-20 |
¹ For Agents, your custom model must meet the following requirements (see the sketch after the note below):
- Tool (function) calling – Your model must be able to call tools or functions during execution.
- Disabling parallel tool calls – If supported by your LLM provider, the model should offer the option to disable parallel tool calls.
Note: When using custom models, the system cannot determine the model’s true token capacity. Agents default to a 4096 token limit, even if the underlying model supports a higher value. This behavior is intentional, as UiPath cannot infer token limits for customer-defined deployments.
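To make the Agents requirements from footnote ¹ concrete, the following sketch sends a request that declares one tool, disables parallel tool calls, and stays within the 4096-token default. The endpoint, credentials, and tool definition are hypothetical, and parallel_tool_calls follows the OpenAI V1 convention; other providers may expose the equivalent setting under a different name:

```python
from openai import OpenAI  # pip install openai

# Hypothetical OpenAI V1-compliant endpoint and credentials.
client = OpenAI(base_url="https://llm.example.com/v1", api_key="YOUR_API_KEY")

# A hypothetical tool definition, used only to verify tool-calling support.
tools = [{
    "type": "function",
    "function": {
        "name": "get_invoice_status",
        "description": "Look up the status of an invoice by its number.",
        "parameters": {
            "type": "object",
            "properties": {"invoice_number": {"type": "string"}},
            "required": ["invoice_number"],
        },
    },
}]

response = client.chat.completions.create(
    model="my-model",           # your configured LLM identifier
    messages=[{"role": "user", "content": "What is the status of invoice 42?"}],
    tools=tools,
    parallel_tool_calls=False,  # the "disable parallel tool calls" requirement
    max_tokens=4096,            # matches the default cap applied to custom models
)

# A compliant model responds with a tool call rather than plain text.
print(response.choices[0].message.tool_calls)
```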