
Automation Cloud admin guide

Last updated Mar 12, 2026

Configuring LLMs

Note: LLM configurations are available on the following licensing plans:
  • Unified Pricing: Enterprise Platform, Standard Platform, Basic Platform, App Test Platform Enterprise, App Test Platform Standard.
  • Flex: Advanced Platform, Flex Standard Platform.

The LLM configurations tab allows you to integrate your existing AI subscriptions while maintaining the governance framework provided by UiPath. You can:

  • Replace UiPath LLM subscription: Replace UiPath-managed subscriptions with your own, provided they match the same model family and version already supported by the UiPath product. This allows for seamless swapping of UiPath-managed models with your subscribed models.
  • Add your own LLM: Use any LLM that meets the product's compatibility criteria. To ensure smooth integration, your chosen LLM must pass a series of tests initiated through a probe call before it can be used within the UiPath ecosystem.

Configuring LLMs preserves most of the governance benefits of the AI Trust Layer, including policy enforcement via Automation Ops and detailed audit logs. However, model governance policies are specifically designed for UiPath-managed LLMs. This means that if you disable a particular model through an AI Trust Layer policy, the restriction only applies to the UiPath-managed version of that model. Your own configured models of the same type remain unaffected.

When leveraging the option to use your own LLM or subscription, keep the following points in mind:

  • Compatibility requirements: Your chosen LLM or subscription must align with the model family and version currently supported by the UiPath product.
  • Setup: Make sure you properly configure and maintain all required LLMs in the custom setup. If any component is missing, outdated, or incorrectly configured, your custom setup may cease to function. In such cases, the system will automatically revert to a UiPath-managed LLM to ensure continuity of service, unless UiPath LLMs are turned off through an Automation Ops policy.

  • Cost-saving: If your custom LLM setup is complete, correct, and meets all necessary requirements, you may be eligible for a Reduced Consumption Rate.

Setting up an LLM configuration

LLM configurations rely on Integration Service to establish the connection to your own models. You can create connections to the following providers:
  • Azure OpenAI
  • OpenAI
  • Amazon Bedrock
  • Google Vertex
  • OpenAI V1 Compliant LLM – Use this option to connect to any LLM provider whose API follows the OpenAI V1 standard. For details, refer to the OpenAI V1 Compliant LLM connector documentation.
Note: To use Integration Service connections, you must add the Integration Service outbound IP ranges to your allowlist.
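In practice, an OpenAI V1 compliant provider is one that accepts the standard chat-completions request shape. The sketch below illustrates that shape only; the host, path, and model name are placeholders, not UiPath values, and no request is actually sent:

```python
import json

# Hypothetical endpoint: any provider implementing the OpenAI V1 API
# exposes a path of this shape. Substitute your own host.
BASE_URL = "https://llm.example.com/v1"
ENDPOINT = f"{BASE_URL}/chat/completions"

# Minimal OpenAI V1 chat-completions request body. The "model" value
# must match the LLM identifier you enter in the configuration.
payload = {
    "model": "my-deployed-model",  # placeholder identifier
    "messages": [
        {"role": "user", "content": "ping"}
    ],
    "max_tokens": 16,
}

# A V1-compliant provider accepts this body as a POST to ENDPOINT with
# an "Authorization: Bearer <key>" header and returns a "choices" array.
print(json.dumps(payload, indent=2))
```

If your provider rejects a request of this shape, the probe call made during validation will fail.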

To set up a new connection, follow these steps:

1. Create the Integration Service connection.
  1. In Integration Service, create a connection to your LLM provider.
  2. Choose the folder where the connection will be stored.
  3. Complete the authentication as required by the selected connector.
Note: The folder you choose controls both security and visibility. To prevent unauthorized access, create the Integration Service connection in a private, non-shared folder. However, note that model visibility is determined by access to this folder. If an admin does not have access to it, the associated model configuration will not appear in the list.
2. Add a new LLM configuration.
  1. Navigate to Admin > AI Trust Layer > LLM configurations.
  2. Select the Tenant.
  3. Select Add configuration.
  4. Choose the Product (for example, Agents) and Feature (for example, Design, Evaluate & Deploy).
  5. Select the Connections Folder.

3. Configure the model.

In the Model Configuration section, fill in the following fields:

  • LLM Name – This field supports two configuration options, depending on your use case:
    • Select a model from the list – Choosing a model from the predefined list replaces the UiPath-managed LLM subscription with your own subscription for that same model. This scenario is referred to as Bring Your Own Subscription (BYOS).
    • Add custom alias – Entering a custom name allows you to configure a model that is not included in the predefined list of recommended models for that product. This scenario is referred to as Bring Your Own Model (BYOM).
      Note: The Add custom alias option is available only for products that support custom models. For example, it does not appear for Autopilot for Everyone, which supports only a limited set of predefined models.
  • API Type – The API endpoint supported by the LLM (for example, OpenAI Chat Completions). This must match the endpoint exposed by your provider.
  • Connector – The Integration Service connector (for example, Microsoft Azure OpenAI, Amazon Bedrock).
  • Connection – The specific Integration Service connection created earlier. If no connection is available, create one in Integration Service.
  • LLM identifier – The model identifier exactly as it appears in your LLM subscription.
    • For Azure-hosted models: enter the model deployment name/identifier.
    • For AWS Bedrock cross-region inference: enter the inference profile ID.
    • For other providers: use the model name as defined in your subscription.
Note: When configuring your own LLM, you can optionally restrict which large language models are available for use in your organization. If you want to ensure that only your custom models are used, you can disable UiPath-managed third-party models by applying an AI Trust Layer policy. Check the Models section in the AI Trust Layer policies documentation.
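The identifier you enter depends on the provider, as listed above. A small sketch makes the mapping explicit; the provider keys and field names here are illustrative assumptions, not UiPath configuration fields:

```python
# Illustrative mapping from provider type to the value that belongs in
# the "LLM identifier" field. Keys and example values are assumptions
# for illustration only.
def llm_identifier(provider: str, subscription: dict) -> str:
    """Return the value to enter in the LLM identifier field."""
    if provider == "azure-openai":
        # Azure addresses models by deployment name, not model name.
        return subscription["deployment_name"]
    if provider == "aws-bedrock-cross-region":
        # Cross-region inference routes through an inference profile.
        return subscription["inference_profile_id"]
    # Other providers: use the model name from your subscription.
    return subscription["model_name"]

# A hypothetical Azure deployment named "gpt-4o-prod":
print(llm_identifier("azure-openai", {"deployment_name": "gpt-4o-prod"}))
```

Entering the model's public name instead of the Azure deployment name (or the Bedrock inference profile ID) is a common cause of failed probe calls.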

4. Validate and save.

  1. Select Test configuration to verify that the endpoint is reachable.
    • The platform validates the connectivity.
    • Ensuring the correct model is configured remains your responsibility.
  2. If the validation is successful, select Save to activate the configuration.
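The split of responsibilities during validation can be sketched as follows; the field names and the ready_to_test helper are illustrative, not the actual UiPath schema or API:

```python
# Hypothetical pre-flight check mirroring what "Test configuration"
# needs before it can probe the endpoint. Field names are illustrative.
REQUIRED_FIELDS = {"llm_name", "api_type", "connector", "connection",
                   "llm_identifier"}

def ready_to_test(config: dict) -> bool:
    """True when every field the probe call needs is filled in."""
    return all(config.get(field) for field in REQUIRED_FIELDS)

config = {
    "llm_name": "gpt-4o-2024-11-20",
    "api_type": "OpenAI Chat Completions",
    "connector": "Microsoft Azure OpenAI",
    "connection": "my-azure-connection",  # created in Integration Service
    "llm_identifier": "gpt-4o-prod",      # Azure deployment name
}

# The probe only verifies connectivity. Whether "gpt-4o-prod" actually
# points at the model you intended remains your responsibility.
print(ready_to_test(config))
```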


Managing existing LLM connections

You can perform the following actions on your existing connections:

  • Check status – Verify the status of your Integration Service connection. This action ensures that the connection is active and functioning correctly.
  • Edit – Modify any parameters of your existing connection.
  • Disable – Temporarily suspend the connection. When disabled, the connection remains visible in your list but doesn't route any calls. You can re-enable the connection when needed.
  • Delete – Permanently remove the connection from your system. This action disables the connection and removes it from your list.

Configuring LLMs for your product

Each product supports specific large language models (LLMs) and versions. Use the table below to identify the supported models and versions for your product.

You can connect your own LLM using one of the following providers: Amazon Web Services, Google Vertex, Microsoft Azure OpenAI, or OpenAI V1 Compliant. Follow the steps outlined in the previous section to create a connection.

Agents¹ — Design, Evaluate & Deploy
  • Anthropic: anthropic.claude-3.5-sonnet-20240620-v1:0, anthropic.claude-3.5-sonnet-20241022-v2:0, anthropic.claude-3.7-sonnet-20250219-v1:0, anthropic.claude-3-haiku-20240307-v1:0
  • OpenAI: gpt-4o-2024-05-13, gpt-4o-2024-08-06, gpt-4o-2024-11-20, gpt-4o-mini-2025-04-14, gpt-4o-mini-2024-07-18

Autopilot for Everyone — Chat
  • Anthropic: anthropic.claude-3.5-sonnet-20240620-v1:0, anthropic.claude-3.7-sonnet-20250219-v1:0
  • OpenAI: gpt-4o-mini-2024-07-18

Coded agents — Call LLM
  • Anthropic: anthropic.claude-sonnet-4-20250514-v1:0, anthropic.claude-sonnet-4-5-20250929-v1:0, anthropic.claude-haiku-4-5-20251001-v1:0
  • Google: gemini-2.5-flash, gemini-2.5-pro
  • OpenAI: gpt-4.1-2025-04-14, gpt-4.1-mini-2025-04-14, gpt-5-2025-08-07, gpt-5-mini-2025-08-07, gpt-5.1-2025-11-13

GenAI Activities — Build, Test & Deploy
  • Anthropic: anthropic.claude-3-5-sonnet-20240620-v1:0, anthropic.claude-3-5-sonnet-20241022-v2:0, anthropic.claude-3-7-sonnet-20250219-v1:0, anthropic.claude-sonnet-4-20250514-v1:0, anthropic.claude-sonnet-4-5-20250929-v1:0, anthropic.claude-haiku-4-5-20251001-v1:0
  • Google: gemini-2.0-flash-001, gemini-2.5-pro, gemini-2.5-flash
  • OpenAI: gpt-5-2025-08-07, gpt-5-mini-2025-08-07, gpt-5-nano-2025-08-07, gpt-5.1-2025-11-13, gpt-4o-2024-11-20, gpt-4o-mini-2024-07-18

Healing Agent — Workflow Recovery
  • Google: gemini-2.5-flash
  • OpenAI: gpt-4o-2024-08-06

UI Automation — ScreenPlay
  • Anthropic: anthropic.claude-sonnet-4-5-20250929-v1:0
  • Google: gemini-2.5-flash
  • OpenAI: gpt-4.1-mini-2025-04-14, gpt-4.1-2025-04-14, gpt-5-2025-08-07, gpt-5-mini-2025-08-07, computer-use-preview-2025-03-11

UI Automation — Semantic selectors
  • Google: gemini-2.5-flash

Test Manager — Autopilot (Autopilot Search, Find Obsolete Tests, Generate Test Cases, Import Test Cases, Generate Reports, Requirement Evaluation)
  • Anthropic: anthropic.claude-3.7-sonnet-20250219-v1:0 (to be replaced with anthropic.claude-4.5-sonnet in March 2026)
  • Google: gemini-2.5-pro, gemini-2.5-flash
  • OpenAI: gpt-4o-2024-11-20
¹ When configuring your model deployment for agents, ensure that your LLM supports the following capabilities:
  • Tool (function) calling – Your model must be able to call tools or functions during execution.
  • Disabling parallel tool calls – If supported by your LLM provider, the model should offer the option to disable parallel tool calls.
    Note: When using custom models, the system cannot determine the model’s true token capacity. Agents default to a 4096 token limit, even if the underlying model supports a higher value. This behavior is intentional, as UiPath cannot infer token limits for customer-defined deployments.
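Both requirements above are visible in an OpenAI-style chat request. In the sketch below, the get_order_status tool is a toy example and the model name is a placeholder; the exact name of the parallel-calls switch may differ by provider:

```python
import json

# OpenAI-style request exercising the two capabilities an agent
# deployment needs: tool (function) calling, and the ability to turn
# parallel tool calls off.
request = {
    "model": "my-deployed-model",  # placeholder identifier
    "messages": [
        {"role": "user", "content": "What is the status of order 42?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_order_status",  # hypothetical tool
            "description": "Look up the status of an order.",
            "parameters": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            },
        },
    }],
    # Agents need tools invoked one at a time when the provider
    # supports this switch.
    "parallel_tool_calls": False,
    # Custom models default to this cap regardless of the model's
    # true capacity.
    "max_tokens": 4096,
}

print(json.dumps(request)[:40])
```

A model that cannot produce tool calls, or that always fans out parallel calls, will not behave correctly as an agent backend even if the probe call succeeds.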
