
Automation Cloud Dedicated admin guide

Last updated Feb 12, 2026

Managing AI Trust Layer

Checking usage summary

The Usage Summary tab on the AI Trust Layer page provides an overview of the model usage and restrictions across different regions. It represents the historical data from your audit log and reflects the settings of your governance policies.

You can view data displayed on the following criteria:

  • Total LLM Actions per Status: Enables you to monitor the status of different models across regions. To customize the data visualization, you can filter by region, model, status, and source.
  • Total LLM Actions per Product: Allows you to monitor the AI feature adoption within your organization. To customize the data visualization, you can filter by tenant and product.

Viewing audit logs

The Audit tab on the AI Trust Layer page offers a comprehensive view of AI-related operations, with details about requests and actions, the products and features initiating requests, as well as the used models and their location. You can monitor all AI-related operations and ensure their compliance with your established guidelines and policies. Audit logs also provide visibility into the inputs and outputs for Gen AI Activities, Agents, Autopilot for Everyone, and Document Understanding generative features. Note that you can view log entries created in the last 60 days.

The audit data is displayed as a table, with each column providing specific information about the AI-related operations:

  • Date (UTC): The exact date and time when each operation was requested. This allows accurate chronological tracking of requests, facilitating timely audits.
  • Product: The specific product that initiated each operation. This visibility allows tracing any operation back to its originating product for enhanced understanding and accountability.
  • Feature: The specific product feature that initiated the operation, facilitating issue traceability to particular features, if any occurred.
  • Tenant: The specific tenant within your organization that initiated the operation. This insight enables a more detailed overview and helps recognize patterns or issues at the tenant level.
  • User: The individual user within the tenant who initiated the operation. It allows for tracing activities at a granular user level, enhancing the oversight capabilities. For GenAI Activities, the user is represented by the person who created the connection. An N/A value indicates a service-to-service communication where a user is not available.
  • Model Used: The specific AI model employed to process each operation. This insight provides a better understanding of which models are handling which types of requests.
  • Model Location: The location where the used model is hosted. This information can assist potential troubleshooting or audit requirements that could arise from model performance in specific locations.
  • Status: The status of each operation—showing if it was successful, failed, or blocked. This quick way of identifying operational issues is crucial for maintaining a smooth, efficient environment.

Additionally, the filtering capability allows you to narrow down your audit based on criteria such as the date, product, used model, status, or source. The Source filter allows you to choose between viewing all calls, only UiPath-managed calls, or exclusively custom connection calls (using customer-managed subscriptions, as defined in Configuring LLMs).

Furthermore, when you select an entry from the Audit table, you can access a Details section for a more in-depth review, which includes all data available in the Audit table, as well as the LLM call source and the exact deployment associated with the call.

Warning: Audit log input and output fields may appear temporarily empty when viewing recent entries. This is a known latency issue; the data typically becomes available shortly afterward.

Handling PII and PHI data in audit logs

When using Gen AI features in UiPath, it's important to understand that audit logs may include Personally Identifiable Information (PII) or Protected Health Information (PHI). These details can appear in both the input prompts sent to Large Language Models (LLMs) and in the responses they generate. This applies to interactions executed through both attended and unattended automations.

The Details section of each audit log entry displays the full input and output content when prompts saving is enabled. This includes metadata such as:

  • Action ID
  • Tenant
  • Product
  • Feature
  • User
  • Timestamp

Hiding sensitive data for compliance

If your compliance rules require hiding PII and PHI data in audit logs, you can configure the AI Trust Layer policy to control visibility:

  1. Go to Automation Ops™ > Governance and select or create an AI Trust Layer policy.
  2. In the Feature Toggles tab, make sure to set the Enable prompts saving for Audit? option to No.

With this setting, neither prompts nor LLM outputs are retained in the audit logs. The log will display the entry metadata, but the input/output content will appear as “Blocked by policy.”

Note: This configuration hides sensitive content from log entries and helps you meet compliance requirements while preserving audit capabilities. Once hidden, you cannot recover the prompts for further use.

Managing AI Trust Layer policies

The AI governance tab on the AI Trust Layer page allows you to manage third-party AI model usage for your organization, tenants, or user groups through AI Trust Layer policies. This helps you control user access to Generative AI features and ensures appropriate governance across your organization.

You get an overview of all active policies and their current statuses. In addition to this, you can view and manage policies and their deployments, as follows:

  • When you select the policy name, you are redirected to the respective AI Trust Layer policy in Automation Ops™ > Governance. You can now view the policy details and, if necessary, make changes. For details, refer to Settings for AI Trust Layer Policies. You can also directly create an AI Trust Layer policy by selecting Add policy.
  • When you select Manage deployments, you are redirected to Automation Ops™ > Governance > Deployment, where you can review all your policy deployments. For details, refer to Deploy Policies at Tenant Level.

Managing Autopilot for Everyone

The Autopilot for Everyone tab on the AI Trust Layer page allows you to manage Autopilot for Everyone usage across your organization.

You can perform the following actions:

Configuring LLMs

Note: LLM configurations are available on the following licensing plans:
  • Flex: Advanced Platform, Flex Standard Platform.

The LLM configurations tab allows you to integrate your existing AI subscriptions while maintaining the governance framework provided by UiPath. You can:

  • Replace UiPath LLM subscription: Replace UiPath-managed subscriptions with your own, provided they match the same model family and version already supported by the UiPath product. This allows for seamless swapping of UiPath-managed models with your subscribed models.
  • Add your own LLM: Use any LLM that meets the product's compatibility criteria. To ensure smooth integration, your chosen LLM must pass a series of tests initiated through a probe call before it can be used within the UiPath ecosystem.

Configuring LLMs preserves most of the governance benefits of the AI Trust Layer, including policy enforcement via Automation Ops and detailed audit logs. However, model governance policies are specifically designed for UiPath-managed LLMs. This means that if you disable a particular model through an AI Trust Layer policy, the restriction only applies to the UiPath-managed version of that model. Your own configured models of the same type remain unaffected.

When leveraging the option to use your own LLM or subscription, keep the following points in mind:

  • Compatibility requirements: Your chosen LLM or subscription must align with the model family and version currently supported by the UiPath product.
  • Setup: Make sure you properly configure and maintain all required LLMs in the custom setup. If any component is missing, outdated, or incorrectly configured, your custom setup may cease to function. In such cases, the system will automatically revert to a UiPath-managed LLM to ensure continuity of service, unless UiPath LLMs are turned off through an Automation Ops policy.

  • Cost-saving: If your custom LLM setup is complete, correct, and meets all necessary requirements, you may be eligible for a Reduced Consumption Rate.
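The fallback behavior described above can be sketched conceptually. This is an illustration of the documented routing logic, not UiPath code; the function name and return labels are invented for the sketch:

```python
def route_llm_call(custom_setup_ok: bool, uipath_llms_enabled: bool) -> str:
    """Conceptual sketch of the documented fallback behavior.

    A complete, correctly configured custom setup handles the call;
    otherwise the system reverts to a UiPath-managed LLM, unless
    UiPath LLMs are turned off through an Automation Ops policy.
    """
    if custom_setup_ok:
        return "custom LLM"
    if uipath_llms_enabled:
        return "UiPath-managed LLM (automatic fallback)"
    return "blocked"

# A broken custom setup silently falls back when UiPath LLMs are allowed:
print(route_llm_call(custom_setup_ok=False, uipath_llms_enabled=True))
```

The key takeaway is the last branch: if you disable UiPath-managed models by policy and your custom setup breaks, calls fail rather than fall back.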

Setting up an LLM connection

LLM connections rely on Integration Service to establish the connection to your own models. You can create connections to the following providers:

  • Azure OpenAI
  • OpenAI
  • Amazon Bedrock
  • Google Vertex
  • OpenAI V1 Compliant LLM – Use this option to connect to any LLM provider whose API follows the OpenAI V1 standard. For details, refer to the OpenAI V1 Compliant LLM connector documentation.
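For reference, an "OpenAI V1 compliant" provider is one that accepts requests shaped like the standard chat-completions call. The sketch below builds such a request; `BASE_URL` and `API_KEY` are placeholders for your own provider's values:

```python
import json

# Hypothetical values -- substitute your provider's endpoint and key.
BASE_URL = "https://llm.example.com/v1"  # must expose the OpenAI V1 routes
API_KEY = "YOUR_API_KEY"

def build_chat_request(model: str, prompt: str) -> tuple[str, dict, bytes]:
    """Build an OpenAI V1 chat-completions request (url, headers, body)."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # the LLM identifier you enter when configuring
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body

url, headers, body = build_chat_request("my-model", "ping")
```

If your provider's API cannot serve this request shape, it is not OpenAI V1 compliant and the connector will not work with it.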

To set up a new connection, follow these steps:

  1. Create a connection in Integration Service to your provider of choice. For connector-specific authentication details, see the Integration Service user guide.
    Note: To prevent unauthorized access, create the Integration Service connection in a private, non-shared folder.
  2. Navigate to Admin > AI Trust Layer > LLM Configurations.
  3. Select the tenant and folder where you want to configure the connection.
  4. Select Add configuration.
  5. Select the Product and Feature.
  6. Choose how you want to configure:
    • Replace UiPath LLM subscription – Use your own connection instead of a UiPath-managed one.

    • Add your own LLM – Add an additional LLM configuration managed entirely by you.
      Note: When configuring your own LLM, you can optionally restrict which large language models are available for use in your organization. If you want to ensure that only your custom models are used, you can disable UiPath-managed third-party models by applying an AI Trust Layer policy. Check the Models section in the AI Trust Layer policies documentation.

    Depending on the selected product, only one option may be available.

  7. Set up the connection for Replace UiPath LLM subscription:
    1. Folder – Select the folder where the configuration will be stored.
    2. Replaceable LLM – From the dropdown, select the UiPath LLM you want to replace.
    3. Connector – Select your connector (e.g., Microsoft Azure OpenAI).
    4. Connection – Choose your Integration Service connection. If none are available, select Add new connection to be redirected to Integration Service.
    5. LLM identifier – Enter the identifier for your model.
      • For Azure-hosted models, enter the model identifier.
      • For AWS Bedrock cross-region inference, enter the inference profile ID instead of the model ID.
  8. Set up the connection for Add your own LLM:
    1. Folder – Select the folder where the configuration will be stored.
    2. Displayed (LLM) name – Provide an alias for your LLM.
    3. Connector – Select your connector (e.g., Microsoft Azure OpenAI).
    4. Connection – Choose your Integration Service connection.
    5. LLM identifier – Enter the identifier for your model.
      • For Azure-hosted models, enter the model identifier.
      • For AWS Bedrock cross-region inference, enter the inference profile ID instead of the model ID.
  9. Select Test configuration to check that the model is reachable and meets the required criteria.
    UiPath can confirm reachability; verifying the exact model used is your responsibility.
  10. If the test is successful, select Save to activate the connection.
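The probe call behind Test configuration is internal to UiPath, but conceptually it checks that the endpoint answers with a well-formed completion. A minimal, illustrative sketch of that kind of shape check (the function and sample payloads are invented for the example):

```python
def looks_like_valid_llm_response(resp: dict) -> bool:
    """Check that a probe response has the shape an OpenAI-style client expects.

    This mirrors, conceptually, what a probe call verifies: the endpoint
    answered and returned at least one completion choice. Note that it
    cannot prove WHICH model served the call -- as the docs state, that
    verification remains your responsibility.
    """
    choices = resp.get("choices")
    return bool(choices) and "message" in choices[0]

# A well-formed reply passes; an error payload does not.
ok = looks_like_valid_llm_response(
    {"choices": [{"message": {"role": "assistant", "content": "pong"}}]}
)
bad = looks_like_valid_llm_response({"error": {"message": "model not found"}})
```

This is why a successful test confirms reachability and basic compatibility, but you should still audit the Model Used column to confirm the deployment serving your calls.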

Managing existing LLM connections

You can perform the following actions on your existing connections:

  • Check status – Verify the status of your Integration Service connection. This action ensures that the connection is active and functioning correctly.
  • Edit – Modify any parameters of your existing connection.
  • Disable – Temporarily suspend the connection. When disabled, the connection remains visible in your list but doesn't route any calls. You can re-enable the connection when needed.
  • Delete – Permanently remove the connection from your system. This action disables the connection and removes it from your list.

Configuring LLMs for your product

Each product supports specific large language models (LLMs) and versions. Use the table below to identify the supported models and versions for your product.

You can connect your own LLM using one of the following providers: Amazon Web Services, Google Vertex, Microsoft Azure OpenAI, or OpenAI V1 Compliant. Follow the steps outlined in the previous section to create a connection.

| Product | Feature | LLM provider | Version |
|---|---|---|---|
| Autopilot for Everyone | Chat | Anthropic | anthropic.claude-3.5-sonnet-20240620-v1:0<br>anthropic.claude-3.7-sonnet-20250219-v1:0 |
| | | OpenAI | gpt-4o-mini-2024-07-18 |
| Coded agents | Call LLM | Anthropic | anthropic.claude-3.5-sonnet-20240620-v1:0<br>anthropic.claude-3.5-sonnet-20241022-v2:0<br>anthropic.claude-3.7-sonnet-20250219-v1:0<br>anthropic.claude-3-haiku-20240307-v1:0 |
| | | Gemini | gemini-1.5-pro-001<br>gemini-2.0-flash-001 |
| | | OpenAI | gpt-4o-2024-05-13<br>gpt-4o-2024-08-06<br>gpt-4o-2024-11-20<br>gpt-4o-mini-2024-07-18<br>o3-mini-2025-01-31 |
| Context Grounding | Advanced Extractions | Gemini | gemini-2.5-flash |
| | Embeddings | OpenAI | text-embedding-3-large |
| | Embeddings | Gemini | gemini-embedding-001 |
| GenAI Activities | Build, Test & Deploy | Anthropic | anthropic.claude-3.5-sonnet-20241022-v2:0<br>anthropic.claude-3.7-sonnet-20250219-v1:0 |
| | | Gemini | gemini-2.5-pro<br>gemini-2.5-flash |
| | | OpenAI | gpt-5-2025-08-07<br>gpt-5-mini-2025-08-07<br>gpt-5-nano-2025-08-07 |
