Automation Cloud Dedicated admin guide
The Audit tab on the AI Trust Layer page offers a comprehensive view of AI-related operations, with details about requests and actions, the products and features initiating requests, and the models used and their locations. You can monitor all AI-related operations and verify their compliance with your established guidelines and policies. Audit logs also provide visibility into the inputs and outputs for GenAI Activities, Agents, Autopilot for Everyone, and Document Understanding generative features. Note that you can view log entries created in the last 60 days.
The audit data is displayed as a table, with each column providing specific information about the AI-related operations:
- Date (UTC): The exact date and time when each operation was requested. This allows for accurate chronological tracking of requests, facilitating timely audits.
- Product: The specific product that initiated each operation. This visibility allows tracing any operation back to its originating product for enhanced understanding and accountability.
- Feature: The specific product feature that initiated the operation, making it easier to trace any issues back to particular features.
- Tenant: The specific tenant within your organization that initiated the operation. This insight enables a more detailed overview and helps recognize patterns or issues at the tenant level.
- User: The individual user within the tenant who initiated the operation. It allows for tracing activities at a granular user level, enhancing the oversight capabilities. For GenAI Activities, the user is represented by the person who created the connection. An N/A value indicates a service-to-service communication where a user is not available.
- Model Used: The specific AI model employed to process each operation. This insight provides a better understanding of which models are handling which types of requests.
- Model Location: The location where the model is hosted. This information supports troubleshooting and audit requirements that may arise from model performance in specific locations.
- Status: Whether each operation was successful, failed, or blocked. This quick way of identifying operational issues is crucial for maintaining a smooth, efficient environment.
Additionally, the filtering capability allows you to narrow down your audit based on criteria such as the date, product, used model, status, or source. The Source filter allows you to choose between viewing all calls, only UiPath-managed calls, or exclusively custom connection calls (using customer-managed subscriptions, as defined in Configuring LLMs).
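As an illustration only, the same kind of narrowing can be applied to an exported copy of the audit data. UiPath does not document a specific export format in this section, so the sketch below assumes a CSV export whose column names mirror the table described above; the sample rows and the `filter_audit_rows` helper are hypothetical.

```python
import csv
import io

# Hypothetical CSV export of the Audit table; the column names mirror the
# columns described above, but the exact export format is an assumption.
EXPORT = """\
Date (UTC),Product,Feature,Tenant,User,Model Used,Model Location,Status
2024-05-01T10:15:00Z,Studio,GenAI Activities,Finance,alice@example.com,gpt-4o,West Europe,Success
2024-05-01T11:02:00Z,Autopilot,Chat,Finance,bob@example.com,gpt-4o,West Europe,Failed
2024-05-02T09:30:00Z,Document Understanding,Extraction,HR,N/A,gpt-4o-mini,East US,Blocked
"""

def filter_audit_rows(csv_text, criteria):
    """Return rows matching every criterion, e.g. {"Status": "Failed"}."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

# Narrow the export down to failed operations, mirroring the Status filter.
failed = filter_audit_rows(EXPORT, {"Status": "Failed"})
print(failed[0]["Product"], failed[0]["User"])
```

Passing a dictionary of column/value pairs keeps the helper usable for any combination of the filters mentioned above (date, product, model, status).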
Furthermore, when you select an entry from the Audit table, you can access a Details section for a more in-depth review, which includes all data available in the Audit table, as well as the LLM call source and the exact deployment associated with the call.
When using Gen AI features in UiPath, it's important to understand that audit logs may include Personally Identifiable Information (PII) or Protected Health Information (PHI). These details can appear in both the input prompts sent to Large Language Models (LLMs) and in the responses they generate. This applies to interactions executed through both attended and unattended automations.
The Details section of each audit log entry displays the full input and output content when prompt saving is enabled. This includes metadata such as:
- Action ID
- Tenant
- Product
- Feature
- User
- Timestamp
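To make the entry shape concrete, the minimal sketch below models one audit log entry with the metadata fields listed above. The field names and types are assumptions for illustration; the input and output fields are only populated when prompt saving is enabled.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative model of a single audit log entry. Field names follow the
# metadata listed above; the types and defaults are assumptions.
@dataclass
class AuditEntry:
    action_id: str
    tenant: str
    product: str
    feature: str
    user: str
    timestamp: str
    input_prompt: Optional[str] = None  # full input, present only with prompt saving on
    output: Optional[str] = None        # LLM response, present only with prompt saving on

# Entry recorded while prompt saving is disabled: metadata only.
entry = AuditEntry("a-123", "Finance", "Studio", "GenAI Activities",
                   "alice@example.com", "2024-05-01T10:15:00Z")
print(entry.input_prompt is None and entry.output is None)
```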
Hiding sensitive data for compliance
If your compliance rules require hiding PII and PHI data in audit logs, you can configure the AI Trust Layer policy to control visibility:
- Go to Automation Ops™ > Governance and select or create an AI Trust Layer policy.
- In the Feature Toggles tab, make sure to set the Enable prompts saving for Audit? option to No.
With this setting, neither prompts nor LLM outputs are retained in the audit logs. The log will display the entry metadata, but the input/output content will appear as “Blocked by policy.”
This configuration allows you to hide sensitive content from log entries, meet compliance requirements, and control the visibility of sensitive data while preserving audit capabilities. Once hidden, the prompts cannot be recovered for further use.
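If you need to confirm that the policy is being enforced across exported entries, a simple scan can flag any entry that still contains raw content instead of the "Blocked by policy" placeholder. The entry shape and the `policy_violations` helper below are illustrative assumptions; only the "Blocked by policy" literal comes from the behavior described above.

```python
REDACTED = "Blocked by policy"  # placeholder shown when prompt saving is disabled

# Hypothetical exported entries: a-1 was redacted by the policy, a-2 was not.
entries = [
    {"action_id": "a-1", "input": REDACTED, "output": REDACTED},
    {"action_id": "a-2", "input": "Summarize this invoice", "output": "The invoice totals..."},
]

def policy_violations(entries):
    """Return IDs of entries whose input or output still holds raw content."""
    return [e["action_id"] for e in entries
            if e["input"] != REDACTED or e["output"] != REDACTED]

print(policy_violations(entries))
```

A non-empty result would indicate entries logged before the policy took effect, or a tenant not covered by the AI Trust Layer policy.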