Automation Suite on Linux installation guide
Orchestrator supports the following storage configurations:
- Cluster-level storage configuration
- Orchestrator-level storage configuration
 
We recommend using the cluster-level configuration paired with external storage. This allows Orchestrator to automatically detect and use the settings, eliminating the need for additional configuration.
To configure the Orchestrator-level storage provider, update the Orchestrator parameters by taking the following steps. For more details, see Configuring the Orchestrator parameters.
- In the configuration file, under `orchestrator`, add the `legacy_object_storage` section that contains the storage type and connection string, as shown in the following example:

  ```json
  "orchestrator": {
    "enabled": true,
    "legacy_object_storage": {
      "type": "Azure",
      "connection_string": "DefaultEndpointsProtocol=https;AccountName=usr;AccountKey=...;EndpointSuffix=core.windows.net"
    }
  }
  ```

- Set the desired provider using the `type` parameter. The possible values are `Azure`, `AWS`, and `Minio`. A few considerations:
  - The `type` parameter values are case-sensitive.
  - FileSystem storage configuration is not supported.
- Add the connection string for the storage provider using the `connection_string` parameter. For details on the connection string format, see the `Storage.Location` section of `UiPath.Orchestrator.dll.config`.
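The edit above can also be scripted when you manage `cluster_config.json` programmatically. The following is a minimal sketch, not part of the installer: the helper name `add_legacy_object_storage` is made up, and it only illustrates the shape of the section and the case-sensitive `type` check.

```python
import json

# Allowed values for the "type" parameter; the check is deliberately
# case-sensitive, so "azure" or "minio" would be rejected.
ALLOWED_TYPES = {"Azure", "AWS", "Minio"}

def add_legacy_object_storage(config: dict, storage_type: str, connection_string: str) -> dict:
    """Add the legacy_object_storage section under "orchestrator"."""
    if storage_type not in ALLOWED_TYPES:
        raise ValueError(f"Unsupported storage type: {storage_type!r} (values are case-sensitive)")
    orchestrator = config.setdefault("orchestrator", {"enabled": True})
    orchestrator["legacy_object_storage"] = {
        "type": storage_type,
        "connection_string": connection_string,
    }
    return config

config = {"orchestrator": {"enabled": True}}
add_legacy_object_storage(
    config,
    "Azure",
    "DefaultEndpointsProtocol=https;AccountName=usr;AccountKey=...;EndpointSuffix=core.windows.net",
)
print(json.dumps(config, indent=2))
```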
Orchestrator web browser access to Amazon and Azure storage buckets could be restricted due to the same-origin policy on the provider side. To successfully access the content of such a bucket, you must configure the respective provider to allow cross-origin requests from Orchestrator. For instructions, see CORP/CSP configuration.
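For example, on Amazon S3 the bucket needs a CORS rule allowing the Orchestrator origin. A minimal sketch of the rule payload follows; the FQDN is a placeholder, and applying the document (for instance via `aws s3api put-bucket-cors`) is up to you.

```python
import json

# Placeholder: replace with your Automation Suite FQDN.
orchestrator_origin = "https://automationsuite.example.com"

# Standard S3 CORS configuration shape: one rule allowing browser
# requests that originate from the Orchestrator web app.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": [orchestrator_origin],
            "AllowedMethods": ["GET", "HEAD", "PUT"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }
    ]
}
print(json.dumps(cors_configuration, indent=2))
```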
To use network FileStore, edit the following Orchestrator ArgoCD app parameters:
- `storage.type = smb`
- `storage.smb.domain`
- `storage.smb.password`
- `storage.smb.source`
- `storage.smb.username`
- `storage.smb.size`
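One way to set these parameters in bulk is through the `argocd app set -p` CLI. The sketch below only assembles that invocation; the app name `orchestrator` and all parameter values are placeholders, and you may set the parameters through the ArgoCD UI instead.

```python
# Illustrative values only; substitute your own SMB share details.
smb_params = {
    "storage.type": "smb",
    "storage.smb.domain": "example.local",
    "storage.smb.username": "svc-orchestrator",
    "storage.smb.password": "<password>",
    "storage.smb.source": "//fileserver/orchestrator",
    "storage.smb.size": "10Gi",
}

# Build the "argocd app set" command with one -p flag per parameter.
cmd = ["argocd", "app", "set", "orchestrator"]
for key, value in smb_params.items():
    cmd += ["-p", f"{key}={value}"]
print(" ".join(cmd))
```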
When Orchestrator uses cluster-level storage configuration, files can be uploaded to the storage using the Orchestrator Configurator Tool. To do that, use the following command:
```shell
./orchestrator_configurator.sh -s path
```

The subfolders from the path will be uploaded to the configured storage.
To upload the files to external storage, use the `--use-external-storage` flag.
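As a sanity check before running the upload, you can preview which subfolders would be pushed. This is a standalone sketch: `subfolders_to_upload` is not part of the configurator tool, and the folder names in the demo are made up.

```python
import tempfile
from pathlib import Path

def subfolders_to_upload(path: str) -> list[str]:
    """List the top-level subfolders of `path`; only these are uploaded."""
    return sorted(p.name for p in Path(path).iterdir() if p.is_dir())

# Demo with a throwaway directory layout.
with tempfile.TemporaryDirectory() as root:
    (Path(root) / "Buckets").mkdir()
    (Path(root) / "ExecutionMedia").mkdir()
    (Path(root) / "readme.txt").write_text("files at the root are not subfolders")
    print(subfolders_to_upload(root))  # ['Buckets', 'ExecutionMedia']
```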