- Overview
- Requirements
- Pre-installation
- Preparing the installation
- Installing and configuring the service mesh
- Downloading the installation packages
- Configuring the OCI-compliant registry
- Granting installation permissions
- Installing and configuring the GitOps tool
- Deploying Redis through OperatorHub
- Applying miscellaneous configurations
- Running uipathctl
- Installation
- Post-installation
- Migration and upgrade
- Upgrading Automation Suite
- Migrating standalone products to Automation Suite
- Step 1: Restoring the standalone product database
- Step 2: Updating the schema of the restored product database
- Step 3: Moving the Identity organization data from standalone to Automation Suite
- Step 4: Backing up the platform database in Automation Suite
- Step 5: Merging organizations in Automation Suite
- Step 6: Updating the migrated product connection strings
- Step 7: Migrating standalone Orchestrator
- Step 8: Migrating standalone Insights
- Step 9: Deleting the default tenant
- Performing a single tenant migration
- Migrating between Automation Suite clusters
- Monitoring and alerting
- Cluster administration
- Product-specific configuration
- Troubleshooting

Automation Suite on OpenShift installation guide
Last updated Oct 8, 2025
For instructions on some common operations you might need to perform while installing or configuring Automation Suite, see the following:
When running the Automation Suite prerequisite checks (uipathctl prereq run) on an OpenShift cluster (versions 4.16–4.18), the cluster connectivity check can fail on certain nodes with the following error:
time="2025-07-22T13:51:02Z" level=info msg="Wizard is running on :8080"
Error: listen failed with error: listen tcp :8080: bind: address already in usetime="2025-07-22T13:51:02Z" level=info msg="Wizard is running on :8080"
Error: listen failed with error: listen tcp :8080: bind: address already in use
The error typically occurs only on control-plane nodes.
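Because the wizard binds to port 8080 on the node it runs on, the failure indicates that another process on that node is already listening on the port. If you want to confirm which process holds the port before choosing a workaround, you can inspect the node from a debug pod. The following is a general diagnostic sketch using standard oc and host tooling; it is not part of uipathctl, and <control-plane-node> is a placeholder for the affected node name:

```
# Run ss on the affected node through a debug pod and filter for port 8080.
# The listening process shown in the output is the one conflicting with the
# prerequisite wizard.
oc debug node/<control-plane-node> -- chroot /host ss -ltnp | grep ':8080'
```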
To address this issue, use one of the following workarounds:
- Run the prerequisite checks on worker nodes only.

  To avoid scheduling prerequisite pods on control-plane nodes, apply node labels that target worker nodes only. Update input.json to include the node_labels section, as shown in the following example:

  ```json
  "node_labels": {
      "node-role.kubernetes.io/worker": ""
  }
  ```

  This ensures that all prerequisite checks, including the connectivity check, run only on worker nodes and not on the control-plane nodes where the conflict occurs. For how this section fits into the rest of input.json, see the sketch after this list.
- Skip the connectivity check.

  If targeting specific node roles is not possible (for example, when nodes are configured with multiple roles), run the prerequisite command and exclude the connectivity check:

  ```
  uipathctl prereq run --excluded CONNECTIVITY
  ```
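For the first workaround, the fragment below is a minimal sketch of how the node_labels section might sit in input.json. It assumes node_labels is a top-level key alongside your existing configuration; the fqdn value is a placeholder, and all other keys from your actual file are omitted:

```json
{
  "fqdn": "automationsuite.example.com",
  "node_labels": {
    "node-role.kubernetes.io/worker": ""
  }
}
```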
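Before choosing between the two workarounds, it can help to confirm how roles are assigned on your nodes, because the label-based workaround only helps when the worker nodes are distinct from the control-plane nodes. The following oc command, shown here as a suggested check rather than a required step, lists the nodes that carry the worker role label used in the node_labels example:

```
# Nodes returned here match the selector used in the node_labels example.
# If your control-plane nodes also appear (multi-role clusters), use the
# --excluded CONNECTIVITY workaround instead.
oc get nodes -l node-role.kubernetes.io/worker
```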