- Overview
- Requirements
- Pre-installation
- Preparing the installation
- Installing and configuring the service mesh
- Downloading the installation packages
- Configuring the OCI-compliant registry
- Granting installation permissions
- Installing and configuring the GitOps tool
- Deploying Redis through OperatorHub
- Applying miscellaneous configurations
- Running uipathctl
- Installation
- Post-installation
- Migration and upgrade
- Upgrading Automation Suite
- Migrating standalone products to Automation Suite
- Step 1: Restoring the standalone product database
- Step 2: Updating the schema of the restored product database
- Step 3: Moving the Identity organization data from standalone to Automation Suite
- Step 4: Backing up the platform database in Automation Suite
- Step 5: Merging organizations in Automation Suite
- Step 6: Updating the migrated product connection strings
- Step 7: Migrating standalone Orchestrator
- Step 8: Migrating standalone Insights
- Step 9: Deleting the default tenant
- Performing a single tenant migration
- Migrating between Automation Suite clusters
- Monitoring and alerting
- Cluster administration
- Product-specific configuration
- Troubleshooting

Automation Suite on OpenShift installation guide
In addition to Microsoft SQL Server, the Automation Suite cluster requires a storage component to store files. Depending on the services you choose to deploy, Automation Suite requires an objectstore and block/file storage.
UiPath® platform services
The following services require a storage component. The requirements apply only if you opted to enable these services as part of the Automation Suite installation or later.
| Service | Storage type | Purpose and estimate |
|---|---|---|
| Orchestrator | Objectstore | Typically, a package is 5 MB, and buckets, if any, are smaller than 1 MB. A mature enterprise deploys around 10 GB of packages and 12 GB of queue data. |
| Action Center | Objectstore | Typically, a document takes 0.15 MB, and the forms to fill in take an additional 0.15 KB. In a mature enterprise, this can total 4 GB. |
| Test Manager | Objectstore | Typically, all files and attachments add up to approximately 5 GB. |
| Insights | Blockstore | 2 GB is required for enablement, and the storage footprint grows with the number of dashboards. A well-established, enterprise-scale deployment requires another few GB for all the dashboards. Approximately 10 GB of storage should be sufficient. |
| Apps | Objectstore | Typically, the database takes approximately 5 GB, and a typical complex app consumes about 15 MB. |
| AI Center | Objectstore / Filestore | A typical, established installation consumes 8 GB for five packages and an additional 1 GB for datasets. A pipeline may consume an additional 50 GB of block storage, but only while it is actively running. |
| Document Understanding | Objectstore | In a mature deployment, 12 GB goes to the ML model, 17 GB to the OCR, and 50 GB to all stored documents. |
| Automation Suite Robots | Filestore | Typically, a mature enterprise deploys around 10 GB of packages. |
| Process Mining | Objectstore | The minimal footprint is used only to store the SQL files. Approximately 1 GB of storage should be enough in the beginning. |
Automation Suite supports the following objectstores:
- Azure Blob Storage
- AWS S3 storage
- S3-compatible objectstore. OpenShift provides OpenShift Data Foundation, a Ceph-based, S3-compatible objectstore. To install OpenShift Data Foundation, see Introduction to OpenShift Data Foundation.
Configuring OpenShift Data Foundation
- To create an objectstore bucket on OpenShift Data Foundation (ODF), you must create an `ObjectBucketClaim` for each bucket, corresponding to each product that you plan to install.

  Important: When using ODF as the objectstore on OpenShift cluster versions earlier than 4.19, CORS cannot be configured. This limitation can prevent services from working correctly with ODF buckets. To ensure compatibility, set `"disable_presigned_url": true` in your `input.json` file, as shown in the sketch after this list. If you encounter an error while applying this setting, refer to the Troubleshooting section.

  The following sample shows a valid `ObjectBucketClaim`:

  Note: The configuration we provide in the sample is required only if you create the buckets in OpenShift Data Foundation.

  ```yaml
  apiVersion: objectbucket.io/v1alpha1
  kind: ObjectBucketClaim
  metadata:
    name: BUCKET_NAME
    namespace: <uipath>
  spec:
    bucketName: BUCKET_NAME
    storageClassName: openshift-storage.noobaa.io
  ```

- Applying the manifest creates a secret named `BUCKET_NAME` in the `<uipath>` namespace. The secret contains the `access_key` and the `secret_key` for that bucket. To query the `access_key` and the `secret_key`, run the following commands:

  ```bash
  oc get secret BUCKET_NAME -n <uipath> -o jsonpath={.data.AWS_ACCESS_KEY_ID} | base64 -d; echo
  oc get secret BUCKET_NAME -n <uipath> -o jsonpath={.data.AWS_SECRET_ACCESS_KEY} | base64 -d; echo
  ```

- To find the host or FQDN used to access the bucket, run the following command:

  ```bash
  oc get routes s3 -o jsonpath={.spec.host} -n openshift-storage; echo
  ```
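The `"disable_presigned_url": true` setting from the Important note above goes in your `input.json` file. A minimal sketch, assuming the flag sits alongside the external objectstore settings (the nesting shown here is an assumption; keep the rest of your existing configuration unchanged):

```json
{
  "external_object_storage": {
    "enabled": true,
    "disable_presigned_url": true
  }
}
```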
Configuring the CORS policy
Additionally, you may have to enable the following CORS policy at the storage account or bucket level if you encounter a CORS-related error during the S3 connection from the Automation Suite cluster.
Replace `{{fqdn}}` with the FQDN of the Automation Suite cluster in the following CORS policy.
The following sample shows the CORS policy in JSON format:
```json
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "POST",
            "GET",
            "HEAD",
            "DELETE",
            "PUT"
        ],
        "AllowedOrigins": [
            "https://{{fqdn}}"
        ],
        "ExposeHeaders": [
            "etag",
            "x-amz-server-side-encryption",
            "x-amz-request-id",
            "x-amz-id-2"
        ],
        "MaxAgeSeconds": 3000
    }
]
```
The following sample shows the CORS policy in XML format:
```xml
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>{{fqdn}}</AllowedOrigin>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
        <ExposeHeader>x-amz-request-id</ExposeHeader>
        <ExposeHeader>x-amz-id-2</ExposeHeader>
        <ExposeHeader>etag</ExposeHeader>
    </CORSRule>
</CORSConfiguration>
```
Configuration
To configure the objectstore, see External Objectstore Configuration.
You can allow the installer to create the required containers or buckets if you grant it the necessary permissions. Alternatively, you can provision the required containers or buckets before the installation and provide their information to the installer.
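As a hedged illustration, the following `input.json` fragment sketches what a cluster-level, S3-compatible objectstore entry can look like. The exact key names and required fields are defined in External Objectstore Configuration; the endpoint and credentials below are placeholders, and the field set shown is an assumption:

```json
{
  "external_object_storage": {
    "enabled": true,
    "create_bucket": false,
    "storage_type": "s3",
    "fqdn": "s3-openshift-storage.apps.example.com",
    "port": 443,
    "access_key": "<ACCESS_KEY>",
    "secret_key": "<SECRET_KEY>"
  }
}
```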
Storage requirements
| Storage | Requirement |
|---|---|
| Objectstore | 500 GB |
The size of the objectstore depends on the size of the deployed and running automation. Therefore, it can be challenging to provide an accurate objectstore estimate initially during the installation. You can start with an objectstore size of 350 GB to 500 GB. To understand the usage of the objectstore, see Storage estimate for each Automation Suite component.
- As your automation scales, you may need to account for an increase in your objectstore size.
- If you use buckets created in OpenShift Data Foundation, you must explicitly provision the buckets and provide the details for each product in the `input.json` file. For more information on how to provide bucket information explicitly in the `input.json` file, see the Product-specific configuration section; a shape sketch follows below.
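For illustration only, a per-product bucket entry might take a shape like the following sketch; the exact structure and key names are documented in the Product-specific configuration section, so treat this layout as an assumption:

```json
{
  "orchestrator": {
    "external_object_storage": {
      "bucket_name": "orchestrator-bucket"
    }
  }
}
```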
Block storage must have CSI drivers configured with the Kubernetes storage classes.
The following table provides details of the block storage, storage class, and provisioner:
| Cloud / Kubernetes | Storage | StorageClass | Provisioner |
|---|---|---|---|
| AWS | EBS volumes | | |
| Azure | Azure Managed Disks | Premium LRS disk | |
| OpenShift | OpenShift Data Foundation | | |
Alternatively, you can use any other StorageClass that your storage vendor provides.
Configuration
You can follow the official guide from Red Hat to create a storage class in your OpenShift cluster.
Specify the name of the storage class in the `storage_class` parameter in the `input.json` file.
- In OpenShift, CSI drivers are installed automatically, and the storage class is created while installing OpenShift Data Foundation. If these storage classes are not configured, you must configure them before the Automation Suite installation.
- You must make the storage class for the block storage the default one, as shown in the following example.
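For instance, you can mark the block storage class as the cluster default through the standard Kubernetes annotation. The sketch below assumes the common ODF RBD class name `ocs-storagecluster-ceph-rbd`; substitute the storage class name from your own cluster:

```bash
# Mark the block storage class as the default StorageClass.
# ocs-storagecluster-ceph-rbd is an assumed name; replace it with yours.
oc patch storageclass ocs-storagecluster-ceph-rbd \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```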
Example
The following table shows how to specify the storage class in the `input.json` file during installation:
| Configuration | input.json | StorageClass |
|---|---|---|
| Azure | | |
| AWS | | |
| OpenShift | | |
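As a minimal sketch of the corresponding `input.json` entry, assuming the common ODF RBD storage class name `ocs-storagecluster-ceph-rbd` (substitute the class name from your cluster):

```json
{
  "storage_class": "ocs-storagecluster-ceph-rbd"
}
```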
Storage requirements
| Configuration | Requirement |
|---|---|
| Block storage | 50 GB |
The size of the block store depends on the size of the deployed and running automation. Therefore, it can be challenging to provide an accurate estimate initially during the installation. You can start with a block storage size of 50 GB. To understand the usage of the block store, see Storage estimate for each Automation Suite component.
File storage must have CSI drivers configured with the Kubernetes storage classes.
File storage is required for the components that do not require any replication. However, if you do not have a file system, you can replace file storage with block storage.
| Cloud / Kubernetes | Storage | StorageClass | Provisioner |
|---|---|---|---|
| AWS | EFS | | |
| Azure | Azure Files | azurefile-csi-premium* | |
| OpenShift | OpenShift Data Foundation | | |
* Use the azurefile-csi-premium storage class for Studio Web on AKS. It is recommended to configure ZRS (or replication) for Studio Web storage to ensure high availability.
Alternatively, you can use any other StorageClass that your storage vendor provides.
Configuration
You can follow the official guide from Red Hat to create a storage class in your OpenShift cluster.
Specify the name of the storage class in the `storage_class_single_replica` parameter in the `input.json` file.
In OpenShift, CSI drivers are installed automatically and the storage class is created while installing OpenShift Data Foundation. If the storage class is not configured, you must configure it before the Automation Suite installation.
Example
The following table shows how to specify the storage class in the `input.json` file during installation:
| Configuration | input.json | StorageClass |
|---|---|---|
| Azure | | |
| AWS | | Note: Replace `$(EFS_ID)` with the actual file share ID you created while provisioning the infrastructure. |
| OpenShift | | |
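As a minimal sketch of the corresponding `input.json` entry, assuming the common ODF CephFS storage class name `ocs-storagecluster-cephfs` (substitute the class name from your cluster):

```json
{
  "storage_class_single_replica": "ocs-storagecluster-cephfs"
}
```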
The storage class for the file share must set the required permissions, 700, on directories and files. In addition, the UID and GID must be set to 1000 in Azure, and gidRangeStart and gidRangeEnd must be set to 1000 and 2000, respectively, in AWS.
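To illustrate these requirements on AWS, the following sketch shows an EFS-backed storage class with 700 directory permissions and the 1000 to 2000 GID range. The parameters are standard for the AWS EFS CSI driver, but the class name is a placeholder, and `$(EFS_ID)` remains the file share ID you created while provisioning the infrastructure:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-file-storage            # placeholder name for illustration
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap          # dynamic provisioning through EFS access points
  fileSystemId: $(EFS_ID)           # replace with your EFS file system ID
  directoryPerms: "700"             # required 700 permissions on directories and files
  gidRangeStart: "1000"             # GID range required by Automation Suite
  gidRangeEnd: "2000"
```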
Storage requirements
| Storage | Requirement |
|---|---|
| File storage | 510 GB |
The size of the file store depends on the size of the deployed and running automations. Therefore, it can be challenging to provide an accurate estimate initially, during the installation. However, you should expect approximately 510 GB of storage to be enough to run ten concurrent training pipelines and for Automation Suite Robots. To understand the usage of the file store, see Storage estimate for each Automation Suite component.
As your automation scales, you may need to account for an increase in the size of your file storage.