Automation Suite on OpenShift installation guide


Storage

In addition to Microsoft SQL Server, the Automation Suite cluster requires a storage component to store files. Depending on the services you choose to enable, Automation Suite requires an objectstore and block or file storage.

Storage estimate for each Automation Suite component

UiPath® platform services

The following services require the storage component. They are only necessary if you opted to enable them as part of the Automation Suite installation, or if you enable them later.

| Service | Storage type | Purpose | Estimate |
|---|---|---|---|
| Orchestrator | Objectstore | NuGet automation packages for deployed automations; queues and their data | Typically, a package is 5 MB, and buckets, if any, are less than 1 MB. A mature enterprise deploys around 10 GB of packages and 12 GB of queues. |
| Action Center | Objectstore | Documents stored by the user in document tasks | Typically, a document takes 0.15 MB, and the forms to fill in take an additional 0.15 KB. In a mature enterprise, this can total 4 GB. |
| Test Manager | Objectstore | Attachments and screenshots stored by users | Typically, all files and attachments add up to approximately 5 GB. |
| Insights | Blockstore | Published dashboards and their metadata | 2 GB is required for enablement, and the storage footprint grows with the number of dashboards. A well-established enterprise-scale deployment requires a few additional GB for all the dashboards. Approximately 10 GB of storage should be sufficient. |
| Apps | Objectstore | Attachments uploaded to Apps | Typically, the database takes approximately 5 GB, and a typical complex app consumes about 15 MB. |
| AI Center | Objectstore / Filestore | ML packages; datasets for analysis; training pipelines | A typical, established installation consumes 8 GB for five packages and an additional 1 GB for the datasets. A pipeline may consume an additional 50 GB of block storage, but only while actively running. |
| Document Understanding | Objectstore | ML model; OCR model; stored documents | In a mature deployment, 12 GB goes to the ML model, 17 GB to the OCR model, and 50 GB to all stored documents. |
| Automation Suite Robots | Filestore | Caching the packages required to run an automation | Typically, a mature enterprise deploys around 10 GB of packages. |
| Process Mining | Objectstore | SQL files required to run queries in the SQL warehouse | The minimal footprint is used only to store the SQL files. Approximately 1 GB of storage should be enough at the beginning. |

Objectstore

Automation Suite supports the following objectstores:

  • Azure Blob Storage

  • AWS S3

  • S3-compatible objectstore. OpenShift provides OpenShift Data Foundation, a Ceph-based, S3-compatible objectstore. To install OpenShift Data Foundation, see Introduction to OpenShift Data Foundation.

Configuring OpenShift Data Foundation

  1. To create an objectstore bucket on OpenShift Data Foundation (ODF), you must create an ObjectBucketClaim for each bucket, corresponding to each product that you plan to install.
    Important:
    When using ODF as the object store on OpenShift cluster versions earlier than 4.19, CORS cannot be configured. This limitation can prevent services from working correctly with ODF buckets. To ensure compatibility, set "disable_presigned_url": true in your input.json file.

    If you encounter an error while applying this setting, refer to the Troubleshooting section.
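
    As a minimal illustration, assuming the flag sits at the top level of input.json next to your other cluster settings and using a placeholder FQDN, the setting looks like this:

    {
      "fqdn": "automationsuite.example.com",
      "disable_presigned_url": true
    }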

    The following sample shows a valid ObjectBucketClaim:
    Note: The configuration we provide in the sample is required only if you create the buckets in OpenShift Data Foundation.
    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: BUCKET_NAME
      namespace: <uipath>
    spec:
      bucketName: BUCKET_NAME
      storageClassName: openshift-storage.noobaa.io
  2. Applying the manifest creates a secret named BUCKET_NAME in the <uipath> namespace. The secret contains the access_key and the secret_key for that bucket. To query the access_key and the secret_key, run the following command:
    oc get secret BUCKET_NAME -n <uipath> -o jsonpath={.data.AWS_ACCESS_KEY_ID} | base64 -d; echo
    oc get secret BUCKET_NAME -n <uipath> -o jsonpath={.data.AWS_SECRET_ACCESS_KEY} | base64 -d; echo
  3. To find the host or FQDN to access the bucket, run the following command:
    oc get routes s3 -o jsonpath={.spec.host} -n openshift-storage; echo
Important: ODF includes a NooBaa deployment with pre-defined CPU and memory limits. These limits can become a bottleneck for the Automation Suite and result in S3 request limit issues. To mitigate this risk, you must configure ODF with higher CPU and memory limits for the NooBaa deployment. For details, refer to the Troubleshooting section.
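
One common mitigation is patching the NooBaa custom resource with higher resource limits. The following is only a sketch: the values are illustrative, and the exact field names (such as coreResources) depend on your ODF/NooBaa version, so confirm the procedure in the Troubleshooting section before applying it:

oc patch noobaa/noobaa -n openshift-storage --type merge \
  -p '{"spec":{"coreResources":{"requests":{"cpu":"1","memory":"2Gi"},"limits":{"cpu":"2","memory":"4Gi"}}}}'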

Configuring the CORS policy

Additionally, you may have to enable the following CORS policy at the storage account or bucket level if you encounter CORS-related errors when the Automation Suite cluster connects to S3.

Make sure to replace {{fqdn}} with the FQDN of the Automation Suite cluster in the following CORS policy.

The following sample shows the CORS policy in JSON format:

JSON
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "POST",
            "GET",
            "HEAD",
            "DELETE",
            "PUT"
        ],
        "AllowedOrigins": [
            "https://{{fqdn}}"
        ],
        "ExposeHeaders": [
            "etag",
            "x-amz-server-side-encryption",
            "x-amz-request-id",
            "x-amz-id-2"
        ],
        "MaxAgeSeconds": 3000
    }
]

The following sample shows the CORS policy in XML format:

XML
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>{{fqdn}}</AllowedOrigin>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>x-amz-server-side-encryption</ExposeHeader>
    <ExposeHeader>x-amz-request-id</ExposeHeader>
    <ExposeHeader>x-amz-id-2</ExposeHeader>
    <ExposeHeader>etag</ExposeHeader>
  </CORSRule>
</CORSConfiguration>

Configuration

To configure the objectstore, see External Objectstore Configuration.
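
As an illustration only, an external objectstore section in input.json may look similar to the following sketch. The exact keys are documented in External Objectstore Configuration; the FQDN and credentials shown here are placeholders for the values retrieved in the ODF steps earlier:

"external_object_storage": {
  "enabled": true,
  "create_bucket": false,
  "storage_type": "s3",
  "fqdn": "s3-openshift-storage.apps.example.com",
  "port": 443,
  "access_key": "<AWS_ACCESS_KEY_ID from the bucket secret>",
  "secret_key": "<AWS_SECRET_ACCESS_KEY from the bucket secret>"
}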

The Automation Suite installer can create the containers/buckets if you grant it the required permissions. Alternatively, you can provision the required containers/buckets before installation and pass their information to the installer.

Storage requirements

| Storage | Requirement |
|---|---|
| Objectstore | 500 GB |

The size of the objectstore depends on the size of the deployed and running automation. Therefore, it can be challenging to provide an accurate objectstore estimate initially during the installation. You can start with an objectstore size of 350 GB to 500 GB. To understand the usage of the objectstore, see Storage estimate for each Automation Suite component.

Note:
  • As your automation scales, you may need to account for the increase in your objectstore size.

  • If you use buckets created in OpenShift Data Foundation, then you must explicitly provision the buckets and provide the details for each product in the input.json file.
    For more information on how to provide bucket information explicitly in the input.json file, see the Product-specific configuration section.
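
    For illustration, a per-product override for Orchestrator might look similar to the following sketch; treat the key names as placeholders and confirm them in the Product-specific configuration section:

    "orchestrator": {
      "external_object_storage": {
        "bucket_name": "orchestrator-bucket"
      }
    }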

Block storage

Block storage must have CSI drivers configured with the Kubernetes storage classes.

The following table provides details of the block storage, storage class, and provisioner:

| Cloud / Kubernetes | Storage | StorageClass | Provisioner |
|---|---|---|---|
| AWS | EBS volumes | ebs-sc | ebs.csi.aws.com |
| Azure | Azure Managed Disk (Premium LRS) | managed-premium | disk.csi.azure.com |
| OpenShift | OpenShift Data Foundation | ocs-storagecluster-ceph-rbd | openshift-storage.rbd.csi.ceph.com |

Note:
You are not required to use the storage solutions mentioned in this section. If you use a different storage solution, you must use the corresponding StorageClass that your storage vendor provides.

Configuration

You can follow the official guide from Red Hat to create a storage class in your OpenShift cluster.

You must pass the name of the storage class you created for your cluster to the storage_class parameter in the input.json file.
Note:
  • In OpenShift, CSI drivers are installed automatically, and the storage classes are created while installing OpenShift Data Foundation. If these storage classes are not configured, you must configure them before the Automation Suite installation.

  • You must make the storage class for the block storage the default one, as shown in the following example.
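
    If the storage class already exists without the default annotation, you can mark it as default with a standard Kubernetes patch, for example:

    oc patch storageclass <storage-class-name> \
      -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'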

Example
The following example shows how to configure the storage class and how to provide it in the input.json file during installation:

Azure

input.json:

{
  "storage_class": "managed-premium"
}
StorageClass:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: "true"
    storageclass.kubernetes.io/is-default-class: "true"
  name: managed-premium
parameters:
  cachingmode: ReadOnly
  kind: Managed
  storageaccounttype: Premium_LRS
provisioner: disk.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

AWS

input.json:

{
  "storage_class": "ebs-sc"
}
StorageClass:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

OpenShift

input.json:

{
  "storage_class": "ocs-storagecluster-ceph-rbd"
}
StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ocs-storagecluster-ceph-rbd
  annotations:
    description: 'Provides RWO Filesystem volumes, and RWO and RWX Block volumes'
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: openshift-storage.rbd.csi.ceph.com
parameters:
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  pool: ocs-storagecluster-cephblockpool
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate

Storage requirements

| Storage | Requirement |
|---|---|
| Block storage | 50 GB |

The size of the block store depends on the size of the deployed and running automation. Therefore, it can be challenging to provide an accurate estimate initially during the installation. You can start with a block storage size of 50 GB. To understand the usage of the block store, see Storage estimate for each Automation Suite component.

Note: As your automation scales, you may need to account for the increase in your block storage size.

File storage

File storage must have CSI drivers configured with the Kubernetes storage classes.

File storage is required for components that do not need replication. However, if you do not have a file storage solution, you can replace file storage with block storage.
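
In that case, the storage_class_single_replica parameter in input.json (described later in this section) simply points at your block storage class instead; a sketch using the OpenShift class from the block storage section:

{
  "storage_class_single_replica": "ocs-storagecluster-ceph-rbd"
}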

| Cloud / Kubernetes | Storage | StorageClass | Provisioner |
|---|---|---|---|
| AWS | EFS | efs-sc | efs.csi.aws.com |
| Azure | Azure Files | azurefile-csi-premium* | file.csi.azure.com |
| OpenShift | OpenShift Data Foundation | ocs-storagecluster-cephfs | openshift-storage.cephfs.csi.ceph.com |

* Use the azurefile-csi-premium storage class for Studio Web on AKS.

* It is recommended to configure ZRS (or replication) for Studio Web storage to ensure high availability.

Note:
You are not required to use the storage solutions mentioned in this section. If you use a different storage solution, you must use the corresponding StorageClass that your storage vendor provides.

Configuration

You can follow the official guide from Red Hat to create a storage class in your OpenShift cluster.

You must pass the name of the storage class you created for your cluster to the storage_class_single_replica parameter in the input.json file.
Note:

In OpenShift, CSI drivers are installed automatically and the storage class is created while installing OpenShift Data Foundation. If the storage class is not configured, you must configure it before the Automation Suite installation.

Example
The following example shows how to configure the storage class and how to provide it in the input.json file during installation:

Azure

input.json:

{
  "storage_class_single_replica": "azurefile-csi-premium"
}
StorageClass:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: "true"
  name: azurefile-csi-premium
mountOptions:
- mfsymlinks
- actimeo=30
- nosharesock
- dir_mode=0700
- file_mode=0700
- uid=1000
- gid=1000
- nobrl
- cache=none
parameters:
  skuName: Premium_LRS
provisioner: file.csi.azure.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

AWS

input.json:

{
  "storage_class_single_replica": "efs-sc"
}
StorageClass:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
parameters:
  basePath: /dynamic_provisioning
  directoryPerms: "700"
  fileSystemId: $(EFS_ID)
  gidRangeEnd: "2000"
  gidRangeStart: "1000"
  provisioningMode: efs-ap
provisioner: efs.csi.aws.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
Note:
Replace $(EFS_ID) with the actual EFS file system ID that you created while provisioning the infrastructure.
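
If you are unsure of the file system ID, you can typically look it up with the AWS CLI:

aws efs describe-file-systems --query "FileSystems[*].[FileSystemId,Name]" --output table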

OpenShift

input.json:

{
  "storage_class_single_replica": "ocs-storagecluster-cephfs"
}
StorageClass:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ocs-storagecluster-cephfs
  annotations:
    description: Provides RWO and RWX Filesystem volumes
provisioner: openshift-storage.cephfs.csi.ceph.com
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
Note:

The storage class for the file share must set permissions to 700 for directories and files.

Additionally, in Azure, UID and GID must be set to 1000; in AWS, gidRangeStart and gidRangeEnd must be set to 1000 and 2000, respectively.
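
To verify that your file storage class provisions volumes correctly before installing Automation Suite, you can create a throwaway PVC such as the following sketch (the claim name and namespace are arbitrary) and confirm that it reaches the Bound state with oc get pvc:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-smoke-test
  namespace: default
spec:
  accessModes:
    - ReadWriteMany          # file storage must support RWX
  storageClassName: ocs-storagecluster-cephfs
  resources:
    requests:
      storage: 1Gi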

Storage requirements

| Storage | Requirement |
|---|---|
| File storage | 510 GB |

The size of the file store depends on the size of the deployed and running automation. Therefore, it can be challenging to provide an accurate estimate initially during the installation. However, approximately 510 GB of storage should be enough to run ten concurrent training pipelines and Automation Suite Robots. To understand the usage of the filestore, see Storage estimate for each Automation Suite component.

Note:

As your automation scales, you may need to account for an increase in the size of your file storage.
