
Automation Suite on OpenShift installation guide

Last updated May 5, 2025

Configuring input.json

The input.json file allows you to configure the UiPath® products you want to deploy, the parameters, settings, and preferences applied to the selected products, and the settings of your cloud infrastructure. You must update this file to change the defaults and use any advanced configuration for your cluster.
Note:

Some products might have dependencies. For details, see Cross-product dependencies.

To edit input.json, you can use your favorite text editor on your client machine.
The following table describes the main input.json parameters you must update to properly configure Automation Suite. For a configuration example, refer to input.json example.

General parameters

Parameter

Description

kubernetes_distribution

The Kubernetes distribution you use. For Automation Suite on OpenShift, the value is openshift.

install_type

Determines whether the cluster is deployed in online or offline mode. If not specified, the cluster is deployed in online mode. To deploy the cluster in offline mode, you must explicitly set the value of the install_type parameter to offline.
Possible values: online or offline
Default value: online

registries

The URLs used to pull the Docker images and Helm charts for UiPath® products and Automation Suite.

Default value: registry.uipath.com

istioMinProtocolVersion

The minimum version of the TLS protocol accepted by Istio for secure communications. It can be set to either TLSV1_2 or TLSV1_3.

fqdn

The Fully Qualified Domain Name to access the product. Make sure that Istio Ingress can access the FQDN.

admin_username

The username that you would like to set as an admin for the host organization.

admin_password

The host admin password to be set.

profile

The installation profile. The default value cannot be changed.

  • ha: multi-node HA-ready production profile.

telemetry_optout

Possible values: true or false. Used to opt out of sending telemetry back to UiPath®. The default value is false.
If you want to opt out, set the value to true.

fips_enabled_nodes

Indicate whether you want to enable FIPS 140-2 on the nodes on which you plan to install Automation Suite. Possible values are true and false.

storage_class

Specify the storage class to be used for PV provisioning. This storage class must support multiple replicas for optimum High Availability and possess backup capabilities.

For more information, refer to Block storage section.

storage_class_single_replica

Provide the storage class to be used for PV provisioning. This storage class can have a single replica for the components that do not require High Availability. The storage class can have no backup capabilities.

For more information, refer to File storage section.

The storage_class_single_replica value can be the same as the storage_class value.

exclude_components

Use this parameter to prevent non-essential components from being installed.

For details, see Bring your own components.

namespace

Specify the namespace where you want to install Automation Suite.

argocd.application_namespace

The namespace of the application you plan to install. This should ideally be the same as the namespace where you plan to install Automation Suite.

argocd.project

The ArgoCD project required to deploy Automation Suite. This is only required if you want to use the shared or the global ArgoCD instance instead of a dedicated ArgoCD instance.
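To tie these parameters together, the following is a hedged skeleton of the general section of input.json. Every value is a placeholder: the FQDN and credentials are illustrative, and the storage class names depend on what is available in your OpenShift cluster.

{
  "kubernetes_distribution": "openshift",
  "install_type": "online",
  "fqdn": "automationsuite.example.com",
  "admin_username": "admin",
  "admin_password": "<password>",
  "profile": "ha",
  "telemetry_optout": false,
  "fips_enabled_nodes": false,
  "storage_class": "<block-storage-class>",
  "storage_class_single_replica": "<file-storage-class>",
  "namespace": "uipath",
  "argocd": {
    "application_namespace": "uipath",
    "project": "<argocd-project>"
  }
}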

UiPath® products

You can enable and disable products in Automation Suite at the time of installation and at any point post-installation. For more details on each product configuration, see Managing products.

Orchestrator example:

"orchestrator": {
  "enabled": true,
  "external_object_storage": {
    "bucket_name": "uipath-as-orchestrator"
  },
  "testautomation": {
    "enabled": true
  },
  "updateserver": {
    "enabled": true
  }"orchestrator": {
  "enabled": true,
  "external_object_storage": {
    "bucket_name": "uipath-as-orchestrator"
  },
  "testautomation": {
    "enabled": true
  },
  "updateserver": {
    "enabled": true
  }

Bring your own components

Automation Suite allows you to bring your own Gatekeeper and OPA Policies, Cert Manager, Istio, monitoring, logging components, and more. If you choose to exclude these components, ensure that they are available in your cluster before installing Automation Suite.

The following sample shows a list of excluded components. You can remove the components you would like Automation Suite to provision.

"exclude_components": [
    "argocd",
    "monitoring",
    "istio",
    "logging",
    "gatekeeper",
    "network-policies",
    "velero",
    "alerts",
    "cert-manager",
    "dapr"
],"exclude_components": [
    "argocd",
    "monitoring",
    "istio",
    "logging",
    "gatekeeper",
    "network-policies",
    "velero",
    "alerts",
    "cert-manager",
    "dapr"
],

Excluding Istio

If you bring your own Istio component, make sure to include the gateway_selector labels from your Istio gateway in the input.json file. To find your gateway selector label, take the following steps:
  1. List all the pods in the <istio-system> namespace by running the oc get pods -n <istio-system> command.
  2. Find the pod that corresponds to your Istio gateway deployment and note its gateway selector labels.

Note: If you plan to install the WASM plugin yourself and want to avoid providing the Automation Suite installer with write access to the <istio-system> namespace, you must add the istio-configure component to the exclude_components list.
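For reference, the following is a minimal sketch of how the gateway selector labels could be reflected in input.json. Both the section layout and the istio: ingressgateway label are illustrative assumptions; use the labels you found in the previous steps.

"istio": {
  "gateway_selector": {
    "istio": "ingressgateway"
  }
}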

Excluding Cert Manager

If you choose to bring your own Cert Manager and your TLS certificate is issued by a private or non-public CA, you must manually include both the leaf certificate and the intermediate CA certificates in the TLS certificate file. Public CAs are automatically trusted by client systems, so no further action is required on your part.

Certificate configuration

If no certificate is provided at the time of installation, the installer creates self-issued certificates and configures them in the cluster.

Note:

The certificates can be created at the time of installation only if you grant the Automation Suite installer admin privileges during the installation. If you cannot grant the installer admin privileges, then you must create and manage the certificates yourself.

For details on how to obtain a certificate, see Managing the certificates.

Note:
Make sure to specify the absolute path for the certificate files. Run pwd to print the path of the directory where the files are placed, and append the certificate file name to that path in input.json.

Parameter

Description

server_certificate.ca_cert_file

Absolute path to the Certificate Authority (CA) certificate. This CA is the authority that signs the TLS certificate. A CA bundle must contain only the chain certificates used to sign the TLS certificate. The chain limit is nine certificates.

If you use a self-signed certificate, you must specify the path to rootCA.crt, which you previously created. Leave blank if you want the installer to generate it.

server_certificate.tls_cert_file

Absolute path to the TLS certificate (server.crt is the self-signed certificate). Leave blank if you want the installer to generate it.
Note: If you provide the certificate yourself, the server.crt file must contain the entire chain, as shown in the following example:
-----server cert-----
-----root ca chain-----

server_certificate.tls_key_file

Absolute path to the certificate key (server.key is the self-signed certificate). Leave blank if you want the installer to generate it.

identity_certificate.token_signing_cert_file

Absolute path to the identity token signing certificate used to sign tokens (identity.pfx is the self-signed certificate). Leave blank if you want the installer to generate an identity certificate using the server certificate.

identity_certificate.token_signing_cert_pass

Plain text password set when exporting the identity token signing certificate.

additional_ca_certs

Absolute path to the file containing the additional CA certificates that you want to be trusted by all the services running as part of Automation Suite. All certificates in the file must be in valid PEM format.

For example, you need to provide the file containing the SQL server CA certificate if the certificate is not issued by a public certificate authority.
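As an illustration only, the certificate parameters described above could be filled in as follows. All paths and the password are hypothetical placeholders:

"server_certificate": {
  "ca_cert_file": "/home/admin/certs/rootCA.crt",
  "tls_cert_file": "/home/admin/certs/server.crt",
  "tls_key_file": "/home/admin/certs/server.key"
},
"identity_certificate": {
  "token_signing_cert_file": "/home/admin/certs/identity.pfx",
  "token_signing_cert_pass": "<password>"
},
"additional_ca_certs": "/home/admin/certs/additional-ca.pem"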

Infrastructure prerequisites

You must provide the configuration details of the prerequisites that you configured on Azure or AWS. For input.json parameter requirements, see the following prerequisite sections:

External Objectstore configuration

General configuration

Automation Suite allows you to bring your own external storage provider. You can choose from the following storage providers:

  • Azure
  • AWS
  • S3-compatible

You can configure the external object storage in one of the following ways:

  • during installation;
  • post-installation, using the input.json file.
Note:
  • For Automation Suite to function properly when using pre-signed URLs, you must make sure that your external objectstore is accessible from the Automation Suite cluster, browsers, and all your machines, including workstations and robot machines.
  • Server Side Encryption with Key Management Service (SSE-KMS) can only be enabled on Automation Suite buckets deployed in regions created after January 30, 2014.

    SSE-KMS functionality requires pure SignV4 APIs. Regions created before January 30, 2014 do not use pure SignV4 APIs due to backward compatibility with SignV2. Therefore, SSE-KMS is only functional in regions that use SignV4 for communication. To find out when the various regions were provisioned, refer to the AWS documentation.

If you use a Private Endpoint to access the container, you must add the fqdn parameter in the input.json file and specify the Private Endpoint as its value.
The following table lists out the input.json parameters you can use to configure each provider of external object storage:

Parameter

Azure

AWS

S3-compatible

Description

external_object_storage.enabled

available

available

available

Specify whether you would like to bring your own object store. Possible values: true and false.

external_object_storage.create_bucket

available

available

available

Specify whether you would like to provision the bucket. Possible values: true and false.

external_object_storage.storage_type

available

available

available

Specify the storage provider you would like to configure. The value is case-sensitive. Possible values: azure and s3.
Note: Many S3 objectstores require a CORS policy that allows all traffic from the Automation Suite cluster. You must configure the CORS policy at the objectstore level to allow the FQDN of the cluster.

external_object_storage.fqdn

not available

available

available

Specify the FQDN of the S3 server. Required in the case of AWS instance and non-instance profile.

external_object_storage.port

not available

available

available

Specify the S3 port. Required in the case of AWS instance and non-instance profile.

external_object_storage.region

not available

available

available

Specify the AWS region where buckets are hosted. Required in the case of AWS instance and non-instance profile.

external_object_storage.access_key

not available

available

available

Specify the access key for the S3 account. Only required in the case of the AWS non-instance profile.

external_object_storage.secret_key

not available

available

available

Specify the secret key for the S3 account. Only required in the case of the AWS non-instance profile.

external_object_storage.use_instance_profile

not available

available

available

Specify whether you want to use an instance profile. An AWS Identity and Access Management (IAM) instance profile grants secure access to AWS resources for applications or services running on Amazon Elastic Compute Cloud (EC2) instances. If you opt for AWS S3, an instance profile allows an EC2 instance to interact with S3 buckets without the need for explicit AWS credentials (such as access keys) to be stored on the instance.

external_object_storage.bucket_name_prefix 1

not available

available

available

Indicate the prefix for the bucket names. Optional in the case of the AWS non-instance profile.

external_object_storage.bucket_name_suffix 2

not available

available

available

Indicate the suffix for the bucket names. Optional in the case of the AWS non-instance profile.

external_object_storage.account_key

available

not available

not available

Specify the Azure account key.

external_object_storage.account_name

available

not available

not available

Specify the Azure account name.

external_object_storage.azure_fqdn_suffix

available

not available

not available

Specify the Azure FQDN suffix. Optional parameter.

1 If you plan on disabling pre-signed URL access, note that this configuration is not supported by the following activities that upload or retrieve data from the objectstore:

1, 2 When configuring the external object storage, you must follow the naming rules and conventions from your provider for both bucket_name_prefix and bucket_name_suffix. In addition, the suffix and prefix must have a combined length of no more than 25 characters, and you must not end the prefix or start the suffix with a hyphen (-), as the character is already added for you automatically.

Product-specific configuration

You can use the parameters described in the General configuration section to update the general Automation Suite configuration. In that case, all installed products share the same configuration. If you want to configure one or more products differently, you can override the general configuration: specify the products for which you want to set up external object storage differently, and use the same parameters to define your configuration. All the other installed products continue to inherit the general configuration.

The following example shows how you can override the general configuration for Orchestrator:

"external_object_storage": {
  "enabled": false, // <true/false>
  "create_bucket": true, // <true/false>
  "storage_type": "s3", // <s3,azure,aws>
  "fqdn": "",  // <needed in the case of aws instance and non-instance profile>
  "port": 443, // <needed in the case of aws instance and non-instance profile>
  "region": "", 
  "access_key": "", // <needed in the case of aws non instance profile>
  "secret_key": "", // <needed in the case of aws non instance profile>
  "bucket_name_prefix": "",
  "bucket_name_suffix": "",
  "account_key": "",
  "account_name": "",
  "azure_fqdn_suffix": "core.windows.net",
},

"orchestrator": {
  "external_object_storage": {
    "enabled": false, // <true/false>
    "create_bucket": true, // <true/false>
    "storage_type": "s3", // <s3,azure>
    "fqdn": "",  // <needed in the case of aws instance and non-instance profile>
    "port": 443, // <needed in the case of aws instance and non-instance profile>
    "region": "", 
    "access_key": "", // <needed in case of aws non instance profile>
    "secret_key": "", // <needed in case of aws non instance profile>
    "bucket_name_prefix": "",
    "bucket_name_suffix": "",
    "account_key": "",
    "account_name": "",
    "azure_fqdn_suffix": "core.windows.net",
  }
}"external_object_storage": {
  "enabled": false, // <true/false>
  "create_bucket": true, // <true/false>
  "storage_type": "s3", // <s3,azure,aws>
  "fqdn": "",  // <needed in the case of aws instance and non-instance profile>
  "port": 443, // <needed in the case of aws instance and non-instance profile>
  "region": "", 
  "access_key": "", // <needed in the case of aws non instance profile>
  "secret_key": "", // <needed in the case of aws non instance profile>
  "bucket_name_prefix": "",
  "bucket_name_suffix": "",
  "account_key": "",
  "account_name": "",
  "azure_fqdn_suffix": "core.windows.net",
},

"orchestrator": {
  "external_object_storage": {
    "enabled": false, // <true/false>
    "create_bucket": true, // <true/false>
    "storage_type": "s3", // <s3,azure>
    "fqdn": "",  // <needed in the case of aws instance and non-instance profile>
    "port": 443, // <needed in the case of aws instance and non-instance profile>
    "region": "", 
    "access_key": "", // <needed in case of aws non instance profile>
    "secret_key": "", // <needed in case of aws non instance profile>
    "bucket_name_prefix": "",
    "bucket_name_suffix": "",
    "account_key": "",
    "account_name": "",
    "azure_fqdn_suffix": "core.windows.net",
  }
}

Rotating the blob storage credentials for Process Mining

To rotate the blob storage credentials for Process Mining in Automation Suite, you must update the stored secrets with the new credentials. See Rotating blob storage credentials.

Pre-signed URL configuration

You can use the disable_presigned_url flag to specify whether you would like to disable pre-signed URL access at global level. By default, pre-signed URLs are enabled for the entire platform. The possible values are: true and false.
{
  "disable_presigned_url" : true
}
Note:
  • You can change the default value of this parameter only for new installations. The operation is irreversible and does not apply to an existing cluster.

  • You can apply this configuration only to the entire platform. You cannot override the global configuration at product level.

External OCI-compliant registry configuration

To configure an external OCI-compliant registry, update the following parameters in the input.json file:

Keys

Value

registries.docker.url

Default value: registry.uipath.com

The URL or FQDN of the registry to be used by Automation Suite to host the container images.

registries.docker.username

registries.docker.password

Authentication information to be used for pulling the Docker images from the registry.

If one of the values is found in the input file, you must provide both of them when configuring the external registry.

registries.docker.pull_secret_value

The registry pull secret.

registries.helm.url

Default value: registry.uipath.com

The URL or FQDN of the registry to be used by Automation Suite to host the Helm chart of the service.

registries.helm.username

registries.helm.password

Authentication information to be used for pulling Helm charts from the registry.

If one of the values is found in the input file, you must provide both of them when configuring the external registry.

registry_ca_cert

The location of the CA file corresponding to the certificate configured for the registry.

If the registry is signed by a private certificate authority hosted on your premises, you must provide it to establish the trust.

Note:
You can use different methods to generate the encoded version of the pull_secret_value, including the method that relies on Docker.

The following configuration sample shows a typical OCI-compliant registry setup:

{
    "registries": {
        "docker": {
            "url": "registry.domain.io",
            "username": "username",
            "password": "password",
            "pull_secret_value": "pull-secret-value"
        },
        "helm": {
            "url": "registry.domain.io",
            "username": "username",
            "password": "password"
        },
        "trust": {
            "enabled": true,
            "public_key": "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFNE4vSzNzK2VXUTJHU3NnTTJNcUhsdEplVHlqRQp1UC9sd0dNTnNNUjhUZTI2Ui9TTlVqSVpIdnJKcEx3YmpDc0ZlZUI3L0xZaFFsQzlRdUU1WFhITDZ3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==",
            "detection_mode": false
        }
    },
    "registry_ca_cert": "/etc/pki/ca-trust/extracted/ca-bundle.trust.crt"
}

Custom namespace configuration

You can specify a single custom namespace that replaces the default uipath, uipath-check, and uipath-installer namespaces. To define a custom namespace, provide a value for the optional namespace parameter. If you do not provide a value for the namespace parameter, the default namespaces are used instead.
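For illustration, a minimal sketch with a hypothetical namespace name:

{
  "namespace": "uipath-custom"
}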

Custom namespace label configuration

If you want the UiPath namespaces to contain custom namespace labels, add the following section to the input.json file. Make sure to add your own labels.
"namespace_labels": {
		"install-type": "aksoffline",
		"uipathctlversion": "rc-10_0.1",
		"updatedLabel": "rerun"
	}, "namespace_labels": {
		"install-type": "aksoffline",
		"uipathctlversion": "rc-10_0.1",
		"updatedLabel": "rerun"
	},

Custom node toleration configuration

If you need custom taints and tolerations on the nodes on which you plan to install Automation Suite, update input.json with the following flags. Make sure to provide the appropriate values to the spec field.
"tolerations": [
  {
    "spec": {
      "key": "example-key", 
      "operator": "Exists",
      "value": "optional-value",
      "effect": "NoSchedule"
    }
  },
  {
    "spec": {
      "key": "example-key2", 
      "operator": "Exists",
      "value": "optional-value2",
      "effect": "NoSchedule"
    }
  }
]"tolerations": [
  {
    "spec": {
      "key": "example-key", 
      "operator": "Exists",
      "value": "optional-value",
      "effect": "NoSchedule"
    }
  },
  {
    "spec": {
      "key": "example-key2", 
      "operator": "Exists",
      "value": "optional-value2",
      "effect": "NoSchedule"
    }
  }
]

Installing multiple Automation Suite instances on a single cluster

If you install multiple Automation Suite instances on the same cluster, you need to create a different input.json file for each installation and fill in the JSON values with different resources, such as:
  • FQDN - Each installation must have a different FQDN.
  • Namespace - Each installation must have a different namespace. Provide the value of the namespace where you granted all the permissions for Automation Suite to be installed.
  • argocd.application_namespace - It must be the same as the namespace where Automation Suite has all the permissions to be installed.
  • argocd.project - Default.
  • fabric.redis - Fill in the details about the different installations of redis, whether they are located internally or externally to the OpenShift cluster.
  • SQL - Fill in the details about the different databases, whether they are located internally or externally to the OpenShift cluster.
  • sql_connection_string_template - Fill in the details about the different databases, whether they are located internally or externally to the OpenShift cluster.

These differing resources can be the Redis service, the database, or the namespace. In the Ingress Service Annotation, however, the load balancer IP is a common IP pointing to the same OpenShift cluster.
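As a hedged sketch, the values that must differ in a second instance's input.json could look as follows. The FQDN and namespace are hypothetical, and the Redis and SQL settings, which must also differ, are omitted here:

{
  "fqdn": "asuite-two.example.com",
  "namespace": "uipath-two",
  "argocd": {
    "application_namespace": "uipath-two",
    "project": "default"
  }
}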

Orchestrator-specific configuration

Orchestrator can save robot logs to an Elasticsearch server. You can configure this functionality in the orchestrator.orchestrator_robot_logs_elastic section. If not provided, robot logs are saved to Orchestrator's database.
The following table lists out the orchestrator.orchestrator_robot_logs_elastic parameters:

Parameter

Description

orchestrator_robot_logs_elastic

Elasticsearch configuration.

elastic_uri

The address of the Elasticsearch instance that should be used. It should be provided in the form of a URI. If provided, then username and password are also required.

elastic_auth_username

The Elasticsearch username, used for authentication.

elastic_auth_password

The Elasticsearch password, used for authentication.
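A minimal sketch of this section, assuming a reachable Elasticsearch instance; the URI and credentials are placeholders:

"orchestrator": {
  "enabled": true,
  "orchestrator_robot_logs_elastic": {
    "elastic_uri": "https://elastic.example.com:9200",
    "elastic_auth_username": "elastic-user",
    "elastic_auth_password": "<password>"
  }
}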

Insights-specific configuration

If you enable Insights, you can include an SMTP server configuration that is used to send scheduled and alert emails. If not provided, scheduled and alert emails do not function.

The following table details the insights.smtp_configuration fields:

Parameter

Description

tls_version

Valid values are TLSv1_2, TLSv1_1, and SSLv23. Omit the key altogether if you are not using TLS.

from_email

Address that alert/scheduled emails will be sent from.

host

Hostname of the SMTP server.

port

Port of the SMTP server.

username

Username for SMTP server authentication.

password

Password for SMTP server authentication.

enable_realtime_monitoring

Flag to enable Insights Real-time monitoring. Valid values are true and false. The default value is false.

Example

"insights": {
    "enabled": true,
    "enable_realtime_monitoring": true,
    "smtp_configuration": {
      "tls_version": "TLSv1_2",
      "from_email": "[email protected]",
      "host": "smtp.sendgrid.com",
      "port": 587,
      "username": "login",
      "password": "password123"
    }
  }"insights": {
    "enabled": true,
    "enable_realtime_monitoring": true,
    "smtp_configuration": {
      "tls_version": "TLSv1_2",
      "from_email": "[email protected]",
      "host": "smtp.sendgrid.com",
      "port": 587,
      "username": "login",
      "password": "password123"
    }
  }

Process Mining-specific configuration

If you enable Process Mining, we recommend specifying a secondary SQL Server to act as a data warehouse, separate from the primary Automation Suite SQL Server. The data warehouse SQL Server is under heavy load and can be configured in the processmining section:

Parameter

Description

sql_connection_str

DotNet formatted connection string with database set as a placeholder: Initial Catalog=DB_NAME_PLACEHOLDER.

airflow.metadata_db_connection_str

Note: This applies to the PostgreSQL AutomationSuite_Airflow database and is recommended for version 2024.10.3 or higher.
Sqlalchemy PSYCOPG2 formatted connection string for a custom Airflow metadata database location: PostgreSQL:5432/DB_NAME_PLACEHOLDER.

Example:

postgresql+psycopg2://testadmin:<password>@postgres.company.com:5432/DB_NAME_PLACEHOLDER

Note:
You can also use the metadata_db_connection_str in the processmining section at the global level in the cluster_config.json file to provide the value for the Airflow metadata database. In this case, the airflow.metadata_db_connection_str is optional.
"processmining": { 
    "enabled": true,
    "app_security_mode": "system_managed",
    "airflow": {
      "metadata_db_connection_str": "postgresql+psycopg2://testadmin:<password>@sfdev8454496-postgresql.postgres.database.azure.com:5432/AutomationSuite_Airflow"},"processmining": { 
    "enabled": true,
    "app_security_mode": "system_managed",
    "airflow": {
      "metadata_db_connection_str": "postgresql+psycopg2://testadmin:<password>@sfdev8454496-postgresql.postgres.database.azure.com:5432/AutomationSuite_Airflow"},

sqlalchemy_pyodbc_sql_connection_str

Note:
This is only applicable for the Microsoft SQL Server AutomationSuite_Airflow database and can no longer be used starting from version 2025.10.
Sqlalchemy PYODBC formatted connection string for custom airflow metadata database location: sqlServer:1433/DB_NAME_PLACEHOLDER.

Example:

mssql+pyodbc://testadmin%40myhost:mypassword@myhost:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES

where user is testadmin%40myhost.
Note:
If there is an '@' in the user name, it must be URL-encoded to %40.

Example: (SQL Server setup with Kerberos authentication)

mssql+pyodbc://:@assql2019.autosuitead.local:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES&Trusted_Connection=yes

warehouse.sql_connection_str

DotNet formatted SQL connection string to the processmining data warehouse SQL Server with placeholder for dbname:

Initial Catalog=DB_NAME_PLACEHOLDER.

warehouse.sqlalchemy_pyodbc_sql_connection_str

Sqlalchemy PYODBC formatted SQL connection string to the processmining data warehouse SQL Server with placeholder for dbname:

sqlServer:1433/DB_NAME_PLACEHOLDER.

warehouse.master_sql_connection_str

If the installer creates databases through the sql.create_db: true setting, you must provide a DotNet formatted master SQL connection string for the processmining data warehouse SQL Server. The database in the connection string must be set as master.
Sample Process Mining connection string with PostgreSQL for AutomationSuite_Airflow
"processmining": {
    "enabled": true,
    "app_security_mode": "system_managed",
    "airflow": {
      "metadata_db_connection_str": "postgresql+psycopg2://testadmin:<password>@sfdev8454496-postgresql.postgres.database.azure.com:5432/AutomationSuite_Airflow"
    },
    "warehouse": {
      "sql_connection_str": "Server=tcp:kerberossql.autosuitead.local,1433;Initial Catalog=AutomationSuite_Warehouse;Persist Security Info=False;User Id=testadmin;Password='<password>';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
      "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin:<password>@kerberossql.autosuitead.local:1433/AutomationSuite_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
      "master_sql_connection_str": "Server=tcp:kerberossql.autosuitead.local,1433;Initial Catalog=master;Persist Security Info=False;User Id=testadmin;Password='<password>';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;""processmining": {
    "enabled": true,
    "app_security_mode": "system_managed",
    "airflow": {
      "metadata_db_connection_str": "postgresql+psycopg2://testadmin:<password>@sfdev8454496-postgresql.postgres.database.azure.com:5432/AutomationSuite_Airflow"
    },
    "warehouse": {
      "sql_connection_str": "Server=tcp:kerberossql.autosuitead.local,1433;Initial Catalog=AutomationSuite_Warehouse;Persist Security Info=False;User Id=testadmin;Password='<password>';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
      "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin:<password>@kerberossql.autosuitead.local:1433/AutomationSuite_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
      "master_sql_connection_str": "Server=tcp:kerberossql.autosuitead.local,1433;Initial Catalog=master;Persist Security Info=False;User Id=testadmin;Password='<password>';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"

Sample Process Mining connection string with Microsoft SQL Server for AutomationSuite_Airflow
"processmining": {
    "enabled": true,
    "app_security_mode": "system_managed",
    "warehouse": {
        "sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;User [email protected];Password='password';Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
	    "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:<password>:@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
        "master_sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=master;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
    },
    "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:<password>@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES"
    "sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Metadata;User [email protected];Password='password';Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",  
},"processmining": {
    "enabled": true,
    "app_security_mode": "system_managed",
    "warehouse": {
        "sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;User [email protected];Password='password';Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
	    "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:<password>:@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
        "master_sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=master;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
    },
    "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:<password>@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES"
    "sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Metadata;User [email protected];Password='password';Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",  
},
Note:
When using Kerberos authentication, use the Integrated Security and Trusted_Connection parameters. By setting Integrated Security to true and Trusted_Connection to yes, the credentials of the currently logged-in user are used for the connection. In this case, you do not need to specify a separate username and password.
Important:
If you use Microsoft SQL Server for the AutomationSuite_Airflow database, make sure you meet the following requirements.
  • Make sure that the timezone of the SQL Server machine where the Airflow database is installed is set to UTC.

  • You must use the default server port 1433 for Airflow database connections. Non-standard SQL Server ports are not supported.
Attention:

When configuring the connection strings for the Process Mining data warehouse SQL Server, omit the named instance of the SQL Server.

Named instances of SQL Server cannot operate on the same TCP port. Therefore, the port number alone is sufficient to distinguish between instances.

For example, use tcp:server,1445 instead of tcp:server\namedinstance,1445.
Important:
Note that the names of the template PYODBC connection strings, postgresql_connection_string_template_sqlalchemy_pyodbc (for PostgreSQL) and sql_connection_string_template_sqlalchemy_pyodbc (for MS SQL Server), differ from the PYODBC connection string sqlalchemy_pyodbc_sql_connection_str used when you bring your own database. Likewise, the template SQL connection string sql_connection_string_template differs from sql_connection_str, used when you bring your own database.
Important:
If you bring your own database and configured it using the sql_connection_str and sqlalchemy_pyodbc_sql_connection_str or airflow.metadata_db_connection_str connection strings in the processmining section of the input.json file, the template connection strings sql_connection_string_template and postgresql_connection_string_template_sqlalchemy_pyodbc (for PostgreSQL), respectively sql_connection_string_template_sqlalchemy_pyodbc (for MS SQL Server), are ignored if specified.

Automation Suite Robots-specific configuration

Automation Suite Robots can use package caching to optimize your process runs and allow them to run faster. NuGet packages are fetched from the filesystem instead of being downloaded from the Internet or network. This requires a minimum of 10 GB of additional space, allocated to a folder on the host machine filesystem of the dedicated nodes.

To enable package caching, you need to update the following input.json parameters:

Parameter

Default value

Description

packagecaching

true

When set to true, robots use a local cache for package resolution.

packagecachefolder

/uipath_asrobots_package_cache

The disk location on the serverless agent node where the packages are stored.
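Putting the two parameters together, a minimal sketch that keeps the default cache folder; the assumption that both keys sit at the top level of input.json is illustrative:

{
  "packagecaching": true,
  "packagecachefolder": "/uipath_asrobots_package_cache"
}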

AI Center-specific configuration

For AI Center to function properly, you must configure the aicenter.external_object_storage.port and aicenter.external_object_storage.fqdn parameters in the input.json file.
Note: You must configure the parameters in the aicenter section of the input.json file even if you have configured the external_object_storage section of the file.
The following sample shows a valid input.json configuration for AI Center:
"aicenter": {
  "external_object_storage" {
    "port": 443,
    "fqdn": "s3.us-west-2.amazonaws.com"
  }
},
"external_object_storage": {
  "enabled": true,
  "create_bucket": false,
  "storage_type": "s3", 
  "region": "us-west-2", 
  "use_instance_profile": true
}
..."aicenter": {
  "external_object_storage" {
    "port": 443,
    "fqdn": "s3.us-west-2.amazonaws.com"
  }
},
"external_object_storage": {
  "enabled": true,
  "create_bucket": false,
  "storage_type": "s3", 
  "region": "us-west-2", 
  "use_instance_profile": true
}
...
