
Automation Suite on OpenShift installation guide
Configuring input.json
The input.json file allows you to configure the UiPath® products you want to deploy, the parameters, settings, and preferences applied to the selected products, and the settings of your cloud infrastructure. You must update this file to change the defaults and use any advanced configuration for your cluster.
Some products might have dependencies. For details, see Cross-product dependencies.
To edit input.json, you can use your favorite text editor on your client machine.

The following sections describe the input.json parameters you must update to properly configure Automation Suite. For a configuration example, refer to input.json example.
General parameters | Description
---|---
 | The Kubernetes distribution you use. For Automation Suite on OpenShift, the value is openshift.
 | Determines whether the cluster is deployed in online or offline mode. If not specified, the cluster is deployed in online mode. To deploy the cluster in offline mode, you must explicitly set the install_type parameter to offline. Possible values: online or offline. Default value: online.
 | URLs from which to pull the Docker images and Helm charts for UiPath® products and Automation Suite.
 | The minimum version of the TLS protocol accepted by Istio for secure communication. It can be set to either TLSV1_2 or TLSV1_3.
 | The Fully Qualified Domain Name used to access the product. Make sure that the Istio ingress can access the FQDN.
 | The username that you would like to set as an admin for the host organization.
 | The host admin password to be set.
profile | Default value; not changeable.
 | true or false - used to opt out of sending telemetry back to UiPath®. It is set to false by default. If you want to opt out, set it to true.
 | Indicates whether you want to enable FIPS 140-2 on the nodes on which you plan to install Automation Suite. Possible values: true and false.
storage_class | Specify the storage class to be used for PV provisioning. This storage class must support multiple replicas for optimal High Availability and must have backup capabilities. For more information, refer to the Block storage section.
 | Specify the storage class to be used for PV provisioning of components that do not require High Availability. This storage class can have a single replica and no backup capabilities. For more information, refer to the File storage section. The storage_class_single_replica value can be the same as the storage_class value.
exclude_components | Use this parameter to prevent non-essential components from being installed. For details, see Bring your own components.
namespace | Specify the namespace where you want to install Automation Suite. For details, see Custom namespace configuration.
argocd.application_namespace | The namespace of the application you plan to install. Ideally, this should be the same as the namespace where you plan to install Automation Suite.
 | The ArgoCD project required to deploy Automation Suite. This is only required if you want to use the shared or global ArgoCD instance instead of a dedicated ArgoCD instance.
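To illustrate, a minimal general section of input.json could look like the following sketch. All values are placeholders, and the kubernetes_distribution and storage class names are assumptions, since the table above does not show every key name:

```json
{
  "kubernetes_distribution": "openshift",
  "install_type": "online",
  "fqdn": "automationsuite.example.com",
  "namespace": "uipath",
  "storage_class": "ocs-storagecluster-ceph-rbd",
  "storage_class_single_replica": "ocs-storagecluster-cephfs"
}
```

Compare this sketch against the input.json example referenced above before using it.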
You can enable and disable products in Automation Suite at the time of installation and at any point post-installation. For more details on each product configuration, see Managing products.
Orchestrator example:
"orchestrator": {
"enabled": true,
"external_object_storage": {
"bucket_name": "uipath-as-orchestrator"
},
"testautomation": {
"enabled": true
},
"updateserver": {
"enabled": true
}
"orchestrator": {
"enabled": true,
"external_object_storage": {
"bucket_name": "uipath-as-orchestrator"
},
"testautomation": {
"enabled": true
},
"updateserver": {
"enabled": true
}
Automation Suite allows you to bring your own Gatekeeper and OPA Policies, Cert Manager, Istio, monitoring, logging components, and more. If you choose to exclude these components, ensure that they are available in your cluster before installing Automation Suite.
- For the list of optional components and the responsibility matrix, see Automation Suite stack.
- Make sure to review the compatibility matrix for versions that are validated with Automation Suite.
The following sample shows a list of excluded components. You can remove the components you would like Automation Suite to provision.
"exclude_components": [
"argocd",
"monitoring",
"istio",
"logging",
"gatekeeper",
"network-policies",
"velero",
"alerts",
"cert-manager",
"dapr"
],
"exclude_components": [
"argocd",
"monitoring",
"istio",
"logging",
"gatekeeper",
"network-policies",
"velero",
"alerts",
"cert-manager",
"dapr"
],
If you bring your own Istio, you must provide the gateway_selector labels from your Istio gateway in the input.json file. To find your gateway selector label, take the following steps:

- List all the pods in the <istio-system> namespace by running the oc get pods -n <istio-system> command.
- Find the pod that belongs to your Istio gateway deployment.

If your Istio gateway is not in the <istio-system> namespace, you must add the istio-configure component to the exclude_components list.
If no certificate is provided at the time of installation, the installer creates self-issued certificates and configures them in the cluster.
The certificates can be created at the time of installation only if you grant the Automation Suite installer admin privileges during the installation. If you cannot grant the installer admin privileges, then you must create and manage the certificates yourself.
For details on how to obtain a certificate, see Managing the certificates.
Run pwd to get the path of the directory where the certificate files are placed, and append the certificate file name to build the absolute paths required in input.json.
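For example, a quick way to print the absolute paths to paste into input.json (a sketch; the file names follow the certificate table below, and are placeholders if yours differ):

```shell
# Run from the directory that holds the certificate files.
CERT_DIR="$(pwd)"
echo "$CERT_DIR/rootCA.crt"
echo "$CERT_DIR/server.crt"
echo "$CERT_DIR/server.key"
```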
Parameter | Description
---|---
 | Absolute path to the Certificate Authority (CA) certificate. This CA is the authority that signs the TLS certificate. A CA bundle must contain only the chain certificates used to sign the TLS certificate. The chain limit is nine certificates. If you use a self-signed certificate, you must specify the path to rootCA.crt, which you previously created. Leave blank if you want the installer to generate it.
 | Absolute path to the TLS certificate (server.crt is the self-signed certificate). Leave blank if you want the installer to generate it. Note: If you provide the certificate yourself, the server.crt file must contain the entire chain.
 | Absolute path to the certificate key (server.key is the self-signed certificate). Leave blank if you want the installer to generate it.
 | Absolute path to the identity token signing certificate used to sign tokens (identity.pfx is the self-signed certificate). Leave blank if you want the installer to generate an identity certificate using the server certificate.
 | Plain-text password set when exporting the identity token signing certificate.
 | Absolute path to the file containing the additional CA certificates that you want to be trusted by all the services running as part of Automation Suite. All certificates in the file must be in valid PEM format. For example, you need to provide the file containing the SQL Server CA certificate if that certificate is not issued by a public certificate authority.
For input.json parameter requirements, see the following prerequisite sections:

- SQL database (mandatory)
- Caching (mandatory)
- Storage (mandatory)
- Proxy configuration (optional)
Automation Suite allows you to bring your own external storage provider. You can choose from the following storage providers:
- Azure
- AWS
- S3-compatible
You can configure the external object storage in one of the following ways:

- during installation;
- post-installation, using the input.json file.
- For Automation Suite to function properly when using pre-signed URLs, you must make sure that your external objectstore is accessible from the Automation Suite cluster, browsers, and all your machines, including workstations and robot machines.
- The Server-Side Encryption with Key Management Service (SSE-KMS) can only be enabled on Automation Suite buckets deployed in any region created after January 30, 2014. SSE-KMS functionality requires pure SignV4 APIs. Regions created before January 30, 2014 do not use pure SignV4 APIs due to backward compatibility with SignV2. Therefore, SSE-KMS is only functional in regions that use SignV4 for communication. To find out when the various regions were provisioned, refer to the AWS documentation.
If you access the external object store through a Private Endpoint, update the fqdn parameter in the input.json file and specify the Private Endpoint as its value.

The following table describes the input.json parameters you can use to configure each provider of external object storage:
Parameter | Azure | AWS | S3-compatible | Description
---|---|---|---|---
 | | | | Specify whether you would like to bring your own object store. Possible values: true and false.
 | | | | Specify whether you would like to provision the bucket. Possible values: true and false.
 | | | | Specify the storage provider you would like to configure. The value is case-sensitive. Possible values: azure and s3. Note: Many S3 object stores require CORS to be set for all the traffic from the Automation Suite cluster. You must configure the CORS policy at the object store level to allow the FQDN of the cluster.
 | | | | Specify the FQDN of the S3 server. Required in the case of the AWS instance and non-instance profile.
 | | | | Specify the S3 port. Required in the case of the AWS instance and non-instance profile.
 | | | | Specify the AWS region where buckets are hosted. Required in the case of the AWS instance and non-instance profile.
 | | | | Specify the access key for the S3 account. Only required in the case of the AWS non-instance profile.
 | | | | Specify the secret key for the S3 account. Only required in the case of the AWS non-instance profile.
 | | | | Specify whether you want to use an instance profile. An AWS Identity and Access Management (IAM) instance profile grants secure access to AWS resources for applications or services running on Amazon Elastic Compute Cloud (EC2) instances. If you opt for AWS S3, an instance profile allows an EC2 instance to interact with S3 buckets without the need for explicit AWS credentials (such as access keys) to be stored on the instance.
external_object_storage.bucket_name_prefix 1 | | | | Indicate the prefix for the bucket names. Optional in the case of the AWS non-instance profile.
external_object_storage.bucket_name_suffix 2 | | | | Indicate the suffix for the bucket names. Optional in the case of the AWS non-instance profile.
 | | | | Specify the Azure account key.
 | | | | Specify the Azure account name.
 | | | | Specify the Azure FQDN suffix. Optional parameter.
1 If you plan on disabling pre-signed URL access, note that this configuration is not supported by some activities that upload or retrieve data from the object store.

2 Length constraints apply to bucket_name_prefix and bucket_name_suffix: the suffix and prefix must have a combined length of no more than 25 characters, and you must not end the prefix or start the suffix with a hyphen (-), as we already add that character for you automatically.
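The prefix and suffix constraints above can be checked mechanically. The following sketch is not part of the installer; it is a hypothetical helper you could run before editing input.json:

```python
def validate_bucket_affixes(prefix: str, suffix: str) -> None:
    """Enforce the documented bucket_name_prefix/bucket_name_suffix rules."""
    if len(prefix) + len(suffix) > 25:
        raise ValueError("combined prefix + suffix length must not exceed 25 characters")
    if prefix.endswith("-"):
        raise ValueError("the prefix must not end with a hyphen; it is added automatically")
    if suffix.startswith("-"):
        raise ValueError("the suffix must not start with a hyphen; it is added automatically")

validate_bucket_affixes("prod", "eu01")  # valid: passes silently
```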
You can use the parameters described in the General configuration section to update the general Automation Suite configuration. This means that all installed products would share the same configuration. If you want to configure one or more products differently, you can override the general configuration. You just need to specify the product(s) you want to set up external object storage for differently, and use the same parameters to define your configuration. Note that all the other installed products would continue to inherit the general configuration.
The following example shows how you can override the general configuration for Orchestrator:
"external_object_storage": {
"enabled": false, // <true/false>
"create_bucket": true, // <true/false>
"storage_type": "s3", // <s3,azure,aws>
"fqdn": "", // <needed in the case of aws instance and non-instance profile>
"port": 443, // <needed in the case of aws instance and non-instance profile>
"region": "",
"access_key": "", // <needed in the case of aws non instance profile>
"secret_key": "", // <needed in the case of aws non instance profile>
"bucket_name_prefix": "",
"bucket_name_suffix": "",
"account_key": "",
"account_name": "",
"azure_fqdn_suffix": "core.windows.net",
},
"orchestrator": {
"external_object_storage": {
"enabled": false, // <true/false>
"create_bucket": true, // <true/false>
"storage_type": "s3", // <s3,azure>
"fqdn": "", // <needed in the case of aws instance and non-instance profile>
"port": 443, // <needed in the case of aws instance and non-instance profile>
"region": "",
"access_key": "", // <needed in case of aws non instance profile>
"secret_key": "", // <needed in case of aws non instance profile>
"bucket_name_prefix": "",
"bucket_name_suffix": "",
"account_key": "",
"account_name": "",
"azure_fqdn_suffix": "core.windows.net",
}
}
"external_object_storage": {
"enabled": false, // <true/false>
"create_bucket": true, // <true/false>
"storage_type": "s3", // <s3,azure,aws>
"fqdn": "", // <needed in the case of aws instance and non-instance profile>
"port": 443, // <needed in the case of aws instance and non-instance profile>
"region": "",
"access_key": "", // <needed in the case of aws non instance profile>
"secret_key": "", // <needed in the case of aws non instance profile>
"bucket_name_prefix": "",
"bucket_name_suffix": "",
"account_key": "",
"account_name": "",
"azure_fqdn_suffix": "core.windows.net",
},
"orchestrator": {
"external_object_storage": {
"enabled": false, // <true/false>
"create_bucket": true, // <true/false>
"storage_type": "s3", // <s3,azure>
"fqdn": "", // <needed in the case of aws instance and non-instance profile>
"port": 443, // <needed in the case of aws instance and non-instance profile>
"region": "",
"access_key": "", // <needed in case of aws non instance profile>
"secret_key": "", // <needed in case of aws non instance profile>
"bucket_name_prefix": "",
"bucket_name_suffix": "",
"account_key": "",
"account_name": "",
"azure_fqdn_suffix": "core.windows.net",
}
}
To rotate the blob storage credentials for Process Mining in Automation Suite, you must update the stored secrets with the new credentials. See Rotating blob storage credentials.
Use the disable_presigned_url flag to specify whether you would like to disable pre-signed URL access at the global level. By default, pre-signed URLs are enabled for the entire platform. The possible values are true and false.
{
"disable_presigned_url" : true
}
- You can change the default value of this parameter only for new installations. The operation is irreversible and does not apply to an existing cluster.
- You can apply this configuration only to the entire platform. You cannot override the global configuration at the product level.
To configure an external OCI-compliant registry, update the following parameters in the input.json file:
Keys | Description
---|---
 | The URL or FQDN of the registry to be used by Automation Suite to host the container images. Default value: registry.uipath.com.
 | Authentication information to be used for pulling the Docker images from the registry. If one of the values is found in the input file, you must provide both of them when configuring the external registry.
 | The registry pull secret.
 | The URL or FQDN of the registry to be used by Automation Suite to host the Helm chart of the service. Default value: registry.uipath.com.
 | Authentication information to be used for pulling Helm charts from the registry. If one of the values is found in the input file, you must provide both of them when configuring the external registry.
registry_ca_cert | The location of the CA file corresponding to the certificate configured for the registry. If the registry is signed by a private certificate authority hosted on your premises, you must provide it to establish trust.
You can generate the pull_secret_value in multiple ways, including by using Docker.
The following configuration sample shows a typical OCI-compliant registry setup:
{
"registries": {
"docker": {
"url": "registry.domain.io",
"username": "username",
"password": "password",
"pull_secret_value": "pull-secret-value"
},
"helm": {
"url": "registry.domain.io",
"username": "username",
"password": "password"
},
"trust": {
"enabled": true,
"public_key": "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFNE4vSzNzK2VXUTJHU3NnTTJNcUhsdEplVHlqRQp1UC9sd0dNTnNNUjhUZTI2Ui9TTlVqSVpIdnJKcEx3YmpDc0ZlZUI3L0xZaFFsQzlRdUU1WFhITDZ3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0tCg==",
"detection_mode": false
}
},
"registry_ca_cert": "/etc/pki/ca-trust/extracted/ca-bundle.trust.crt"
}
By default, Automation Suite uses the uipath, uipath-check, and uipath-installer namespaces. To define a custom namespace, provide a value for the optional namespace parameter. If you do not provide a value for the namespace parameter, the default namespaces are used instead.
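For example, a sketch of the optional namespace parameter in input.json (the value is an illustrative placeholder):

```json
{
  "namespace": "uipath-custom"
}
```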
To apply custom labels to the Automation Suite namespaces, add the following section to the input.json file. Make sure to add your own labels.
"namespace_labels": {
"install-type": "aksoffline",
"uipathctlversion": "rc-10_0.1",
"updatedLabel": "rerun"
},
"namespace_labels": {
"install-type": "aksoffline",
"uipathctlversion": "rc-10_0.1",
"updatedLabel": "rerun"
},
To configure custom node tolerations, update input.json with the following flags. Make sure to provide the appropriate values in the spec field.
"tolerations": [
{
"spec": {
"key": "example-key",
"operator": "Exists",
"value": "optional-value",
"effect": "NoSchedule"
}
},
{
"spec": {
"key": "example-key2",
"operator": "Exists",
"value": "optional-value2",
"effect": "NoSchedule"
}
}
]
"tolerations": [
{
"spec": {
"key": "example-key",
"operator": "Exists",
"value": "optional-value",
"effect": "NoSchedule"
}
},
{
"spec": {
"key": "example-key2",
"operator": "Exists",
"value": "optional-value2",
"effect": "NoSchedule"
}
}
]
To install multiple Automation Suite instances on a single cluster, create a separate input.json file for each installation and fill in the JSON values with different resources, such as:

- FQDN - Each installation must have a different FQDN.
- Namespace - Each installation must have a different namespace. Provide the value of the namespace where you granted all the permissions for Automation Suite to be installed.
- argocd.application_namespace - It must be the same as the namespace where Automation Suite has all the permissions to be installed.
- argocd.project - Default.
- fabric.redis - Fill in the details about the different installations of Redis, whether they are located internally or externally to the OpenShift cluster.
- SQL - Fill in the details about the different databases, whether they are located internally or externally to the OpenShift cluster.
- sql_connection_string_template - Fill in the details about the different databases, whether they are located internally or externally to the OpenShift cluster.
These different resources can be the Redis service, the database, or the namespace. In Ingress Service Annotation, however, the load balancer IP is going to be a common IP pointing to the same OpenShift cluster.
To save robot logs to Elasticsearch, configure the orchestrator.orchestrator_robot_logs_elastic section. If not provided, robot logs are saved to Orchestrator's database.

The following table describes the orchestrator.orchestrator_robot_logs_elastic parameters:
Parameter | Description
---|---
orchestrator_robot_logs_elastic | Elasticsearch configuration.
 | The address of the Elasticsearch instance that should be used. It should be provided in the form of a URI. If provided, then username and password are also required.
 | The Elasticsearch username, used for authentication.
 | The Elasticsearch password, used for authentication.
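Based on the table above, a configuration sketch might look like the following. The elastic_uri, elastic_auth_username, and elastic_auth_password key names are assumptions for the URI, username, and password fields, since the table does not show them; all values are placeholders:

```json
"orchestrator": {
  "orchestrator_robot_logs_elastic": {
    "elastic_uri": "https://elastic.example.com:9200",
    "elastic_auth_username": "elastic-user",
    "elastic_auth_password": "elastic-password"
  }
}
```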
If you enable Insights, you can include an SMTP server configuration that is used to send scheduled and alert emails. If not provided, scheduled and alert emails do not function.

The following table describes the insights.smtp_configuration fields:
Parameter | Description
---|---
 | Valid values are TLSv1_2, TLSv1_1, SSLv23. Omit the key altogether if not using TLS.
 | The address that alert and scheduled emails are sent from.
 | The hostname of the SMTP server.
 | The port of the SMTP server.
 | The username for SMTP server authentication.
 | The password for SMTP server authentication.
enable_realtime_monitoring | Flag to enable Insights real-time monitoring. Valid values are true and false. Default value is false.
|
Example
"insights": {
"enabled": true,
"enable_realtime_monitoring": true,
"smtp_configuration": {
"tls_version": "TLSv1_2",
"from_email": "[email protected]",
"host": "smtp.sendgrid.com",
"port": 587,
"username": "login",
"password": "password123"
}
}
"insights": {
"enabled": true,
"enable_realtime_monitoring": true,
"smtp_configuration": {
"tls_version": "TLSv1_2",
"from_email": "[email protected]",
"host": "smtp.sendgrid.com",
"port": 587,
"username": "login",
"password": "password123"
}
}
To configure Process Mining, update the following parameters in the processmining section:
Parameter | Description
---|---
 | DotNet-formatted connection string with the database set as a placeholder: Initial Catalog=DB_NAME_PLACEHOLDER.
 | Note: This applies to the PostgreSQL AutomationSuite Airflow database and is recommended for version 2024.10.3 or higher. SQLAlchemy PSYCOPG2-formatted connection string for a custom Airflow metadata database location: PostgreSQL:5432/DB_NAME_PLACEHOLDER. Note: You can also use metadata_db_connection_str in the processmining section at the global level in cluster_config.json to provide the value for the Airflow metadata database. In this case, airflow.metadata_db_connection_str is optional.
 | Note: This is only applicable to the Microsoft SQL Server AutomationSuite Airflow database and can no longer be used starting from version 2025.10. SQLAlchemy PYODBC-formatted connection string for a custom Airflow metadata database location: sqlServer:1433/DB_NAME_PLACEHOLDER. Note: If there is an '@' in the user name, it must be URL-encoded to %40; for example, the user testadmin@myhost becomes testadmin%40myhost.
 | DotNet-formatted SQL connection string to the Process Mining data warehouse SQL Server, with a placeholder for the database name: Initial Catalog=DB_NAME_PLACEHOLDER.
 | SQLAlchemy PYODBC-formatted SQL connection string to the Process Mining data warehouse SQL Server, with a placeholder for the database name: sqlServer:1433/DB_NAME_PLACEHOLDER.
 | If the installer is creating databases through the sql.create_db: true setting, a DotNet-formatted master SQL connection string must be provided for the Process Mining data warehouse SQL Server. The database in the connection string must be set to master.
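The '@'-encoding rule for SQLAlchemy user names can be reproduced with Python's standard library (a sketch; the user name is illustrative):

```python
from urllib.parse import quote

# An '@' inside the user name of a SQLAlchemy URL must be percent-encoded,
# otherwise it is mistaken for the credentials/host separator.
user = "testadmin@myhost"
encoded = quote(user, safe="")
print(encoded)  # testadmin%40myhost
```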
Example for a PostgreSQL AutomationSuite_Airflow database:

"processmining": {
    "enabled": true,
    "app_security_mode": "system_managed",
    "airflow": {
        "metadata_db_connection_str": "postgresql+psycopg2://testadmin:<password>@sfdev8454496-postgresql.postgres.database.azure.com:5432/AutomationSuite_Airflow"
    },
    "warehouse": {
        "sql_connection_str": "Server=tcp:kerberossql.autosuitead.local,1433;Initial Catalog=AutomationSuite_Warehouse;Persist Security Info=False;User Id=testadmin;Password='<password>';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
        "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin:<password>@kerberossql.autosuitead.local:1433/AutomationSuite_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
        "master_sql_connection_str": "Server=tcp:kerberossql.autosuitead.local,1433;Initial Catalog=master;Persist Security Info=False;User Id=testadmin;Password='<password>';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
    }
}
"processmining": {
"enabled": true,
"app_security_mode": "system_managed",
"airflow": {
"metadata_db_connection_str": "postgresql+psycopg2://testadmin:<password>@sfdev8454496-postgresql.postgres.database.azure.com:5432/AutomationSuite_Airflow"
},
"warehouse": {
"sql_connection_str": "Server=tcp:kerberossql.autosuitead.local,1433;Initial Catalog=AutomationSuite_Warehouse;Persist Security Info=False;User Id=testadmin;Password='<password>';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin:<password>@kerberossql.autosuitead.local:1433/AutomationSuite_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
"master_sql_connection_str": "Server=tcp:kerberossql.autosuitead.local,1433;Initial Catalog=master;Persist Security Info=False;User Id=testadmin;Password='<password>';MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
Example for a Microsoft SQL Server AutomationSuite_Airflow database:

"processmining": {
    "enabled": true,
    "app_security_mode": "system_managed",
    "sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Metadata;User [email protected];Password='password';Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
    "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:<password>@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
    "warehouse": {
        "sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;User [email protected];Password='password';Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
        "sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:<password>@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
        "master_sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=master;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
    }
}
"processmining": {
"enabled": true,
"app_security_mode": "system_managed",
"warehouse": {
"sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Warehouse;User [email protected];Password='password';Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:<password>:@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_ProcessMining_Warehouse?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES",
"master_sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=master;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;"
},
"sqlalchemy_pyodbc_sql_connection_str": "mssql+pyodbc://testadmin%40sfdev4515230-sql.database.windows.net:<password>@sfdev4515230-sql.database.windows.net:1433/AutomationSuite_Airflow?driver=ODBC+Driver+17+for+SQL+Server&TrustServerCertificate=YES&Encrypt=YES"
"sql_connection_str": "Server=tcp:sfdev4515230-sql.database.windows.net,1433;Initial Catalog=AutomationSuite_ProcessMining_Metadata;User [email protected];Password='password';Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;Max Pool Size=100;",
},
To use the credentials of the currently logged-in user for the connection, use the Integrated Security and Trusted_Connection parameters: set Integrated Security to true and Trusted_Connection to yes. In this case, you do not need to specify a separate username and password.
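As an illustrative sketch (server and database names are placeholders), a DotNet-formatted connection string using integrated authentication could look like:

```
Server=tcp:sqlserver.example.local,1433;Initial Catalog=AutomationSuite_Airflow;Integrated Security=true;Persist Security Info=False;Encrypt=True;TrustServerCertificate=True;Connection Timeout=30;
```

In ODBC-style connection strings, Trusted_Connection=yes plays the equivalent role.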
When setting up the AutomationSuite_Airflow database, make sure you meet the following requirements:

- Make sure that the timezone of the SQL Server machine where the Airflow database is installed is set to UTC.
- You must use the default server port 1433 for Airflow database connections. Non-standard SQL Server ports are not supported.
When configuring the connection strings for the Process Mining data warehouse SQL Server, omit the named instance of the SQL Server. Named instances of SQL Server cannot operate on the same TCP port; therefore, the port number alone is sufficient to distinguish between instances. For example, use tcp:server,1445 instead of tcp:server\namedinstance,1445.
Note that the names of the template PYODBC connection strings, postgresql_connection_string_template_sqlalchemy_pyodbc (for PostgreSQL) and sql_connection_string_template_sqlalchemy_pyodbc (for MS SQL Server), differ from the name of the PYODBC connection string sqlalchemy_pyodbc_sql_connection_str used when you bring your own database. Similarly, the template SQL connection string sql_connection_string_template differs from sql_connection_str used when you bring your own database.

If you use the sql_connection_str and sqlalchemy_pyodbc_sql_connection_str or airflow.metadata_db_connection_str connection strings in the processmining section of the input.json file, the template connection strings sql_connection_string_template and postgresql_connection_string_template_sqlalchemy_pyodbc (for PostgreSQL) or sql_connection_string_template_sqlalchemy_pyodbc (for MS SQL Server) are ignored, if specified.
Automation Suite Robots can use package caching to optimize your process runs and allow them to run faster. NuGet packages are fetched from the filesystem instead of being downloaded from the Internet or network. This requires an additional minimum of 10 GB of space, allocated to a folder on the host machine filesystem of the dedicated nodes.

To enable package caching, update the following input.json parameters:
Parameter | Default value | Description
---|---|---
 | | When set to true, robots use a local cache for package resolution.
 | | The disk location on the serverless agent node where the packages are stored.
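A configuration sketch based on the table above. Both the section name asrobots and the key names packagecaching and packagecachefolder are assumptions, since the table does not show them; verify them against your input.json example before use:

```json
"asrobots": {
  "packagecaching": true,
  "packagecachefolder": "/uipath_asrobots_package_cache"
}
```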
To configure AI Center with an external object store, set the aicenter.external_object_storage.port and aicenter.external_object_storage.fqdn parameters in the input.json file. You must configure these parameters in the aicenter section of the input.json file even if you have configured the external_object_storage section of the file.

The following sample shows a valid input.json configuration for AI Center:

"aicenter": {
"external_object_storage": {
"port": 443,
"fqdn": "s3.us-west-2.amazonaws.com"
}
},
"external_object_storage": {
"enabled": true,
"create_bucket": false,
"storage_type": "s3",
"region": "us-west-2",
"use_instance_profile": true
}
...
"aicenter": {
"external_object_storage" {
"port": 443,
"fqdn": "s3.us-west-2.amazonaws.com"
}
},
"external_object_storage": {
"enabled": true,
"create_bucket": false,
"storage_type": "s3",
"region": "us-west-2",
"use_instance_profile": true
}
...