Networking
You must provision and configure Azure or AWS networking resources to ensure that Automation Suite on your cluster has connectivity and access to the cloud infrastructure prerequisites (e.g., storage, database, cache, and DNS). Depending on your networking architecture, this may include configuring VNets/VPCs, DNS, subnets, NSGs/security groups, NAT gateways, elastic IPs, and internet gateways. For details, see Deployment scenarios.
Note that, depending on workload scaling, additional replicas might be needed. By default, HA mode requires two replicas, and the count can grow to ten or more. Make sure your network supports this scaling level.
You can use any CNI as long as pods can communicate with one another. Our recommendations are Azure CNI or Calico.
Cloud CNIs such as Azure CNI and Amazon VPC CNI require special consideration because they do not use an internal or private pod networking subnet: pod IPs are assigned directly from your VNet/VPC subnets. The number of pods required for Automation Suite depends on your product selection and workload scaling. For instance, a deployment with all services enabled and at high utilization might need over 400 IPs to support the scaling requirements. Because of this, we recommend allocating a CIDR range of at least /23, which provides 512 addresses.
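As an illustration, here is a minimal sketch of carving out a dedicated /23 subnet with the AWS CLI; the VPC ID, CIDR block, and availability zone are placeholders you must replace with your own values:
# Allocate a /23 subnet (512 addresses) for pod IPs consumed by the VPC CNI
aws ec2 create-subnet \
--vpc-id <VPC_ID> \
--cidr-block 10.0.2.0/23 \
--availability-zone <AZ>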
Automation Suite does not support the IPv6 internet protocol.
If you have a custom ingress controller (NGINX), refer to Configuring NGINX ingress and skip the rest of the page.
Automation Suite provisions a load balancer on your behalf during installation. The load balancer must be assigned public or private IP addresses to route the incoming FQDN requests. You have two options to configure the load balancer:
- Preallocated IPs: Allocate public or private IPs for the load balancer, configure the DNS records to map the FQDNs to these IPs, and provide these IPs as part of the ingress section of input.json.
- Dynamically allocated IPs: If you do not provide an IP address, Automation Suite dynamically allocates IPs from the cluster subnet to the load balancer.
The network security groups on the load balancer must allow HTTPS traffic from end clients via port 443. By default, we configure the load balancer to conduct regular TCP health checks.
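For example, a hedged sketch of allowing inbound HTTPS on an AWS security group; the group ID and client CIDR are placeholders:
# Allow end clients to reach the load balancer over HTTPS
aws ec2 authorize-security-group-ingress \
--group-id <SECURITY_GROUP_ID> \
--protocol tcp \
--port 443 \
--cidr <CLIENT_CIDR>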
If you use your own ingress controller, such as NGINX, make sure you meet the network requirements documented in Configuring NGINX ingress controller. When Istio deploys an NLB, it typically creates three listeners, on ports 80, 443, and 15021. This is a typical setup; your actual requirements may differ based on your exact circumstances, so adjust as needed.
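To check which listeners your gateway actually exposes, you can inspect the Istio ingress gateway service; this assumes the default istio-system namespace and service name:
# List the name and port of each listener on the ingress gateway service
kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.port}{"\n"}{end}'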
For a list of service annotations in EKS, see the AWS Load Balancer documentation.
For a list of service annotations in AKS, see the Azure Load Balancer documentation.
You provide the service annotations in the ingress.service_annotations section of input.json. You must deploy the AWS Load Balancer Controller on your EKS cluster prior to installation for the examples below to work properly.
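A minimal sketch of deploying the controller with Helm, based on the chart's public installation steps; this assumes you have already created the IAM role and the aws-load-balancer-controller service account (IRSA), and the cluster name is a placeholder:
helm repo add eks https://aws.github.io/eks-charts
helm repo update
# Install the controller into kube-system, reusing the preprovisioned service account
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=<CLUSTER_NAME> \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller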
The following example shows how to allocate Elastic IPs from AWS and provision a public load balancer. If you use this example as a starting point for your configuration, make sure to replace the IPs with actual values.
"ingress": {
"service_annotations": {
"service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "ssl",
"service.beta.kubernetes.io/aws-load-balancer-eip-allocations": "<elastic_ip_id_0>,<elastic_ip_id_1>",
"service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip",
"service.beta.kubernetes.io/aws-load-balancer-scheme": "internet-facing",
"service.beta.kubernetes.io/aws-load-balancer-type": "nlb"
}
}
"ingress": {
"service_annotations": {
"service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "ssl",
"service.beta.kubernetes.io/aws-load-balancer-eip-allocations": "<elastic_ip_id_0>,<elastic_ip_id_1>",
"service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip",
"service.beta.kubernetes.io/aws-load-balancer-scheme": "internet-facing",
"service.beta.kubernetes.io/aws-load-balancer-type": "nlb"
}
}
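To obtain the allocation IDs used in the example, you can allocate Elastic IPs with the AWS CLI; a sketch, one allocation per public subnet:
# Prints the AllocationId (eipalloc-...) to pass to aws-load-balancer-eip-allocations
aws ec2 allocate-address --domain vpc --query AllocationId --output text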
The following example shows how to allocate private IPs to an internal load balancer from the EKS cluster subnets. If you use this example as a starting point for your configuration, make sure to update the IPs and subnets with actual values.
"ingress": {
"service_annotations": {
"service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "ssl",
"service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip",
"service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses":"<IP_0>,<IP_1>",
"service.beta.kubernetes.io/aws-load-balancer-subnets": "<SUBNET_ID_0>,<SUBNET_ID_1>",
"service.beta.kubernetes.io/aws-load-balancer-scheme": "internal",
"service.beta.kubernetes.io/aws-load-balancer-type": "nlb"
"service.beta.kubernetes.io/aws-load-balancer-target-group-attributes": "preserve_client_ip.enabled=false"
}
}
"ingress": {
"service_annotations": {
"service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "ssl",
"service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip",
"service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses":"<IP_0>,<IP_1>",
"service.beta.kubernetes.io/aws-load-balancer-subnets": "<SUBNET_ID_0>,<SUBNET_ID_1>",
"service.beta.kubernetes.io/aws-load-balancer-scheme": "internal",
"service.beta.kubernetes.io/aws-load-balancer-type": "nlb"
"service.beta.kubernetes.io/aws-load-balancer-target-group-attributes": "preserve_client_ip.enabled=false"
}
}
The IPs and subnets must match: <IP_0> must be in <SUBNET_ID_0>, and <IP_1> in <SUBNET_ID_1>.
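To choose private IPs that fall inside the right subnets, you can list the subnet IDs and their CIDR blocks; a sketch assuming the AWS CLI and a placeholder VPC ID:
# Show each subnet's ID and CIDR so you can pick matching private IPs
aws ec2 describe-subnets \
--filters Name=vpc-id,Values=<VPC_ID> \
--query 'Subnets[].{SubnetId:SubnetId,Cidr:CidrBlock}'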
The following example shows how to allocate public IPs from Azure and provision a public load balancer. If you use this example as a starting point for your configuration, make sure to update the IPs with actual values.
...
"ingress": {
"service_annotations": {
"service.beta.kubernetes.io/azure-load-balancer-internal": "false",
"service.beta.kubernetes.io/azure-load-balancer-ipv4": "<IP>"
}
}
...
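A sketch of creating a static public IP with the Azure CLI; the resource group and name are placeholders, and the IP must live where the AKS cluster identity can use it (typically the node resource group):
az network public-ip create \
--resource-group <NODE_RESOURCE_GROUP> \
--name <PUBLIC_IP_NAME> \
--sku Standard \
--allocation-method Static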
The following example shows how to allocate private IPs to an internal load balancer from the AKS cluster subnets. If you use this example as a starting point for your configuration, make sure to update the IPs and subnets with actual values.
...
"ingress": {
"service_annotations": {
"service.beta.kubernetes.io/azure-load-balancer-internal": "true",
"service.beta.kubernetes.io/azure-load-balancer-ipv4": "<IP>",
"service.beta.kubernetes.io/azure-load-balancer-internal-subnet": "<SUBNET_0>", "<SUBNET_1>"
}
}
...
The IP and subnet must match: <IP> must belong to <SUBNET>.
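After installation, you can confirm which address the internal load balancer received; this assumes the default Istio ingress gateway service and namespace:
kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'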
Ensure that the DNS records are configured to map the following UiPath® FQDNs to the load balancer:
- FQDN
- alm.FQDN
- monitoring.FQDN
- insights.FQDN (if installing UiPath Insights)
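To spot-check the mapping, resolve each record before installation; a sketch using dig, with automationsuite.example.com as a hypothetical FQDN:
dig +short automationsuite.example.com
dig +short alm.automationsuite.example.com
dig +short monitoring.automationsuite.example.com
dig +short insights.automationsuite.example.com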
The FQDN mapping is one of the prerequisite checks that run before installation. If you do not provide an IP address or have not yet configured the FQDN mapping, the check fails.
If you do not provide an IP address in input.json, Automation Suite dynamically allocates private IPs from the worker node subnets. In this scenario, use the following service annotations:
...
"ingress": {
"service_annotations": {
"service.beta.kubernetes.io/aws-load-balancer-backend-protocol": "ssl",
"service.beta.kubernetes.io/aws-load-balancer-nlb-target-type": "ip",
"service.beta.kubernetes.io/aws-load-balancer-scheme": "internal",
"service.beta.kubernetes.io/aws-load-balancer-type": "nlb",
"service.beta.kubernetes.io/aws-load-balancer-internal": true
}
}
...
...
"ingress": {
"service_annotations": {
"service.beta.kubernetes.io/azure-load-balancer-internal": "true"
}
}
...
In this scenario, run the installer as follows:
1. Run the installer only until the load balancer is provisioned:
uipathctl manifest apply <INPUT_JSON> --versions <VERSIONS_JSON> --override=gateway
2. Retrieve the load balancer host name:
kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
3. Configure your DNS with the FQDNs mapped to the load balancer endpoint or IPs.
4. Re-run the installer to complete the installation:
uipathctl manifest apply input.json --versions versions.json
Note that without the DNS mapping, the FQDN prerequisite check will fail. Prerequisite checks are meant to give you confidence that you have provisioned all the prerequisites correctly before installing Automation Suite. The FQDN check does not prevent you from installing Automation Suite.
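For reference, a sketch of running the prerequisite checks with uipathctl against the same configuration files; verify the exact subcommand against your uipathctl version:
uipathctl prereq run input.json --versions versions.json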