Automation Suite installation guide
Step 4: Configuring the DNS
The previous step is Step 3: Configuring Azure SQL.
For production deployments: Configuring the DNS
For production deployments, DNS configuration is required. Follow the steps in Configuring the DNS to configure DNS using the provider specific to your environment.
For test or evaluation purposes, you can create a private DNS zone in Azure as a workaround. See the following section for detailed steps.
For test or evaluation deployments: Creating a private DNS zone
This is a workaround for testing purposes only. For production, follow the DNS configuration guide specific to your environment.
Make sure to configure the host entries on the client machine after the DNS zone is created. This is a requirement for any type of deployment.
Creating a private DNS zone
- Search for Private DNS Zone in the Azure portal search bar.
- Select Create.
- Create the private DNS zone in the same resource group as the VMs:
  - For single-node evaluation, the name of the private DNS zone should be the DNS name of the VM.
  - For multi-node HA-ready production, the name of the private DNS zone should be the DNS name of the load balancer.
- Select Review and Create.
- Then select Create.
- Once the private DNS zone is created, select Go to resource.
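If you prefer scripting the zone creation, the Azure CLI offers an equivalent. The following is a minimal sketch, not part of the official procedure; the resource group name rg-automation-suite is a placeholder, and the zone name must match the DNS name of the VM or load balancer as described above.
# Placeholder values: replace rg-automation-suite and the zone name
# with your resource group and the DNS name of the VM or load balancer.
az network private-dns zone create \
  --resource-group rg-automation-suite \
  --name dns-123.eastus.cloudapp.azure.com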
Adding a record set for all subdomains
- Select Record Set.
- Enter * in the Name field.
- Set the IP address of the record set to the VM's IP address (in the case of single-node evaluation) or the load balancer's IP address (in the case of multi-node HA-ready production).
- Then select OK.
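The same record set can also be created from the Azure CLI. This is a sketch using the placeholder values introduced above (rg-automation-suite, dns-123.eastus.cloudapp.azure.com, 10.10.10.10); adjust them to your environment.
# Create a wildcard A record pointing all subdomains at the VM or load balancer IP.
az network private-dns record-set a add-record \
  --resource-group rg-automation-suite \
  --zone-name dns-123.eastus.cloudapp.azure.com \
  --record-set-name "*" \
  --ipv4-address 10.10.10.10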
Adding a record set for the root domain
- Add an A record for @, which covers the root domain and redirects its traffic to the cluster.
- Go to Record Set.
- Set the IP address to the VM's IP address (in the case of single-node evaluation) or the load balancer's IP address (in the case of multi-node HA-ready production).
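If you are using the CLI sketch from the previous section, the root record is the same command with the record set name set to @.
# Create an A record for the root domain (@) using the same placeholder values.
az network private-dns record-set a add-record \
  --resource-group rg-automation-suite \
  --zone-name dns-123.eastus.cloudapp.azure.com \
  --record-set-name "@" \
  --ipv4-address 10.10.10.10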
Note: All the VMs should already be under one virtual network.
Adding a virtual network link to the private DNS zone
- Go to Virtual network links under Settings.
- Select Add.
- Enter a name for the new link and specify the virtual network that the VMs are under.
- You can find the name of the virtual network on the VM overview page in the Azure portal.
- Wait until the link is created and shows up in the Virtual network links list.
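As with the previous steps, the link can also be created from the Azure CLI. A minimal sketch, where automation-suite-link and my-vnet are placeholder names for the link and the virtual network the VMs are under:
# Link the private DNS zone to the virtual network so the VMs can resolve it.
# Registration is disabled because the zone records are managed manually.
az network private-dns link vnet create \
  --resource-group rg-automation-suite \
  --zone-name dns-123.eastus.cloudapp.azure.com \
  --name automation-suite-link \
  --virtual-network my-vnet \
  --registration-enabled false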
Configuring a client machine to access the cluster
Once you configure the DNS, you need to add the host entries to the client machines. This is mandatory for all deployments, both evaluation/test and production.
The /etc/hosts entry is only needed on the client machines, not on the cluster nodes.
Windows
- Run the following commands from an elevated PowerShell prompt (Run as Administrator) to add the host entries to the hosts file ($env:windir\System32\drivers\etc\hosts).
  Tip:
  - Replace 10.10.10.10 with the IP address of the VM (in the case of single-node evaluation) or the IP address of the load balancer (in the case of multi-node HA-ready production).
  - Replace dns-123.eastus.cloudapp.azure.com with the FQDN of the cluster (the DNS name of the VM for single-node evaluation or the FQDN of the load balancer for multi-node HA-ready production).
  - There should be a `n (backtick n) before the IP address.
Add-Content -Path $env:windir\System32\drivers\etc\hosts -Value "`n10.10.10.10 dns-123.eastus.cloudapp.azure.com" -Force
Add-Content -Path $env:windir\System32\drivers\etc\hosts -Value "`n10.10.10.10 alm.dns-123.eastus.cloudapp.azure.com" -Force
Add-Content -Path $env:windir\System32\drivers\etc\hosts -Value "`n10.10.10.10 objectstore.dns-123.eastus.cloudapp.azure.com" -Force
Add-Content -Path $env:windir\System32\drivers\etc\hosts -Value "`n10.10.10.10 registry.dns-123.eastus.cloudapp.azure.com" -Force
Add-Content -Path $env:windir\System32\drivers\etc\hosts -Value "`n10.10.10.10 monitoring.dns-123.eastus.cloudapp.azure.com" -Force
Add-Content -Path $env:windir\System32\drivers\etc\hosts -Value "`n10.10.10.10 insights.dns-123.eastus.cloudapp.azure.com" -Force
- Verify that the entries are added at the bottom of the hosts file.
Get-Content -Path $env:windir\System32\drivers\etc\hosts
MacOS or Linux
- Run the following commands from a terminal to add the host entries to the /etc/hosts file.
  Tip:
  - Replace 10.10.10.10 with the IP address of the VM (in the case of single-node evaluation) or the IP address of the load balancer (in the case of multi-node HA-ready production).
  - Replace dns-123.eastus.cloudapp.azure.com with the FQDN of the cluster (the DNS name of the VM for single-node evaluation or the FQDN of the load balancer for multi-node HA-ready production).
sudo bash -c "echo \"10.10.10.10 dns-123.eastus.cloudapp.azure.com\" >> /etc/hosts"
sudo bash -c "echo \"10.10.10.10 alm.dns-123.eastus.cloudapp.azure.com\" >> /etc/hosts"
sudo bash -c "echo \"10.10.10.10 objectstore.dns-123.eastus.cloudapp.azure.com\" >> /etc/hosts"
sudo bash -c "echo \"10.10.10.10 registry.dns-123.eastus.cloudapp.azure.com\" >> /etc/hosts"
sudo bash -c "echo \"10.10.10.10 monitoring.dns-123.eastus.cloudapp.azure.com\" >> /etc/hosts"
sudo bash -c "echo \"10.10.10.10 insights.dns-123.eastus.cloudapp.azure.com\" >> /etc/hosts"
- Verify that the entries are added at the bottom of the /etc/hosts file.
cat /etc/hosts
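After adding the entries, you can optionally sanity-check that every FQDN resolves before moving on. A minimal bash sketch, using the same placeholder FQDN and IP address as above:
# Confirm each Automation Suite FQDN resolves on this client machine.
# getent consults /etc/hosts as well as DNS, so it validates the entries added above.
for host in dns-123.eastus.cloudapp.azure.com \
            alm.dns-123.eastus.cloudapp.azure.com \
            objectstore.dns-123.eastus.cloudapp.azure.com \
            registry.dns-123.eastus.cloudapp.azure.com \
            monitoring.dns-123.eastus.cloudapp.azure.com \
            insights.dns-123.eastus.cloudapp.azure.com; do
  getent hosts "$host" || echo "WARNING: $host does not resolve"
done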