
Process Mining

Last updated May 1, 2025

Selecting the data source

You can use a sample dataset, upload a dataset with .tsv files, or load data using an extractor. The data is loaded after you have created the new process app.
Important:

For performance and security reasons, it is strongly recommended to use a small dataset for app development and testing data transformations.

The development dataset is used for testing the data transformations. It does not affect the data displayed in the dashboards of the published process app.

Once your app is ready to be used by business users, you can publish the app and load new data for use in the published process app.

  1. Select the applicable option for your data source.

  2. Select Next.

Use Theobald extractor

Note:

Use Theobald extractor is the recommended option for process apps that use an SAP source system.

If you selected an app template that uses an SAP source system, the Use Theobald extractor option is the default option for loading data.

You can copy the details for use in the extractor in the Upload data using Theobald step later in the app creation process. Refer to Finishing the app creation.

Refer to Loading data using Theobald Xtract Universal for more information.

Use CData extractor

Note:

The Use CData extractor option is the default option for app templates that use a source system that is supported by CData.

You can copy the details for use in the extractor in the Upload data using CData step later in the app creation process. Refer to Finishing the app creation.

Refer to Loading data using CData Sync for more information.

Use sample data

Note: The Use sample data option is only enabled if sample data is available for the process app.

Upload data

It is also possible to upload data using .csv files.
Warning:

For large amounts of data, it is recommended to use CData Sync or Theobald Xtract Universal (for SAP) to upload data.

You can also use the DataUploader to upload data files up to 5TB each directly into a Process Mining process app.

Note: When you create a new process app, always make sure that the data is in the required format for the app template that you use to create a new app. Also check out App Templates.
Note: Table names and field names are case-sensitive. Always make sure that the field names (column headers) in your dataset match the field names listed in the input tables and that the file names match the table names.
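Since matching is case-sensitive, a small pre-upload check can catch mismatches early. A minimal sketch, assuming illustrative table and field names (`Event_log`, `Case_ID`, and so on are placeholders, not the names required by any specific app template):

```python
import csv
from pathlib import Path

# Hypothetical required schema: table names and field names as listed
# in the app template's input tables (illustrative values only).
REQUIRED = {
    "Event_log": ["Case_ID", "Activity", "Event_end"],
}

def check_dataset(folder: str) -> list[str]:
    """Return a list of case-sensitive mismatches between the files in
    `folder` and the required table and field names."""
    problems = []
    for table, fields in REQUIRED.items():
        path = Path(folder) / f"{table}.csv"
        if not path.exists():
            problems.append(f"missing file: {path.name}")
            continue
        with path.open(newline="", encoding="utf-8") as f:
            headers = next(csv.reader(f), [])
        # Exact, case-sensitive comparison of column headers.
        for field in fields:
            if field not in headers:
                problems.append(f"{table}: missing or miscased field '{field}'")
    return problems
```

Running such a check before uploading avoids discovering a miscased header only after the mapping step fails.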

Follow these steps to upload data files.

Mapping input tables

Note:

The required tables for the process app are displayed in the Required tables section on the Tables page.

  1. Drag and drop the files required for the input tables of the process app, or select the Select files icon to select the files from your computer. The file or files are uploaded.

    Note:

    If any required tables are missing, you can upload additional files using the +Upload file(s) option.

    A new table is added in the Source tables section for each uploaded file and is automatically mapped to the related input table in the Target tables section.

    Tip:

    A blue dot indicates that new data was uploaded for the table. This can be either for a new table, or an existing table.

  2. Make sure each table is mapped to the correct target table. If required, select a different table from the Target tables list to correct the mapping.

  3. Select Next. The Fields page is displayed where you can continue with mapping the input fields.

If you upload a file that is not listed as a required table, a new table is automatically added in the Source tables section for each uploaded file and a corresponding input table is created in the Target tables section. By default, the file name of the uploaded file is used as the name of the tables.

Note:

A warning message is displayed indicating the table needs configuration before data can be uploaded for the table. When a new table is uploaded, it becomes available in data transformations. However, further steps are required to make this data visible on the dashboards. First, the table data must be loaded using a SQL query. Then, the table should be incorporated into the data model of the process app. Refer to Data models for more information on how to add a table in the data model.

Configuring input tables

The settings for the target input table are automatically detected and you just need to check them.

Follow these steps to edit the settings for an input table.

  1. Locate the table you want to configure and select the Edit table icon to open the Edit table panel for the selected table.

  2. Edit the settings as desired and select Save.

The following table describes the table settings.

Setting

Description

Table name

The name of the input table in Data transformations.

Mandatory

Option to define the table as mandatory.

If TRUE, the table is required when publishing or importing the process app; an error is thrown if the table is not uploaded. If FALSE, the table is considered optional when publishing or importing the app; if the table is not uploaded, an empty table is created so that subsequent SQL queries do not fail.

Encoding

The encoding used in the file.

Delimiter

The delimiter separating the different fields.

Line ending

The character that is used to denote the end of a line and the start of a new one.

Quote character

The quote character used in case fields are wrapped within quotes.

Load type

The load type for the table.

Note:

If you select Incremental as the Load type, you must specify additional settings to configure incremental load for the table.
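The Encoding, Delimiter, and Quote character settings correspond directly to standard CSV parsing options. A sketch of how such settings apply when reading a table, using Python's csv module (the setting values shown are examples, not defaults):

```python
import csv

# Illustrative settings matching the Edit table panel: encoding,
# delimiter, and quote character (line endings are handled by the
# file object's universal newline translation).
TABLE_SETTINGS = {
    "encoding": "utf-8",
    "delimiter": ";",
    "quotechar": '"',
}

def read_table(path: str) -> list[dict]:
    """Read one input table as a list of rows keyed by column header."""
    with open(path, newline="", encoding=TABLE_SETTINGS["encoding"]) as f:
        reader = csv.DictReader(
            f,
            delimiter=TABLE_SETTINGS["delimiter"],
            quotechar=TABLE_SETTINGS["quotechar"],
        )
        return list(reader)
```

If the delimiter or quote character in the panel does not match what the file actually uses, fields are split or merged incorrectly, which is why these detected settings are worth checking.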

Incremental data load

With a full load, all data is extracted from the source and loaded into the target system, regardless of whether the data has changed since the last load. An incremental load extracts only data that has been added or updated since the last load. An incremental load is typically faster and less resource-intensive than a full load, especially for large volumes of data where only a small subset changes frequently.

To enable incremental data load for a table, you must set the Load type for the table to Incremental. Incremental load requires a unique identifier to ensure that the data is loaded correctly, and a field (such as a timestamp or a version) to track changes in the source data.

The following table describes the additional settings required for Incremental load.

Setting

Description

Primary keys

The primary key field or fields that uniquely identify each record in the data source.

Timestamp field

The field that is used to track when each record was last updated or added.

Timestamp format

The format of the timestamp value used in the Timestamp field.
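Conceptually, an incremental load upserts changed records into the existing data: the primary keys identify each record, and the timestamp field decides which version wins. A simplified in-memory sketch (the field names and timestamp format are illustrative, not tied to any real source system):

```python
from datetime import datetime

TS_FORMAT = "%Y-%m-%d %H:%M:%S"  # illustrative Timestamp format setting

def incremental_merge(existing, changes,
                      key="Purchase_order_ID", ts="Last_updated"):
    """Upsert `changes` into `existing`: a record replaces the stored
    record with the same primary key only if its timestamp is newer."""
    merged = {row[key]: row for row in existing}
    for row in changes:
        current = merged.get(row[key])
        if current is None or (
            datetime.strptime(row[ts], TS_FORMAT)
            > datetime.strptime(current[ts], TS_FORMAT)
        ):
            merged[row[key]] = row
    return list(merged.values())
```

This also shows why both settings are required: without unique primary keys the upsert cannot identify records, and without a reliable timestamp it cannot tell which version is newest.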

Important:

The extraction method used for loading data may require additional or specific configuration for incremental extraction.

Check out Loading data using CData Sync for more information on how to set up incremental extraction for CData Sync.

Check out Loading data using DataUploader for more information on how to set up DataUploader for incremental extraction.

Attention: If you switch back the Load type to Full, make sure to configure the extraction method used accordingly.

Mapping input fields

Note:

For the selected table, the required input fields for the table are displayed in the Required fields section on the Fields page.

The source fields detected in the input table are automatically mapped to the corresponding fields in the target table.

  1. Make sure each field is mapped to the correct target field. If required, select a different field from the Target fields list to correct the mapping.

  2. Select Next to continue.
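Name-based automatic mapping of this kind can be sketched as follows (the actual matching logic used by Process Mining is not documented here; the case-insensitive fallback is an assumption for illustration):

```python
def auto_map_fields(source_fields, target_fields):
    """Map each target field to a source field with the same name,
    falling back to a case-insensitive match; unmatched targets
    map to None and need manual correction."""
    lower = {f.lower(): f for f in source_fields}
    mapping = {}
    for target in target_fields:
        if target in source_fields:
            mapping[target] = target
        else:
            mapping[target] = lower.get(target.lower())
    return mapping
```

Fields left unmapped (None) are exactly the ones to fix by picking a different field from the Target fields list.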

Configuring input fields

The settings for the target input fields are automatically detected and you just need to check them.

Follow these steps to edit the settings for an input field.

  1. Locate the field you want to configure and select the Edit field icon to open the Edit field panel for the selected field.

  2. Edit the settings as desired and select Save.

The following table describes the field settings.

Setting

Description

Name

The name of the field.

Note:

Name is a mandatory field.

Type

The data type of the field.

  • Text

  • Integer

  • Decimal

  • Boolean

  • Date

  • Datetime

Note:

Depending on the field type, you must specify parse settings to configure the field.

Mandatory

Option to define the field as mandatory.

If selected, the field is required when publishing or importing the process app; an error is thrown if the field is missing. If not selected, the field is considered optional; if it is missing, the field is added with NULL values so that subsequent SQL queries do not fail.

Unique

Option to define that the field must have a distinct (unique) value for each record.

Not NULL

Option to define that the field must have a value for each record. The field cannot be left empty or filled with a NULL value.

Parse settings for field types

The following table describes the available parse settings for the different field types.

Field type

Parse settings

Integer

Thousand separator

  • None

  • Dot (.)

  • Comma (,)

Decimal

  • Decimal separator

    • Dot (.)

    • Comma (,)

  • Thousand separator

    • None

    • Dot (.)

    • Comma (,)

Boolean

  • True value:

    TRUE or 1

  • False value:

    FALSE or 0

Note:

True value and False value are mandatory settings and must be different.

Date

Date format

Datetime

Date time format
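The parse settings describe how text values are converted into typed values. The conversions can be sketched as follows (the separator and format choices are example settings, not defaults):

```python
from datetime import datetime

def parse_integer(value: str, thousand_sep=".") -> int:
    """Parse e.g. '1.234' with Dot (.) as thousand separator."""
    return int(value.replace(thousand_sep, ""))

def parse_decimal(value: str, decimal_sep=",", thousand_sep=".") -> float:
    """Parse e.g. '1.234,56' with Comma decimal / Dot thousand separator."""
    return float(value.replace(thousand_sep, "").replace(decimal_sep, "."))

def parse_boolean(value: str, true_value="TRUE", false_value="FALSE") -> bool:
    """True value and False value are mandatory and must differ."""
    if value == true_value:
        return True
    if value == false_value:
        return False
    raise ValueError(f"unrecognized boolean: {value!r}")

def parse_datetime(value: str, fmt="%Y-%m-%d %H:%M:%S") -> datetime:
    """Parse with the configured Date time format."""
    return datetime.strptime(value, fmt)
```

Note how the decimal and thousand separators interact: a value like `1.234,56` is only unambiguous once both separators are configured, which is why Decimal fields take both settings.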

Upload data using extractor

Note: When loading data using an extractor, the data is loaded into a blob store. Therefore, Process Mining in Automation Cloud™ cannot check the IP address when data is loaded. This means that if IP restriction is set up for your tenant, it is not enforced for data loaded using an extractor: data uploaded from a machine outside a trusted IP range is still accepted by Process Mining.
Note:

You can copy the details for use in the extractor in the Upload data using extractor step later in the app creation process. Refer to Finishing the app creation.

[Preview] Upload data using direct connection

Note:

The Upload data using direct connection option is only available for app templates that use a ServiceNow source system or Salesforce source system.

Note:

The Upload data using direct connection option is the default option for app templates that use a source system for which a direct connection is available.

Prerequisites

Upload data using a direct connection uses Integration Service connections. This implies that you need to have:

  • a license for Integration Service;

  • Integration Service enabled on your tenant;

  • access to Orchestrator.

Integration Service connections are restricted by folder. If you want to use a connection from a specific folder, you need access to that folder in Orchestrator to see the connection in Process Mining. If you create a new connection from Process Mining, the connection is created in your personal workspace in Orchestrator.

Refer to the Integration Service guide for more information on Integration Service licensing and Integration Service connections.

Setting up a direct connection

You can set up a direct connection to your source system from the Selecting the data source step instead of setting up a connection using CData Sync.

The Upload data using direct connection option loads data into your process app directly from the source system.

Follow these steps to set up a direct connection to the source system:

  1. Select the Upload data using direct connection option.

    The source system used for the app template is displayed.

  2. Select Connect.

    A new browser tab is opened where you can enter the authentication details for the connection.

Attention:

If you are using a process specific app template, make sure the user credentials have access to the default list of tables and fields specified in the app template. Refer to App Templates for details.

Refer to [Preview] Loading data using direct connection for more information and a detailed description on how to set up a connection for a specific source system.

Troubleshooting

Refer to Integrated extractors in the Basic troubleshooting guide.
