Process Mining
The required tables for the process app are displayed in the Required tables section on the Select data source page.
- Drag and drop the files required for the input tables of the process app, or select the Select files icon to select the files from your computer. The file or files are uploaded.
  Note: If any required tables are missing, you can upload additional files using the +Upload file(s) option.
  A new table is added in the Source tables section for each uploaded file and is automatically mapped to the related input table in the Target tables section.
  Tip: A blue dot indicates that new data was uploaded for the table. This can be either for a new table or an existing table.
- Make sure each table is mapped to the correct target table. If required, select a different table from the Target tables list to correct the mapping.
- Select Next. The Fields page is displayed, where you can continue with mapping the input fields.
If you upload a file that is not listed as a required table, a new table is automatically added in the Source tables section for each uploaded file and a corresponding input table is created in the Target tables section. By default, the file name of the uploaded file is used as the name of the tables.
A warning message is displayed indicating that the table needs to be configured before data can be uploaded for it. When a new table is uploaded, it becomes available in data transformations. However, further steps are required to make this data visible on the dashboards: first, the table data must be loaded using a SQL query, and then the table must be added to the data model of the process app. Refer to Data models for more information on how to add a table to the data model.
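For example, the sketch below shows a minimal transformation query that loads the data of an additionally uploaded source table so that it can later be added to the data model. The table name Invoices_raw and its fields are hypothetical, and how source tables are referenced in queries depends on the structure of your transformations (see Structure of transformations).

```sql
-- Minimal sketch, assuming a newly uploaded source table "Invoices_raw"
-- with the fields listed below (all names are hypothetical).
-- Select the fields you want to expose so the table can be added to the data model.
select
    Invoices_raw."Invoice_ID",
    Invoices_raw."Case_ID",
    Invoices_raw."Invoice_amount",
    Invoices_raw."Invoice_date"
from "Invoices_raw"
```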
The settings for the target input table are automatically detected and you just need to check them.
Follow these steps to edit the settings for an input table.
- Locate the table you want to configure and select the Edit table icon to open the Edit table panel for the selected table.
- Edit the settings as desired and select Save.
The following table describes the table settings.
| Setting | Description |
| --- | --- |
| Table name | The name of the input table in Data transformations. |
| Mandatory | Option to define the table as mandatory. If TRUE, the table is required when publishing or importing the process app; if it is not uploaded at that point, an error is thrown. If FALSE, the table is considered optional when publishing or importing the app; if it is not uploaded, an empty table is created so that subsequent SQL queries do not fail. |
| Encoding | The encoding used in the file. |
| Delimiter | The delimiter separating the different fields. |
| Line ending | The character that is used to denote the end of a line and the start of a new one. |
| Quote character | The quote character used in case fields are wrapped within quotes. |
| Load type | The load type for the table. Note: If you select Incremental as the Load type, you must specify additional settings to configure incremental load for the table. |
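For example, a file that uses a comma as delimiter and double quotes as quote character could look like the hypothetical excerpt below. The settings you configure must match the actual format of the uploaded file; here, quotes are needed because the supplier name contains the delimiter character.

```
Case_ID,Activity,Timestamp,Supplier
1001,"Create invoice",2024-01-15 09:30:00,"Acme, Inc."
1001,"Approve invoice",2024-01-16 14:05:00,"Acme, Inc."
1002,"Create invoice",2024-01-17 08:12:00,"Bolt Ltd."
```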
With a full load, all data is extracted from the source and loaded into the target system, regardless of whether the data has changed since the last load. An incremental load only extracts data that has been added or updated since the last load. An incremental load is typically faster and less resource-intensive than a full load, especially when dealing with large volumes of data where only a small subset changes frequently.
To enable incremental data load for a table, you must set the Load type for the table to Incremental. Incremental load requires a unique identifier to ensure that the data is loaded correctly, and a field (such as a timestamp or a version) to track changes in the source data.
The following table describes the additional settings required for Incremental load.
| Setting | Description |
| --- | --- |
| Primary keys | The primary key field or fields that uniquely identify each record in the data source. |
| Timestamp field | The field that is used to track when each record was last updated or added. |
| Timestamp format | The format of the timestamp value used in the Timestamp field. |
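Conceptually, an incremental load uses the Timestamp field to pick up only the records that changed since the previous run, and the Primary keys to decide whether a changed record updates an existing row or is inserted as a new one. The sketch below illustrates this idea with hypothetical table and field names; it is not the query that Process Mining executes.

```sql
-- Conceptual sketch of an incremental load (all table and field names are hypothetical).
-- Only records changed since the previous load are extracted, based on the
-- Timestamp field ("Last_modified"); the Primary key ("Invoice_ID") determines
-- whether a record is updated or inserted in the target table.
merge into "Invoices" as tgt
using (
    select "Invoice_ID", "Status", "Last_modified"
    from "Invoices_source"
    where "Last_modified" > '2024-01-31 23:59:59'  -- timestamp of the previous load
) as changes
on tgt."Invoice_ID" = changes."Invoice_ID"
when matched then
    update set "Status" = changes."Status",
               "Last_modified" = changes."Last_modified"
when not matched then
    insert ("Invoice_ID", "Status", "Last_modified")
    values (changes."Invoice_ID", changes."Status", changes."Last_modified");
```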
The extraction method used for loading data may require additional or specific configuration for incremental extraction.
Check out Loading data using CData Sync for more information on how to set up incremental extraction for CData Sync.
Check out Loading data using DataUploader for more information on how to set up DataUploader for incremental extraction.