
Process Mining

Last updated May 1, 2025

Managing input data

You can configure the input tables directly from Data Transformations to ensure your input data meets the requirements for your process app. The Manage input data settings option allows you to easily configure existing input tables or create new input tables from newly uploaded files.

Follow these steps to manage input data settings.

  1. In the Input section, locate the Tables folder.

  2. Locate the table for which you want to manage the files and select the context menu icon to open the Edit table panel for the selected table.

The Tables page is displayed, showing the source tables that are already present for your process app.

Mapping and configuring input tables

When you upload one or more files, a new table is automatically added in the Source tables section for each uploaded file, and a corresponding input table is created in the Target tables section. By default, the file name of the uploaded file is used as the name of both tables. A warning message is displayed, indicating that the table must be configured before data can be uploaded for it.

Mapping input tables

Your input data must meet the data model defined for your app. If needed, you can map a Source table to a different Target table.

Configuring input tables

The settings for the target input table are automatically detected and you just need to check them.

Follow these steps to edit the settings for an input table.

  1. Locate the table you want to configure and select the Edit table icon to open the Edit table panel for the selected table.

  2. Edit the settings as desired and select Save.

The following table settings are available:

  • Table name: The name of the input table in Data transformations.

  • Mandatory: Option to define the table as mandatory. If TRUE, the table is required when publishing or importing the process app, and an error is thrown if the table is not uploaded at that point. If FALSE, the table is considered optional when publishing or importing the app; if it is not uploaded, an empty table is created so that subsequent SQL queries do not fail.

  • Encoding: The encoding used in the file.

  • Delimiter: The delimiter separating the different fields.

  • Line ending: The character used to denote the end of a line and the start of a new one.

  • Quote character: The quote character used in case fields are wrapped in quotes.

  • Load type: The load type for the table.

Note:

If you select Incremental as the Load type, you must specify additional settings to configure incremental load for the table.
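As a rough illustration of how the Encoding, Delimiter, and Quote character settings interact when a file is parsed, the following Python sketch reads a small CSV payload. The values are illustrative choices, not the app's defaults.

```python
import csv
import io

# Hypothetical table settings, mirroring the Edit table panel.
# These values are illustrative, not defaults.
SETTINGS = {
    "encoding": "utf-8",
    "delimiter": ";",
    "quotechar": '"',
}

raw = b'"Case ID";"Activity";"Timestamp"\r\n"1";"Create PO";"2024-01-02 09:30:00"\r\n'

# Decode with the configured encoding; the csv module then handles the
# line endings, delimiter, and quote character during parsing.
text = raw.decode(SETTINGS["encoding"])
reader = csv.reader(
    io.StringIO(text, newline=""),
    delimiter=SETTINGS["delimiter"],
    quotechar=SETTINGS["quotechar"],
)
rows = list(reader)
header, records = rows[0], rows[1:]
```

If the configured encoding, delimiter, or quote character does not match the file, parsing fails or produces mangled fields, which is why these settings need to be verified per table.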

Incremental data load

With a full load, all data is extracted from the source and loaded into the target system, regardless of whether the data has changed or remained the same since the last load. An incremental load only extracts data that has changed (added or updated) since the last load. An incremental load is typically faster and less resource-intensive than a full load, especially when dealing with large volumes of data where only a small subset may change frequently.

To enable incremental data load for a table, you must set the Load type for the table to Incremental. Incremental load requires a unique identifier to ensure that the data is loaded correctly, and a field (such as a timestamp or a version) to track changes in the source data.

The following additional settings are required for Incremental load:

  • Primary keys: The primary key field or fields that uniquely identify each record in the data source.

  • Timestamp field: The field used to track when each record was last updated or added.

  • Timestamp format: The format of the timestamp value used in the Timestamp field.
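Conceptually, the merge that an incremental load performs can be sketched as follows. The field names, timestamp format, and keep-the-newest-record rule below are assumptions for illustration, not the platform's implementation.

```python
from datetime import datetime

# Hypothetical incremental-load settings for an event table.
PRIMARY_KEYS = ("event_id",)
TIMESTAMP_FIELD = "updated_at"
TIMESTAMP_FORMAT = "%Y-%m-%d %H:%M:%S"

def merge_incremental(target, delta):
    """Upsert delta records into target, keyed on the primary keys.

    New keys are inserted; existing keys are overwritten only when the
    incoming record has a newer timestamp.
    """
    def ts(record):
        return datetime.strptime(record[TIMESTAMP_FIELD], TIMESTAMP_FORMAT)

    merged = {tuple(r[k] for k in PRIMARY_KEYS): r for r in target}
    for record in delta:
        key = tuple(record[k] for k in PRIMARY_KEYS)
        if key not in merged or ts(record) > ts(merged[key]):
            merged[key] = record
    return list(merged.values())

target = [{"event_id": 1, "activity": "Create PO", "updated_at": "2024-01-01 10:00:00"}]
delta = [
    {"event_id": 1, "activity": "Create PO (edited)", "updated_at": "2024-01-02 09:00:00"},
    {"event_id": 2, "activity": "Approve PO", "updated_at": "2024-01-02 09:05:00"},
]
result = merge_incremental(target, delta)
```

This is why incremental load needs both a unique identifier (to match existing records) and a timestamp or version field (to decide which copy wins).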

Important:

The extraction method used for loading data may require additional or specific configuration for incremental extraction.

Check out Loading data using CData Sync for more information on how to set up incremental extraction for CData Sync.

Check out Loading data using DataUploader for more information on how to set up DataUploader for incremental extraction.

Attention: If you switch the Load type back to Full, make sure to configure the extraction method accordingly.

Mapping and configuring input fields

When you have finished configuring the input tables, you can map and configure the input fields for the tables.

  1. Select Next in the Tables page. The Fields page is displayed.

For each table the fields from the source file are automatically detected and mapped to the corresponding fields in the target table.

Mapping input fields

Your input data must meet the data model defined for your app. If needed, you can map a Source field to a different Target field.

Configuring input fields

The settings for the target input fields are automatically detected and you just need to check them.

Follow these steps to edit the settings for an input field.

  1. Locate the field you want to configure and select the Edit field icon to open the Edit field panel for the selected field.

  2. Edit the settings as desired and select Save.

The following field settings are available:

  • Name: The name of the field. Note that Name is a mandatory setting.

  • Type: The data type of the field: Text, Integer, Decimal, Boolean, Date, or Datetime. Depending on the field type, you must specify parse settings to configure the field.

  • Mandatory: Option to define the field as mandatory. If selected, the field is required when publishing or importing the process app, and an error is thrown if the field is missing. If not selected, the field is considered optional; if it is missing, the field is added with NULL values so that subsequent SQL queries do not fail.

  • Unique: Option to define that the field must have a distinct, unique value for each record.

  • Not NULL: Option to define that the field must have a value for each record. The field cannot be left empty or filled with a NULL value.

Parse settings for field types

The following parse settings are available for the different field types.

  • Integer: Thousand separator (None, Dot (.), or Comma (,)).

  • Decimal: Decimal separator (Dot (.) or Comma (,)) and Thousand separator (None, Dot (.), or Comma (,)).

  • Boolean: True value (for example, TRUE or 1) and False value (for example, FALSE or 0). True value and False value are mandatory settings and must be different.

  • Date: Date format.

  • Datetime: Date time format.
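For illustration, parse settings like these can be applied roughly as follows. The chosen separators and true/false markers are hypothetical examples, not defaults.

```python
# Hypothetical parse settings, mirroring the options above.
DECIMAL_SETTINGS = {"decimal_separator": ",", "thousand_separator": "."}
BOOLEAN_SETTINGS = {"true_value": "1", "false_value": "0"}

def parse_decimal(text, settings):
    # Strip thousand separators, then normalise the decimal separator.
    text = text.replace(settings["thousand_separator"], "")
    text = text.replace(settings["decimal_separator"], ".")
    return float(text)

def parse_boolean(text, settings):
    # True value and False value must differ, so at most one branch matches.
    if text == settings["true_value"]:
        return True
    if text == settings["false_value"]:
        return False
    raise ValueError(f"unrecognised boolean value: {text!r}")

print(parse_decimal("1.234,56", DECIMAL_SETTINGS))  # 1234.56
print(parse_boolean("1", BOOLEAN_SETTINGS))         # True
```

The example shows why the Decimal separator and Thousand separator must both be set correctly: with the values swapped, "1.234,56" would fail to parse or silently change value.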

Adjusting existing process apps to utilize the Manage input data screen

Introduction

Note:

Although existing process apps remain fully functional, you can adjust process apps to utilize the Manage input data screen in Process Mining. With the Manage input data screen, you can easily add new input tables and input fields. Therefore, it is recommended to adjust process apps for which the structure of the input data is subject to change. If you do not anticipate many changes to the structure of the input data, you might want to consider not adjusting the app.

If you want to start using the Manage input data screen to load the tables and fields for an existing process app, you need to manually perform the steps described on this page.

Prerequisites

Before you start adjusting your app:

  1. Make sure that all changes to the process app are published.

  2. Export or clone the process app to save a backup.

Steps

Tip:
The input queries are the SQL files that refer to a table defined in the sources section. These models tend to be located in the folder 1_input.

Follow these steps for each of the input queries.

  1. Make sure that the input SQL file only contains renames and type casts. Move other logic (filtering, derived columns, etc.) to subsequent SQL files.

    The following illustration shows an example Event_log_input.sql file that only contains renames and type casts.


  2. Go to Manage input data and adjust the field properties for each field. Refer to Mapping and configuring input fields for details.

    1. Set the data type and renames using the Manage input data settings option in Data transformations.

    2. Check the Field attributes (Mandatory, Unique, not NULL) in the Field properties panel and make sure they are set correctly.



  3. Check all the SQL files for references to the input table, that is, {{ ref('table_name') }}, and update these references to use the new source table:
    Replace every occurrence of {{ ref('table_name') }} with {{ source('sources', 'source_table_name') }}. For example, replace {{ ref('Event_log_input') }}, which pointed to your SQL file, with {{ source('sources', 'Event_log_raw') }} to point directly at the source table.


    Note:

    These references can occur in any SQL file.

  4. Since the original input SQL file is not used anymore, you can now safely delete it.
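When a project contains many references, a small script can automate step 3. This is a sketch under assumptions: the mapping uses the example names from this page, and you should adjust it to your own transformations before running it over your SQL files.

```python
import re

# Hypothetical mapping from old input models to source tables; adjust
# these names to match your own transformations before running this.
REPLACEMENTS = {
    "Event_log_input": ("sources", "Event_log_raw"),
}

def rewrite_refs(sql: str) -> str:
    """Replace {{ ref('Model') }} with {{ source('sources', 'Table') }}."""
    def substitute(match):
        name = match.group(1)
        if name not in REPLACEMENTS:
            return match.group(0)  # leave unrelated refs untouched
        source_name, table = REPLACEMENTS[name]
        return f"{{{{ source('{source_name}', '{table}') }}}}"

    return re.sub(r"\{\{\s*ref\('([^']+)'\)\s*\}\}", substitute, sql)
```

In practice you would read each .sql file in your transformations, pass its contents through rewrite_refs, and write the result back, then review the diff before publishing.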

Important:

If your transformations contain a format setting that is not yet available in the Manage input data settings, the type casting should remain in the dbt transformations. For example, SAP dates (YYYYMMDD) are not yet available.

Troubleshooting

Question

The file that I uploaded before cannot be correctly loaded.

Possible solution

Inspect the encoding and line endings of the files to check whether they match the table settings. Refer to Mapping and configuring input tables for more information.
