Process Mining

Last updated Mar 26, 2026

Transformations

Folder structure

Important: The information on this page is only applicable for app templates that have a due dates configuration file and a seeds\ folder.
The transformations of a process app consist of a dbt project. The following table describes the contents of a dbt project folder.

Folder/file          Contains

dbt_packages\        The pm_utils package and its macros.
macros\              An optional folder for custom macros.
models\              The .sql files that define the transformations.
models\schema\       The .yml files that define tests on the data.
seeds\               The .csv files with configuration settings.
dbt_project.yml      The settings of the dbt project.

Note:

The Event log and Custom process app templates have a simplified data transformations structure. Process apps created with these app templates do not have this folder structure.

dbt_project.yml

The dbt_project.yml file contains the settings of the dbt project that defines your transformations. The vars section contains variables that are used in the transformations.

Date/time format

Each app template contains variables that determine the format for parsing date/time data. These variables must be adjusted if the input data uses a different date/time format than expected.
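As an illustration, the vars section of a dbt_project.yml file might look like the following. The variable names and format strings shown here are hypothetical; check your own app template for the actual names and expected formats.

```yaml
# dbt_project.yml (fragment)
# Variable names and formats are illustrative; they differ per app template.
vars:
  # Format used to parse date/time fields from the input data.
  datetime_format: "YYYY-MM-DD hh24:mi:ss"
  # Format used to parse date-only fields.
  date_format: "YYYY-MM-DD"
```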

Data transformations

The data transformations are defined in .sql files in the models\ directory. The data transformations are organized in a standard set of subdirectories.

Check out Structure of transformations for more information.

The .sql files are written in Jinja SQL, which allows you to insert Jinja statements inside plain SQL queries. When dbt runs, each .sql file results in a new view or table in the database. Typically, a .sql file selects from other models using references such as {{ ref('Table_A') }}.

The following code shows an example SQL query.

select
    tableA."Field_1" as "Alias_1",
    tableA."Field_2",
    tableA."Field_3"
from {{ ref('tableA') }} as tableA
Note:
In some cases, for process apps created with earlier versions of the app templates, the .sql files have the following structure:
  1. With statements: One or more with statements to include the required sub tables.
    • {{ ref('My_table') }} refers to a table defined by another .sql file.
    • {{ source(var("schema_sources"), 'My_table') }} refers to an input table.
  2. Main query: The query that defines the new table.
  3. Final query: Typically, a query like select * from table is used at the end. This makes it easy to make sub-selections while debugging.

For more tips on how to write transformations effectively, refer to Tips for writing SQL.
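The three-part structure described above can be sketched as follows. The table and field names in this fragment are hypothetical and only illustrate the pattern.

```sql
-- 1. With statements: include the required sub tables.
with Events_input as (
    select * from {{ source(var("schema_sources"), 'Events_input') }}
),

-- 2. Main query: define the new table.
Events_base as (
    select
        Events_input."Case_ID",
        Events_input."Activity",
        Events_input."Event_end" as "Timestamp"
    from Events_input
)

-- 3. Final query: makes it easy to make sub-selections while debugging.
select * from Events_base
```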

Adding source tables

To add a new source table to the dbt project, it must be listed in models\schema\sources.yml. This way, other models can refer to it by using {{ source(var("schema_sources"), 'My_table') }}.


Important: Each new source table must be listed in sources.yml.

For more detailed information, refer to the official dbt documentation on Sources.
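As an illustration, a sources.yml file typically looks like the following. The source and table names here are hypothetical; the source name must match the value of var("schema_sources") in your project.

```yaml
# models\schema\sources.yml (fragment) -- names are illustrative.
version: 2

sources:
  - name: sources            # must match var("schema_sources")
    tables:
      - name: My_table
      - name: Another_input_table
```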

Data output

The data transformations must output the data model that is required by the corresponding app; each expected table and field must be present.

If you want to add new fields to your process app, you can add these fields in the transformations.

Macros

Macros make it easy to reuse common SQL constructions. For detailed information, refer to the official dbt documentation on Jinja macros.

pm_utils

The pm-utils package contains a set of macros that are typically used in Process Mining transformations. For more information about the pm_utils macros, refer to ProcessMining-pm-utils.
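As an illustration, a call to the pm_utils.optional() macro typically looks like the following. The table and field names are hypothetical, and the exact macro signature may differ; check the ProcessMining-pm-utils repository for details.

```sql
select
    Purchase_orders_input."Purchase_order_ID",
    -- Hypothetical example: select "Supplier" if it exists in the
    -- referenced table; otherwise the macro returns null.
    {{ pm_utils.optional(ref('Purchase_orders_input'), '"Supplier"') }} as "Supplier"
from {{ ref('Purchase_orders_input') }} as Purchase_orders_input
```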


Seeds

Seeds are .csv files that are used to add data tables to your transformations. For detailed information, refer to the official dbt documentation on seeds.

In Process Mining, seeds are typically used to make it easy to configure mappings in your transformations.

After editing seed files, select Run file or Run all to update the corresponding data table.
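For example, a seed file that maps raw activity names to display names can be referenced in a model like any other table. The file name, column names, and activity values below are hypothetical.

```sql
-- seeds\activity_name_mapping.csv might contain:
--   Original_activity,Display_name
--   CREATE_PO,Create purchase order
--   APPROVE_PO,Approve purchase order

select
    Events."Case_ID",
    -- Fall back to the raw activity name when no mapping exists.
    coalesce(Mapping."Display_name", Events."Activity") as "Activity"
from {{ ref('Events') }} as Events
left join {{ ref('activity_name_mapping') }} as Mapping
    on Events."Activity" = Mapping."Original_activity"
```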

Activity Configuration: Defining activity order

The Activity_order field is used as a tie breaker when two events occur at the same timestamp.
Option 1: SQL configuration
The following code shows an example of configuring Activity_order with a SQL CASE statement:
case
    when tableA."Activity" = 'ActivityA'
        then 1
    when tableA."Activity" = 'ActivityB'
        then 2
    when tableA."Activity" = 'ActivityC'
        then 3
    when tableA."Activity" = 'ActivityD'
        then 4
end as "Activity_order"
Option 2: CSV seed file
Instead of using a SQL CASE statement, you can define Activity_order using the activity_configuration.csv seed file.
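A minimal activity_configuration.csv seed file might look as follows. The column and activity names shown here are illustrative; check the seeds\ folder of your app template for the expected layout.

```
Activity,Activity_order
ActivityA,1
ActivityB,2
ActivityC,3
ActivityD,4
```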


Recommendations
Most UiPath app templates come with predefined fields for activity configuration, which you can adapt to your specific business requirements. If predefined fields are unavailable or insufficient, you can create custom fields using either SQL or the activity_configuration.csv seed file, as outlined above.

Tests

The models\schema\ folder contains a set of .yml files that define tests. These validate the structure and contents of the expected data. For detailed information, refer to the official dbt documentation on tests.
Note: When you edit transformations, make sure to update the tests accordingly. The tests can be removed if desired.
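As an illustration, a test definition in a models\schema\ .yml file typically looks like the following. The model and column names here are hypothetical; the not_null and unique tests are standard dbt tests.

```yaml
version: 2

models:
  - name: Events_base
    columns:
      - name: Case_ID
        tests:
          - not_null        # every event must belong to a case
      - name: Event_ID
        tests:
          - not_null
          - unique          # event identifiers must not repeat
```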

Dbt projects

Data transformations are used to transform input data into data suitable for Process Mining. The transformations in Process Mining are written as dbt projects.

This page gives an introduction to dbt. For more detailed information, refer to the official dbt documentation.

pm-utils package

Process Mining app templates come with a dbt package called pm_utils. This pm-utils package contains utility functions and macros for Process Mining dbt projects. For more information about pm_utils, refer to ProcessMining-pm-utils.

Updating the pm-utils version used for your app template

UiPath® constantly improves the pm-utils package by adding new functions.
When a new version of the pm-utils package is released, you are advised to update the version used in your transformations, to make sure that you use the latest functions and macros of the pm-utils package.
You can find the version number of the latest release in the Releases panel of ProcessMining-pm-utils.
Follow these steps to update the pm-utils version in your transformations.
  1. Download the source code (zip) from the release of pm-utils.
  2. Extract the zip file and rename the folder to pm_utils.
  3. Export the transformations from the inline Data transformations editor and extract the files.

  4. Replace the pm_utils folder from the exported transformations with the new pm_utils folder.

  5. Zip the contents of the transformations again and import them in the Data transformations editor.
