
Using LLM functions in data transformations
LLM functions enable you to process unstructured text into categorical output for aggregated analysis in your dashboards. Using LLM functions eliminates the need for complex regex in SQL, making it easier to configure and adapt your transformations based on new data.
Refer to the official Snowflake documentation on Cortex AISQL (including LLM functions) for more information about using LLM functions.
Example use cases of LLM functions in data transformations are:
- The `AI_CLASSIFY` function for classification of data. Refer to the official Snowflake documentation on AI_CLASSIFY for more information.
- The `ENTITY_SENTIMENT` function for sentiment analysis. Refer to the official Snowflake documentation on ENTITY_SENTIMENT for more information.

This section describes how to use the `AI_CLASSIFY` function in a process mining context, including examples.
Processes can consist of many different activities, some of which may be very similar and can be mapped to higher-level categories. This type of mapping reduces the number of process variants and enables analysis at a more abstract level.
For example, in a Purchase-to-Pay process, approval events can occur at different levels, such as “Approve purchase requisition”, “Approve order level 1”, “Manager approval”, etc. Each of these activities can be mapped to a generic “Approve” activity.
Another example is “Change” events, such as “Change price”, “Change delivery date”, or “Change supplier”. Mapping these to a single “Change” activity reduces the number of paths in the process and simplifies the process graph view.
The following code block shows a SQL example of using the `AI_CLASSIFY` function for defining the high-level process.
```sql
select
    {{ pm_utils.id() }} as "Event_ID",
    Purchase_order_item_event_log."Purchase_order_item_ID",
    Purchase_order_item_event_log."Event_end",
    coalesce(
        to_varchar(
            AI_CLASSIFY(
                Purchase_order_item_event_log."Activity",
                ['Create', 'Change', 'Approve', 'Complete', 'Cancel']):labels[0]),
        'Not mapped') as "High_level_activity"
from {{ ref('Purchase_order_item_event_log') }} as Purchase_order_item_event_log
```
As the first argument of this function, provide the original Activity column from your events table. The second argument is the list of high-level activities to which the activities will be mapped. In this example, the activities are mapped to “Create”, “Change”, “Approve”, “Complete”, or “Cancel”.
Steps to implement the high-level process analysis:
- Create a separate SQL file in your project, for example `High_level_events.sql`, and add the high-level process SQL logic to it.
- Add the `High_level_events` table to the data model and configure the relation. In this example, the high-level events are connected to the purchase order items table based on the `Purchase_order_item_ID`.
- Add an additional process with the purchase order items as the main object and the high-level events as the events for this process.
- When applying your changes to dashboards, you can create the process graph and other dashboards based on the high-level process.
The following illustration shows an example.
- The `AI_CLASSIFY` function returns values in the format `{ "labels": ["Create"] }`. The `:labels[0]` retrieves the value `"Create"`, and the `to_varchar()` function removes the surrounding quotes.
- When none of the categories is a good match, the value generated by the `AI_CLASSIFY` function remains `null`. To prevent these records from being excluded from the dataset, map the `null` values to a constant (e.g., `'Not mapped'`) to indicate that these activities were not classified.
Customer requests are a typical example of unstructured data. Each type of request requires a different next action. To analyze the different request types more effectively in dashboards, you can categorize them using LLMs—without the need for manual user intervention.
The following code block shows a SQL example of how requests can be categorized as “Feedback”, “Question”, or “Complaint”.
```sql
select
    Requests_input."Request_ID",
    Requests_input."Request",
    to_varchar(
        AI_CLASSIFY(
            Requests_input."Request",
            ['Feedback', 'Question', 'Complaint']):labels[0])
        as "Request_classified"
from {{ ref('Requests_input') }} as Requests_input
```
You can add a filter before applying the `AI_CLASSIFY` function to only include the records relevant for your analysis.
You can add the classified request types to the app and dashboards to enable more aggregated analysis. This provides greater insight compared to using the original request text directly, which is often unique for each record and can lead to a large number of distinct values.
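The aggregated analysis described above can be sketched as a follow-up dbt model. This is a hypothetical example: the model name `Requests_classified` is assumed to hold the output of the classification query shown earlier.

```sql
-- Hypothetical sketch: count requests per classified type,
-- suitable as input for a dashboard chart with few distinct values.
select
    Requests_classified."Request_classified",
    count(*) as "Number_of_requests"
from {{ ref('Requests_classified') }} as Requests_classified
group by Requests_classified."Request_classified"
```

Grouping on the classified label instead of the raw request text yields a small, stable set of categories to chart.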
The following illustration shows an example of classification.
This section describes how to use the `ENTITY_SENTIMENT` and `SENTIMENT` functions in a process mining context, including examples.
You can apply sentiment analysis to unstructured text to classify whether the content is positive or negative. This type of analysis can be used, for example, to:
- Analyze feedback to improve products or services.
- Track sentiment trends over time for decision-making.
There are two types of sentiment analysis functions available.
- Use the `ENTITY_SENTIMENT` function for categorical results (e.g., “Positive”, “Negative”, “Neutral”, “Mixed”, or “Unknown”). By default, the input is analyzed for the overall sentiment of the text. The output is returned in the following format: `{ "categories": [ { "name": "overall", "sentiment": "positive" } ] }`
- Use the `SENTIMENT` function for numeric results (e.g., a sentiment score on a scale). The `SENTIMENT` function returns a value between -1 and 1, indicating the degree of negativity or positivity in the input text. You can use these numeric sentiment values in dashboard metrics to analyze sentiment across different levels of aggregation.
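As an illustrative sketch of such a metric, the numeric scores can be averaged per period. This assumes a `Feedback_input` model with a `Feedback_date` column; that column is hypothetical and not part of the examples on this page.

```sql
-- Hypothetical sketch: average sentiment score per month.
-- Assumes Feedback_input has a "Feedback_date" column (an assumption).
select
    date_trunc('month', Feedback_input."Feedback_date") as "Month",
    avg(SNOWFLAKE.CORTEX.SENTIMENT(Feedback_input."Feedback")) as "Average_sentiment"
from {{ ref('Feedback_input') }} as Feedback_input
group by date_trunc('month', Feedback_input."Feedback_date")
```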
Refer to the official Snowflake documentation on ENTITY_SENTIMENT for more information.
The following code block shows a SQL example of using sentiment analysis for user feedback.
```sql
select
    Feedback_input."Feedback_ID",
    Feedback_input."Feedback",
    to_varchar(
        SNOWFLAKE.CORTEX.ENTITY_SENTIMENT(
            Feedback_input."Feedback"):categories[0]:sentiment)
        as "Sentiment_category",
    SNOWFLAKE.CORTEX.SENTIMENT(
        Feedback_input."Feedback")
        as "Sentiment_value"
from {{ ref('Feedback_input') }} as Feedback_input
```
You can optionally specify a list of categories as the second argument of the `ENTITY_SENTIMENT` function. The output will then contain a sentiment value for each of the specified categories. Instead of `categories[0]` (which only selects the first category), modify the query to select the sentiment values for the specific categories of interest.
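A hedged sketch of this category-specific variant, using Snowflake's `LATERAL FLATTEN` to turn the returned `categories` array into one row per category. The category names `'price'` and `'quality'` are hypothetical examples, not values from this page.

```sql
-- Hypothetical sketch: one row per (feedback, category) pair.
-- FLATTEN expands the "categories" array returned by ENTITY_SENTIMENT.
select
    Feedback_input."Feedback_ID",
    category.value:name::varchar as "Category",
    category.value:sentiment::varchar as "Sentiment"
from {{ ref('Feedback_input') }} as Feedback_input,
    lateral flatten(
        input => SNOWFLAKE.CORTEX.ENTITY_SENTIMENT(
            Feedback_input."Feedback", ['price', 'quality']):categories) as category
```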
The following illustration shows example results of user feedback sentiment analysis.