
Communications Mining user guide

Last updated Oct 6, 2025

Overview

Key steps

The Explore page has various training modes, and this phase focuses primarily on three of them:

  • Shuffle - Shows a random selection of messages for users to annotate. Complete a significant proportion of your training in Shuffle, in order to create a training set of examples that is representative of the wider dataset.
  • Teach - Used for unreviewed messages. As soon as the platform makes some reasonable predictions for a label, you can improve its ability to predict that label across more varied examples by reviewing messages in the default Teach mode. This mode shows you messages where the platform is unsure whether the selected label applies.
  • Low Confidence - Shows you messages that are not well covered by informative label predictions. These messages will have either no predictions, or only very low-confidence predictions, for labels that the platform understands to be informative. For a conceptual sketch of how Teach and Low Confidence could rank messages, see the code example at the end of this section.

This section will also cover training using Search in Explore, similar to training using Search in Discover.

Teach, for reviewed messages, is another training mode in Explore. For more details, check Refining models and using Validation.
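To make the difference between these modes concrete, the following Python sketch shows one way Teach and Low Confidence could rank unreviewed messages. It is purely illustrative: the message fields (reviewed, confidences) are assumed for this example and do not reflect the platform's actual implementation.

```python
# Illustrative sketch only. Assumes each message is a dict with a
# "reviewed" flag and per-label "confidences" between 0 and 1.

def teach_order(messages, label):
    """Teach (unreviewed): surface messages where the platform is least
    sure whether the selected label applies, i.e. confidence nearest 0.5."""
    unreviewed = [m for m in messages if not m["reviewed"]]
    return sorted(unreviewed,
                  key=lambda m: abs(m["confidences"].get(label, 0.0) - 0.5))

def low_confidence_order(messages):
    """Low Confidence: surface messages not well covered by any label
    prediction, i.e. whose highest confidence is lowest."""
    unreviewed = [m for m in messages if not m["reviewed"]]
    return sorted(unreviewed,
                  key=lambda m: max(m["confidences"].values(), default=0.0))
```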

Layout



The layout shown in the preceding image is explained in the following table:

1. Adjust the date range or period of messages shown.
2. Add various other filters based on message metadata, for example score or sender.
3. Add a general field filter.
4. Toggle between all, reviewed, or unreviewed messages; this also switches label counts between pinned and predicted.
5. Add a label filter.
6. Search for specific labels within your taxonomy.
7. Add additional labels.
8. Expand a message's metadata.
9. Refresh the current query.
10. Switch between training modes, such as Recent, Shuffle, Teach, and Low Confidence, and select a label to sort by.
11. Search the dataset for messages containing specific words or phrases.
12. Download all of the messages on this page, or export the dataset with the applied filters as a CSV file.

How much training to do for each label

The number of examples required to accurately predict each label can vary considerably, depending on how broad or specific the label concept is.

It may be that a label is typically associated with very specific and easily identifiable words, phrases, or intents, in which case the platform is able to predict it consistently with relatively few training examples. It could also be that a label captures a broad topic with many different variations of associated language, in which case it could require significantly more training examples before the platform can consistently identify instances where the label should apply.

The platform can often start making predictions for a label with as few as five examples. However, to accurately estimate the performance of a label, that is, how well the platform is able to predict it, each label requires at least 25 examples.

When annotating in Explore, the little red dials next to each label indicate whether more examples are needed to accurately estimate the performance of the label. The dial starts to disappear as you provide more training examples and will disappear completely once you reach 25.



This does not mean that with 25 examples the platform will be able to accurately predict every label, but it will at least be able to validate how well it can predict each label and alert you if additional training is required.

During the Explore phase, make sure that you have provided at least 25 examples for each of the labels that you are interested in, using a combination of the modes mentioned previously, that is, mostly Shuffle, plus Teach for unreviewed messages.
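As a rough illustration of this check, the sketch below counts pinned examples per label and flags any label of interest that is still short of 25. The data shape is assumed for illustration; in practice, the training dials surface this directly in the UI.

```python
from collections import Counter

MIN_EXAMPLES = 25  # threshold at which the training dial disappears

def labels_needing_examples(reviewed_messages, labels_of_interest):
    """Return labels of interest with fewer than 25 pinned examples,
    mapped to how many more examples each still needs.
    Assumes each reviewed message carries a list of pinned labels."""
    counts = Counter(label
                     for message in reviewed_messages
                     for label in message["labels"])
    return {label: MIN_EXAMPLES - counts[label]
            for label in labels_of_interest
            if counts[label] < MIN_EXAMPLES}
```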

During the Refine phase it may become clear that more training is required for certain labels to improve their performance. For more details, check Refining models and using Validation.

Label performance warnings

In Explore, once you reach 25 pinned examples for a label, you may notice one of the following label performance indicators in place of the training dial:

  • Grey indicates that the platform is calculating the performance of that label. Once calculated, the circle will either disappear or update to amber or red.
  • Amber indicates that the label's performance is slightly below satisfactory and could be improved.
  • Red indicates that the label is performing poorly and needs additional training or corrective actions to improve it.
  • If there is no circle, the label is performing at a satisfactory level, though it may still need improving depending on the use case and the desired accuracy levels.

To understand more about label performance and how to improve it, check Understanding and improving model performance.
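Conceptually, the indicator is a banding of the label's measured performance. The sketch below shows one possible mapping; the score bands are invented for illustration and are not the platform's actual thresholds.

```python
def performance_indicator(score):
    """Map a label's performance score (0 to 1) to an indicator colour.

    A score of None means the platform is still calculating. The band
    boundaries below are hypothetical, for illustration only.
    """
    if score is None:
        return "grey"   # performance still being calculated
    if score < 0.5:
        return "red"    # performing poorly; needs corrective action
    if score < 0.8:
        return "amber"  # slightly below satisfactory; could be improved
    return None         # satisfactory; no circle is shown
```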


Predicted label counts vs. pinned label counts

If you select the tick icon at the top of the label filter bar to filter to reviewed messages, you will be shown the number of reviewed messages that have that label applied.

If you select the computer icon to filter to unreviewed messages, you will be shown the total number of predictions for that label, which includes the number of reviewed examples too.

In Explore, when neither reviewed nor unreviewed is selected, the platform shows the total number of pinned messages for a label by default. In Reports, the default is to show the total predicted.

Note: The predicted number is an aggregation of all the probabilities that the platform calculates for this label. For example, two messages each with a confidence of 50% count as one predicted label.
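To make that aggregation concrete, here is a minimal Python sketch, assuming per-message confidences are expressed as values between 0 and 1 (the function name is hypothetical, for illustration only):

```python
def predicted_label_count(confidences):
    """Sum per-message prediction confidences for a single label.

    Hypothetical illustration of the aggregation described above:
    the predicted count is a sum of probabilities, not a count of
    messages above a confidence cut-off.
    """
    return sum(confidences)

print(predicted_label_count([0.5, 0.5]))         # 1.0 - two 50% predictions
print(predicted_label_count([0.25, 0.75, 1.0]))  # 2.0
```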

Tips for using Explore

  • The model can make predictions with only a few annotated messages, though for it to make reliable predictions you should annotate a minimum of 25 messages per label. Some labels will require more than this, depending on the complexity of the data, the label concept, and the consistency with which labels have been applied.
  • In Explore, you should also try to find messages where the model has predicted a label incorrectly. Remove the incorrect labels and apply the correct ones. This helps prevent the model from making similar incorrect predictions in the future.
Important: During this phase you will be applying a lot of labels, so make sure to adhere to the key annotating best practices: add every label that applies, apply labels consistently, and annotate based on what you can see in front of you.
