Communications Mining user guide
Labels (predictions, confidence levels, label hierarchy, and label sentiment)
A label is a structured summary of an intent or concept expressed within a message. A message is often summarized by multiple labels, which means that a label is not a mutually exclusive classification of the message.
As an example, in a dataset monitoring the customer experience, we might create a label called Incorrect Invoice Notification, which describes when a customer informs the business that they have received what they believe is an incorrect invoice.
You initially create a label by applying it to a relevant message. As you continue to apply it to other relevant messages, you build up training examples for the model, and the platform then starts to predict the label automatically across the dataset wherever it is relevant.
A label that you apply to a message is considered pinned, whereas the labels that the platform assigns to messages are known as label predictions.
To learn about reviewed and unreviewed messages, check Annotated and unannotated messages.
The platform displays a confidence level, as a percentage (%), for each label prediction. The higher the confidence level, the more confident the platform is that the label applies.
Predicted labels are also shaded according to this confidence level: the more opaque the label, the more confident the platform is that it applies.
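For illustration, the minimal Python sketch below shows how pinned labels and label predictions with confidence levels might be represented and filtered against a confidence threshold. The structure and field names are assumptions for this example only, not the actual platform or API schema.

```python
# Illustrative only: a hypothetical representation of labels on a message,
# not the Communications Mining API schema.
message = {
    "text": "The invoice you sent last week lists the wrong amount.",
    "labels": [
        # A pinned label was applied by a reviewer, so it has no confidence score.
        {"name": "Incorrect Invoice Notification", "pinned": True},
        # Predicted labels are assigned by the platform with a confidence level (0-1).
        {"name": "Billing", "pinned": False, "confidence": 0.94},
        {"name": "Complaint", "pinned": False, "confidence": 0.41},
    ],
}

CONFIDENCE_THRESHOLD = 0.75  # example cut-off; choose per label and use case

# Keep pinned labels plus predictions above the chosen confidence threshold.
accepted = [
    label for label in message["labels"]
    if label["pinned"] or label.get("confidence", 0.0) >= CONFIDENCE_THRESHOLD
]

for label in accepted:
    print(label["name"])
# Incorrect Invoice Notification
# Billing
```

In practice, the threshold is a trade-off: a higher cut-off favors precision, a lower one favors recall.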
You can arrange labels in a hierarchical structure, which helps you organize your taxonomy and train new concepts more quickly.
This hierarchy can be in the following format: [Parent label] > [Branch label 1] > [Branch label n] > [Child label]
The > character separates the levels, which form subsets of the previous labels in the hierarchy.
Any time a child label or branch label is pinned or predicted, the model considers the previous levels in the hierarchy to have been pinned or predicted as well. Predictions for parent labels will typically have higher confidence levels than the lower levels of the hierarchy, as they are often easier to identify.
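As a rough illustration of this behavior, the following Python sketch (a hypothetical helper, not part of the product) expands a child label into every level of its hierarchy, so that a child prediction also counts towards its branch and parent labels.

```python
# Illustrative sketch: expand a leaf label into every level of its hierarchy,
# mirroring the rule that pinning or predicting a child label also counts as
# pinning or predicting its branch and parent labels.
def expand_hierarchy(label: str) -> list[str]:
    """Return the label together with all of its ancestor labels.

    "Property > Location > Parking" ->
    ["Property", "Property > Location", "Property > Location > Parking"]
    """
    parts = [part.strip() for part in label.split(">")]
    return [" > ".join(parts[: i + 1]) for i in range(len(parts))]

print(expand_hierarchy("Property > Location > Parking"))
# ['Property', 'Property > Location', 'Property > Location > Parking']
```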
For more details about label hierarchies, check Taxonomies.
For datasets with sentiment analysis enabled, a green or red colour indicates a positive or negative sentiment for every label, both pinned and predicted.
Different levels of a label hierarchy can have different sentiment predictions. For example, a review could be overall positive about a Property, but could be negative about the Property > Location.
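As an illustrative sketch only (the structure and field names are hypothetical, not the platform's schema), per-level sentiment predictions for that example could be represented like this:

```python
# Illustrative only: each level of the same hierarchy carries its own sentiment.
predictions = [
    {"label": "Property", "sentiment": "positive", "confidence": 0.91},
    {"label": "Property > Location", "sentiment": "negative", "confidence": 0.83},
]

for p in predictions:
    print(f'{p["label"]}: {p["sentiment"]} ({p["confidence"]:.0%})')
# Property: positive (91%)
# Property > Location: negative (83%)
```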