Communications Mining user guide
How validation works
Within Validation, the platform evaluates the performance of both the label and general field models associated with a dataset.
For the label model specifically, it calculates an overall Model Rating by testing a number of different performance factors, including:
- How well it is able to predict each label in the taxonomy, using a sub-set of training data from within that dataset.
- How well covered the dataset as a whole is by informative label predictions.
- How balanced the training data is, in terms of how it has been assigned and how well it represents the dataset as a whole.
To test how well the model can predict each label, the platform splits the annotated messages in the dataset into:
- A majority set of training data.
- A minority set of test data.
In the following image, the coloured dots represent the annotated messages within a dataset, split into these two sets. This split is determined by each message's ID when it is added to the dataset, and remains consistent throughout the life of the dataset.
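A split that is fixed by message ID can be pictured as a deterministic hashing rule. The sketch below is purely illustrative and assumes a simple hash-based assignment; the function name and the 80/20 ratio are hypothetical, not the platform's actual splitting logic.

```python
import hashlib

def assign_split(message_id: str, test_fraction: float = 0.2) -> str:
    """Deterministically assign a message to the training or test set.

    Illustrative sketch only: hashing the message ID means the assignment
    never changes, no matter when or how often the model is retrained.
    """
    digest = hashlib.sha256(message_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return "test" if bucket < test_fraction * 100 else "train"

# The same ID always lands in the same set.
print(assign_split("msg-000123"))   # e.g. "train"
print(assign_split("msg-000123"))   # same result every time
```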
The platform then trains itself using only the messages in the training set.
Based on this training, it then tries to predict which labels should apply to the messages in the test set and evaluates the results for both precision and recall against the actual labels that were applied by a human user.
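For each label, precision and recall reduce to counting true positives, false positives, and false negatives on the test set. The following sketch, with made-up prediction and annotation sets, shows the arithmetic.

```python
def precision_recall(predicted: list[set[str]], actual: list[set[str]], label: str):
    """Compute precision and recall for a single label over a test set.

    `predicted` and `actual` hold the label sets for each test message
    (hypothetical data, for illustration only).
    """
    tp = sum(1 for p, a in zip(predicted, actual) if label in p and label in a)
    fp = sum(1 for p, a in zip(predicted, actual) if label in p and label not in a)
    fn = sum(1 for p, a in zip(predicted, actual) if label not in p and label in a)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

predicted = [{"Urgent"}, {"Request", "Urgent"}, set(), {"Request"}]
actual    = [{"Urgent"}, {"Request"}, {"Urgent"}, {"Request"}]
print(precision_recall(predicted, actual, "Urgent"))   # (0.5, 0.5)
```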
On top of this process, the platform also takes into account how labels were assigned, that is, which training modes were used when applying them, to understand whether the data has been annotated in a biased or balanced way.
Validation then publishes live statistics on the performance of the labels for the latest model version, but you can also view historic performance statistics for previously pinned model versions.
To understand how well your model covers your data, the platform looks at all of the unreviewed data in the dataset and the predictions that the platform has made for each of those unreviewed messages.
It then assesses the proportion of total messages that have at least one informative label predicted.
'Informative labels' are those labels that the platform understands to be useful as standalone labels, based on how frequently they are assigned alongside other labels. Labels that are always assigned together with another label, for example parent labels that are never assigned on their own, or Urgent if it is always assigned with another label, are down-weighted when the score is calculated.
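In essence, coverage is the share of messages whose predictions include at least one informative label. The sketch below assumes a hypothetical per-label informativeness weight and threshold; the platform's actual weighting scheme is internal and not exposed.

```python
def coverage(predictions: list[set[str]], informative_weight: dict[str, float],
             threshold: float = 0.5) -> float:
    """Fraction of messages with at least one sufficiently informative label predicted.

    `informative_weight` is a hypothetical per-label score; labels that are
    only ever assigned alongside others (e.g. bare parent labels) would be
    down-weighted and fall below the threshold.
    """
    covered = sum(
        1 for labels in predictions
        if any(informative_weight.get(label, 0.0) >= threshold for label in labels)
    )
    return covered / len(predictions) if predictions else 0.0

weights = {"Request > Refund": 0.9, "Urgent": 0.2, "Request": 0.1}
preds = [{"Request > Refund"}, {"Urgent"}, set(), {"Request", "Urgent"}]
print(coverage(preds, weights))   # 0.25
```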
When the platform assesses how balanced your model is, it's essentially looking for annotating bias that can cause an imbalance between the training data and the dataset as a whole.
To do this, it uses an annotating bias model that compares the reviewed and unreviewed data to ensure that the annotated data is representative of the whole dataset. If the data is not representative, model performance measures can be misleading and potentially unreliable.
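A common way to build this kind of representativeness check, outside the platform, is to train a simple classifier to tell reviewed messages apart from unreviewed ones: if it separates them much better than chance, the reviewed set is likely biased. The sketch below uses scikit-learn and is a generic illustration of that idea, not the platform's actual annotating bias model.

```python
# Generic representativeness check: if a classifier can easily distinguish
# reviewed messages from unreviewed ones, the reviewed set is likely biased.
# Sketch only; not the platform's annotating bias model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def bias_score(reviewed_texts: list[str], unreviewed_texts: list[str]) -> float:
    """Return mean ROC AUC for separating reviewed vs. unreviewed messages.

    A score near 0.5 suggests the reviewed set is representative;
    a score near 1.0 suggests strong annotating bias.
    """
    texts = reviewed_texts + unreviewed_texts
    labels = [1] * len(reviewed_texts) + [0] * len(unreviewed_texts)
    features = TfidfVectorizer(min_df=1).fit_transform(texts)
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, features, labels, cv=3, scoring="roc_auc").mean()
```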
Annotating bias is typically the result of an imbalance of the training modes used to assign labels, particularly if too much 'text search' is used and not enough 'Shuffle'.
The Rebalance training mode shows messages that are under-represented in the reviewed set. Annotating examples in this mode will help to quickly address any imbalances in the dataset.
Every time you complete some training within a dataset, the platform updates the model and provides new predictions across every message. In parallel, it also re-evaluates the model's performance. This means that by the time the new predictions are ready, new validation statistics should also be available (though one process can sometimes take longer than the other), including the latest Model Rating.