Training using Check label and Missed label
Introduction
Using the Check label and Missed label training modes is the part of the Refine phase where you try to identify any inconsistencies or missed labels in the messages that have already been reviewed. This is different to the Teach label step, which focuses on unreviewed messages that have predictions made by the platform, rather than assigned labels.
Check label shows you messages where the platform thinks the selected label may have been misapplied, that is, it potentially should not have been applied.
Missed label shows you messages that the platform thinks may be missing the selected label, that is, it potentially should have been applied but was not. In this case, the selected label will typically appear as a suggestion, as shown in the following image.
The suggestions from the platform in either mode are not necessarily correct; they are simply the instances where the platform is unsure, based on the training completed so far. If you disagree with a suggestion after reviewing it, you can ignore it.
Using these training modes is a very effective way of finding instances where labels may have been applied inconsistently. Correcting these instances improves the performance of the label.
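To make the intuition concrete, the following sketch shows the kind of comparison these modes rely on. It is purely illustrative: the names (ReviewedMessage, confidence, and so on) are hypothetical and are not part of the product or its API, and the platform's own ranking is more sophisticated than a single confidence threshold.

from dataclasses import dataclass

@dataclass
class ReviewedMessage:
    text: str
    assigned_labels: set    # labels a reviewer has applied to the message
    confidence: dict        # model confidence per label name, 0.0 to 1.0

def check_label_candidates(messages, label, doubt_threshold=0.3):
    # Reviewed messages that HAVE the label but where the model doubts it:
    # the kind of message Check label surfaces.
    return [m for m in messages
            if label in m.assigned_labels
            and m.confidence.get(label, 0.0) < doubt_threshold]

def missed_label_candidates(messages, label, expect_threshold=0.7):
    # Reviewed messages that LACK the label but where the model expects it:
    # the kind of message Missed label surfaces.
    return [m for m in messages
            if label not in m.assigned_labels
            and m.confidence.get(label, 0.0) > expect_threshold]

Reviewing the first list helps you remove labels that were applied inconsistently; reviewing the second helps you apply labels that were missed.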
When to use Check label and Missed label
The simplest answer is to use either training mode when it appears as one of the recommended actions in the Model Rating section, or in the specific label view on the Validation page. For more details, check Understanding and improving model performance.
As a rule of thumb, any label that has a significant number of pinned examples but low average precision, which can be indicated by red label warnings on the Validation page or in the label filter bars, will likely benefit from some corrective training in either Check label or Missed label mode.
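As a reminder of why these modes target low precision, the standard definitions are sketched below, where tp, fp, and fn are the label's true positives, false positives, and false negatives. Check label helps you find and remove false positives, which lower precision; Missed label helps you find and fix false negatives, which lower recall.

# Standard definitions, where tp, fp, and fn are the label's true positives,
# false positives, and false negatives on the reviewed set.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)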
When validating the performance of a model, the platform determines whether it thinks a label has often been applied incorrectly or has regularly been missed, and prioritises whichever corrective action it thinks would be most beneficial for improving the label's performance.
Missed label is also a very useful tool if you have added a new label to an existing taxonomy with lots of reviewed examples. Once you've provided some initial examples for the new label concept, Missed label can quickly help you identify any examples in the previously reviewed messages where it should also apply. For more details, check Adding new labels to existing taxonomies.
Using Check label and Missed label
To reach either of these training modes, there are the following main options:
- If it is a recommended action in Validation for a label, the action card acts as a link that takes you directly to that training mode for the selected label.
- Alternatively, you can select either training mode from the dropdown menu at the top of the page in Explore, and then select a label to sort by. You can find an example in the previous image.
In each mode, the platform will show you up to 20 examples per page of reviewed messages where it thinks the selected label may have been applied incorrectly (Check label), or may be missing the selected label (Missed label).
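Continuing the illustrative sketch from the introduction (again with hypothetical names, not the product's API, and assuming reviewed_messages is a list of the ReviewedMessage objects defined there), paging through the candidates for one label could look like this:

label = "Cancellation request"   # hypothetical label name
candidates = missed_label_candidates(reviewed_messages, label)

# The platform shows up to 20 examples per page; the same idea in code:
page_size = 20
for message in candidates[:page_size]:
    print(message.text, message.confidence.get(label, 0.0))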
Check label
Missed label
In Missed label, review each of the examples on the page to see whether the selected label has in fact been missed. If it has, select the label suggestion, as shown in the previous image, to apply the label. If it has not, ignore the suggestion and move on.
Just because the platform suggests a label on a reviewed message does not mean the model considers it to be a prediction, nor will the suggestion count towards any statistics on the number of labels in a dataset. If a suggestion is wrong, you can simply ignore it.
Adding labels that have been missed can therefore have a major impact on the performance of a label, by ensuring that the model has correct and consistent examples from which to make predictions for that label.