Communications Mining user guide
Adding new labels to existing taxonomies
If you have a pre-existing mature taxonomy, with many reviewed messages, adding a new label requires some additional training to bring it in line with the rest of the labels in the taxonomy.
When adding a new label to a well-trained taxonomy, make sure to apply it to previously reviewed messages if the label is relevant to them. Otherwise, the model will have effectively been taught that the new label should not apply to them and will struggle to predict the new label confidently.
The more reviewed examples there are in the dataset, the more training adding a new label will require. The exception is a label covering an entirely new concept that does not appear in older data but is common in much more recent data.
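If you want to size this re-annotation effort before you start, one option is to scan an export of the dataset for reviewed messages that mention the new concept but do not yet carry the new label. The sketch below assumes a CSV export with hypothetical `text`, `reviewed`, and `labels` columns; the key phrases and label name are placeholders to adapt to your own data.

```python
import csv

# Hypothetical values -- adapt to your own export and taxonomy.
KEY_PHRASES = ["chargeback", "dispute a charge"]  # terms that signal the new label
NEW_LABEL = "Billing > Chargeback"                # hypothetical label name

candidates = []
with open("dataset_export.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row.get("reviewed") != "true":
            continue  # only reviewed messages teach the model what does NOT apply
        text = row.get("text", "").lower()
        # A reviewed message that matches a key phrase but lacks the new label
        # is a likely candidate for re-annotation.
        if NEW_LABEL not in row.get("labels", "") and any(p in text for p in KEY_PHRASES):
            candidates.append(text[:80])

print(f"{len(candidates)} reviewed messages may need '{NEW_LABEL}' applied")
```

The count gives a rough sense of how many previously reviewed messages the new label will need before its predictions stabilise.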
- Create the new label when you find an example where it should apply.
- Select Missed label to find more messages where the platform determines the new label should have been applied. For more details, check Finding messages with a Missed label.
- Once the model has had time to retrain and calculate the new validation statistics, check how the new label performs in the Validation page. For a programmatic spot check, see the sketch after these steps.
- Check if more training is required.
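If you also want to track the new label's validation statistics outside the GUI, you could poll them programmatically. The endpoint path, response shape, and field names in this sketch are assumptions for illustration only, not the documented Communications Mining API; check the Developer guides for the real routes and authentication.

```python
import requests

API_BASE = "https://example.api/v1"   # hypothetical base URL
TOKEN = "YOUR_API_TOKEN"              # hypothetical bearer token
DATASET = "my-org/my-dataset"         # hypothetical dataset path
NEW_LABEL = "Billing > Chargeback"

# Hypothetical endpoint: returns per-label validation statistics.
resp = requests.get(
    f"{API_BASE}/datasets/{DATASET}/validation",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Assumed response shape: {"labels": [{"name": ..., "average_precision": ...}]}
for label in resp.json().get("labels", []):
    if label["name"] == NEW_LABEL:
        ap = label["average_precision"]
        print(f"{NEW_LABEL}: average precision {ap:.2f}")
        if ap < 0.8:  # the threshold is a matter of judgement, not a platform rule
            print("Consider annotating more examples for this label.")
```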
- To find more examples to annotate, search for key terms or phrases using the search function in Discover to find similar instances. This way, you can apply the label in bulk if there are many similar examples in the search results.
- As an alternative to the previous step, search for key terms or phrases in Explore. This is potentially the better method, as you can filter to Reviewed messages, and searching in Explore returns an approximate count of the messages that match your search terms.
- Select labels that you think might often appear alongside your new label, and review the pinned examples for that label to find examples where your new label should be applied.
- Once you have a few pinned examples, check whether the new label starts to be predicted in Label mode. If it does, add more examples using this mode.
- If you are annotating in a sentiment-enabled dataset and your new label is either positive or negative, you can also filter by positive or negative sentiment when checking reviewed examples. Note that you cannot combine a text search with both the Reviewed and Sentiment filters.
- Once you have annotated quite a few examples using the previous methods, and the model has had time to retrain, use the Missed label functionality in Explore by selecting your label and then selecting Missed label from the dropdown menu.
- This shows you reviewed messages where the model determines that the selected label may have been missed in the previously reviewed examples. A simplified sketch of this logic follows the list.
- In these instances, the model shows the label as a suggestion on the message.
- Apply the label to all of the messages where the model correctly determines that the label should have been applied.
- Keep training on this page until you have annotated all of the correct examples and this mode no longer shows you messages where the label should actually apply.
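Conceptually, Missed label surfaces reviewed messages where the model is confident a label applies even though no reviewer applied it. The sketch below reproduces that idea over hypothetical prediction records; the data structure and the confidence threshold are illustrative, not the platform's internal values.

```python
# Each record pairs a reviewed message's applied labels with the model's
# predicted confidences. The structure and values are illustrative only.
reviewed_messages = [
    {"id": "msg-1", "applied": {"Billing"},
     "confidences": {"Billing": 0.97, "Billing > Chargeback": 0.81}},
    {"id": "msg-2", "applied": {"Account Closure"},
     "confidences": {"Account Closure": 0.92, "Billing > Chargeback": 0.12}},
]

NEW_LABEL = "Billing > Chargeback"
THRESHOLD = 0.5  # illustrative cut-off, not a platform constant

# A "missed" label: the model is confident, but the reviewer never applied it.
missed = [
    m["id"]
    for m in reviewed_messages
    if NEW_LABEL not in m["applied"]
    and m["confidences"].get(NEW_LABEL, 0.0) >= THRESHOLD
]
print("Review these messages again:", missed)  # -> ['msg-1']
```

As you annotate the flagged messages and the model retrains, the set of missed candidates should shrink, which is exactly the stopping condition described in the last step above.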