Communications Mining user guide
Introduction to Refine
Refine is the third and final phase of the training process. Its purpose is to understand how your model is performing and to refine it until it performs as required. This involves improving specific labels that are not performing as expected, ensuring you have captured all of the relevant label concepts, and making sure your training data is a balanced representation of the dataset as a whole.
The platform is designed to be completely transparent about model performance, and flexible when it comes to improving performance in the areas that require it. For any use case, you want to be confident that your model captures an accurate representation of what is in your dataset, and this phase of training helps ensure that it does.
This section of the Knowledge Base covers the steps outlined below in detail, beginning with explanations of precision and recall, how Validation works, and how to understand the different aspects of model performance.
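As a quick primer before those detailed explanations, the sketch below shows how precision and recall are derived from prediction outcomes. It is illustrative only: the counts and the label name are invented for the example and do not come from the platform.

```python
# Illustrative only: toy counts for a hypothetical "Request > Statement" label.
true_positives = 45   # messages predicted with the label that genuinely have it
false_positives = 5   # messages predicted with the label that do not have it
false_negatives = 15  # messages that have the label but did not receive a prediction

precision = true_positives / (true_positives + false_positives)  # 45 / 50 = 0.90
recall = true_positives / (true_positives + false_negatives)     # 45 / 60 = 0.75

print(f"Precision: {precision:.2f}, Recall: {recall:.2f}")
```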
- Review Model Rating - This step is about checking your Model Rating in Validation and identifying where the platform thinks there may be performance issues with your model, along with guidance on how to address them. It includes detail on understanding and improving model performance.
- Refine label performance - This step is about taking actions, recommended by the platform, to improve the performance of your labels. These include using the Check label and Missed label training modes, which help you address potential inconsistencies in your annotating, as well as Teach label mode. For more details, check Training using Teach label (Refine).
- Increase coverage - This step helps ensure that as much of your dataset as possible is covered by meaningful label predictions (see the sketch after this list).
- Improve balance - This step is about ensuring that your training data is a balanced representation of the dataset as a whole. Improving the balance in the dataset helps to reduce annotating bias and increase the reliability of predictions made.
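To make the coverage idea from the list above concrete, here is a minimal sketch of how coverage could be estimated: the fraction of messages that receive at least one sufficiently confident label prediction. The message structure and the 0.66 confidence threshold are assumptions made purely for illustration, not the platform's actual data model.

```python
# Illustrative only: each message carries a list of (label, confidence) predictions.
# This structure and the 0.66 threshold are assumptions for the example, not the
# platform's internal representation.
messages = [
    {"id": 1, "predictions": [("Request > Statement", 0.91)]},
    {"id": 2, "predictions": [("Complaint", 0.42)]},
    {"id": 3, "predictions": []},
    {"id": 4, "predictions": [("Request > Statement", 0.70), ("Urgent", 0.88)]},
]

CONFIDENCE_THRESHOLD = 0.66

covered = sum(
    1
    for message in messages
    if any(confidence >= CONFIDENCE_THRESHOLD for _, confidence in message["predictions"])
)
coverage = covered / len(messages)

print(f"Coverage: {coverage:.0%}")  # 50%: messages 1 and 4 have a confident prediction
```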