Improving Balance and using Rebalance
Balance: Introduction and importance
The Balance rating presented in the Model Rating in Validation reflects how balanced the reviewed data (that is, the training data) in a dataset is when compared to the dataset as a whole.
It takes into account a number of contributing factors, including:
- The similarity of the reviewed data to the unreviewed data, shown as a percentage score.
- The proportion of reviewed data that has been reviewed through random sampling, that is, Shuffle mode.
- The proportion of data that has been reviewed using Rebalance.
- The proportion of data that has been reviewed whilst using Text search.
It's important that the proportion of data reviewed through random sampling is high (ideally 20%+) and that the proportion of reviewed data annotated using Text search is low.
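As a rough illustration of this proportions check, the Python sketch below tallies which training mode each reviewed message was annotated in. The message data and the trained_by field are invented for the example; adapt them to whatever your export actually contains.

```python
# Minimal sketch: what proportion of reviewed messages came from each
# training mode? The "trained_by" values here are hypothetical.
from collections import Counter

reviewed_messages = [
    {"text": "please update my mailing address", "trained_by": "shuffle"},
    {"text": "status of trade 4521?", "trained_by": "teach_label"},
    {"text": "cancel my standing order", "trained_by": "text_search"},
    {"text": "invoice attached for April", "trained_by": "shuffle"},
]

counts = Counter(m["trained_by"] for m in reviewed_messages)
total = sum(counts.values())

for mode, count in counts.most_common():
    print(f"{mode}: {count / total:.0%}")

# Rough targets from the guidance above: 20%+ of reviewed data from
# random sampling (Shuffle), and as little as possible from Text search.
if counts["shuffle"] / total < 0.20:
    print("Warning: under 20% of reviewed data came from random sampling")
```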
The Balance rating is most heavily influenced, however, by the similarity score that measures the similarity of the unreviewed data to the reviewed data.
This similarity score is calculated by a proprietary annotating bias model that compares the reviewed and unreviewed data to ensure that the annotated data is representative of the whole dataset. If the data is not representative and has been annotated in a biased manner, model performance measures can be misleading and potentially unreliable.
Annotating bias in the platform is typically the result of an imbalance in the training modes used to assign labels, particularly if too much Text search is used and not enough Shuffle mode. It can still occur, however, even if a high proportion of Shuffle mode is used: training specific labels in modes like Teach label can naturally lead to a slight imbalance in the reviewed data. The platform helps you identify when this happens and address it quickly and effectively.
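The annotating bias model itself is proprietary, but what it measures can be pictured with a standard technique sometimes called adversarial validation: train a simple classifier to tell reviewed messages apart from unreviewed ones. If it can barely beat chance, the two sets look alike and the training sample appears representative; if it separates them easily, the reviewed data differs systematically from the rest of the dataset. A minimal sketch with scikit-learn, using invented example messages:

```python
# Illustrative "adversarial validation" check -- not the platform's
# actual bias model. Can a classifier distinguish reviewed messages
# from unreviewed ones?
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

reviewed = [  # hypothetical annotated messages
    "please update my mailing address",
    "change the address on my account",
    "update the contact details for this policy",
    "amend my postal address please",
]
unreviewed = [  # hypothetical unannotated messages
    "what is the settlement status of trade 4521?",
    "please confirm receipt of the attached invoice",
    "can you resend the statement for March?",
    "the payment failed, please retry",
]

texts = reviewed + unreviewed
labels = np.array([1] * len(reviewed) + [0] * len(unreviewed))

X = TfidfVectorizer().fit_transform(texts)
probs = cross_val_predict(
    LogisticRegression(max_iter=1000), X, labels,
    cv=4, method="predict_proba",
)[:, 1]

# AUC near 0.5: reviewed and unreviewed data are hard to tell apart,
# so the training sample looks representative. AUC near 1.0: the sets
# differ systematically -- a sign of annotating bias.
print(f"Reviewed-vs-unreviewed AUC: {roc_auc_score(labels, probs):.2f}")
```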
Rebalance: Introduction and usage
Rebalance is a training mode that helps reduce potential imbalances in how a model has been annotated, i.e. annotating bias, which means that the reviewed data is not as representative of the whole dataset as it could be.
The Rebalance training mode shows messages that are underrepresented in the reviewed set.
Annotating the messages presented in this mode (as you would in any other training mode) helps address imbalances in the training data and improves the model's Balance rating.
If you find that you have a high similarity score but the Balance rating is still low, this is likely because you have not annotated enough of the training data in Shuffle mode. If this is the case, the platform will suggest annotating a random selection of messages as the priority recommended action. Training in this mode gives the platform additional confidence that the dataset has not been annotated in a biased manner and that the training data is a representative sample.
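Rebalance's selection logic is proprietary, but one way to picture "underrepresented" is a nearest-neighbour check: unreviewed messages that sit far from every reviewed message are the ones the training data covers least well. A sketch of that intuition, again with invented example data:

```python
# Illustrative sketch of "underrepresented": rank unreviewed messages
# by how far they sit from their nearest reviewed neighbour. This is
# not the platform's actual algorithm, just a way to picture it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

reviewed = [
    "please update my mailing address",
    "change the address on my account",
    "update the contact details for this policy",
]
unreviewed = [
    "amend my postal address please",
    "what is the settlement status of trade 4521?",  # unlike anything reviewed
    "please update my address",
]

vectoriser = TfidfVectorizer().fit(reviewed + unreviewed)
nn = NearestNeighbors(n_neighbors=1, metric="cosine").fit(
    vectoriser.transform(reviewed))

# Distance from each unreviewed message to its closest reviewed message.
distances, _ = nn.kneighbors(vectoriser.transform(unreviewed))

# The furthest messages are the least represented in the reviewed set,
# so they are the most useful candidates to annotate next.
for dist, text in sorted(zip(distances.ravel(), unreviewed), reverse=True):
    print(f"{dist:.2f}  {text}")
```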
How much Rebalance to use
You should continue to use Rebalance iteratively to improve the similarity score for your model, which will in turn increase your Balance rating.
Once the Balance rating reaches Good in Validation, it is up to you how much further you would like to increase the similarity score before you stop training in Rebalance.
You can aim to optimise this rating as much as possible, but continued training is always a case of diminishing returns; a Good rating should typically be considered an acceptable level of performance.