Communications Mining user guide
When to stop training your model
With the platform's comprehensive Validation capabilities, including the Model Rating functionality, understanding when to stop training your model is now relatively simple.
The required level of performance for your model is up to you and your business, but the platform's Model Rating gives you a clear sense of how your model is performing and how to improve it if needed.
A model with a score of 70 or above is classed as Good, while a score of 90 or above is required for a model to be classed as Excellent.
Whatever the use case, it is recommended that you confirm the following before you stop training:
- That your model has an overall score with a rating of at least Good, as this means the platform considers the model to be relatively healthy overall.
- That each of the individual factors also has a rating of at least Good.
- That none of your important labels have red or amber performance warnings.
For an analytics-focused model, beyond the factors listed previously, how far to optimise the model's performance is at the model trainer's discretion. The required level of performance can depend on a variety of factors, including the objectives of the use case and the capacity of the model trainer to continue training.
If you're creating a model that is intended to enable automations, it's recommended that your model has an Excellent rating and that it is tested on live data before being deployed into production.
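To make these criteria concrete, here is a minimal Python sketch that expresses them as a checklist. It is illustrative only: the inputs (overall_score, factor_scores, label_warnings) are hypothetical stand-ins for the values shown on the Validation page, not fields of any platform API, and the factor names in the example are placeholders.

```python
def rating_band(score: float) -> str:
    """Map a Model Rating score to its band: 70 or above is Good, 90 or above is Excellent."""
    if score >= 90:
        return "Excellent"
    if score >= 70:
        return "Good"
    return "Needs improvement"


def ready_to_stop_training(
    overall_score: float,
    factor_scores: dict,   # hypothetical: factor name -> score read from the Validation page
    label_warnings: dict,  # hypothetical: important label -> "none" | "amber" | "red"
    automation_use_case: bool,
) -> bool:
    """Return True when the guidance above suggests training can stop."""
    order = {"Needs improvement": 0, "Good": 1, "Excellent": 2}
    # Analytics models need at least a Good overall rating; automation models should be Excellent.
    required = "Excellent" if automation_use_case else "Good"

    if order[rating_band(overall_score)] < order[required]:
        return False
    # Each individual factor should also be rated at least Good.
    if any(rating_band(score) == "Needs improvement" for score in factor_scores.values()):
        return False
    # None of your important labels should carry a red or amber performance warning.
    if any(warning in ("amber", "red") for warning in label_warnings.values()):
        return False
    return True


# Example: an analytics model with a Good overall rating, healthy factors, and no warnings.
print(ready_to_stop_training(78, {"Balance": 81, "Coverage": 74}, {"Urgent": "none"}, False))
```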
Additional optional performance checks
While the Model Rating is a comprehensive performance assessment, you may want to complete some additional checks to ensure you're completely comfortable with the performance of your model.
If this is the case, here are some useful checks you can do, with recommended actions. Note that if the platform considers any of these actions important, it will also recommend them in Validation.
Check | Process | Actions to take |
---|---|---|
The two-day period prediction review | Review predictions on 1-2 days' worth of recent data: use the time filter and select 'recent' in the drop-down to pick 2 recent days' worth of data. Review the predictions and make sure each message has a reasonably high-confidence prediction. Reviewing 1-2 days' worth of predictions should ensure all potential concepts are covered. | • If there are messages with no predictions or insufficient confidence, annotate them as normal • Then train more in Shuffle and Low Confidence |
Shuffle | Review predictions in Shuffle for at least 5 pages. Each message should have a label predicted with reasonably high confidence. | • If there are messages with no predictions or insufficient confidence, annotate them as normal • Then train more in Shuffle and Low Confidence |
Low Confidence | The Low Confidence mode shows you messages that are not well covered by informative label predictions. These messages will have either no predictions or very low-confidence predictions for labels that the platform understands to be informative. | • If there are messages that have not been covered, add a new label for them and train it out as normal • Where you find a message for an existing label, apply it as normal |
Re-Discover | Returning to Discover can show you potential new clusters where the probability of any label applying is low. This should be used to ensure you have not missed any potential labels, or to provide existing labels with more varied examples, in a similar way to Low Confidence. | • If there are clusters with no (or very low-confidence) predictions, annotate the cluster with either a new label or an existing one if applicable • Train out any new label as normal |
Re-Discover
Re-Discover is a step that can be revisited at any time during the training process, but can also be useful when checking whether you have completed sufficient training.
This check entails going back to the Discover page in Cluster mode and reviewing the clusters there, both to check their predictions and to see whether Discover has found any clusters that your training may have missed.
As the clusters in Discover retrain after a significant amount of training has been completed in the platform (180 annotations) or a significant amount of data has been added to the dataset (1000 messages or 1%, whichever is greater, and at least 1 annotation), they should regularly update throughout the training process.
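As a rough illustration of those retraining triggers, the Python sketch below checks whether the clusters would be expected to retrain after a given amount of activity. The function and its parameters are hypothetical, not part of the platform; only the thresholds come from the guidance above.

```python
def discover_clusters_should_retrain(
    new_annotations: int,
    new_messages: int,
    dataset_size: int,
    total_annotations: int,
) -> bool:
    """Rough check of when the Discover clusters are expected to retrain.

    Per the guidance above: after 180 new annotations, or after new data exceeding
    the greater of 1,000 messages and 1% of the dataset (with at least 1 annotation).
    """
    trained_enough = new_annotations >= 180
    data_threshold = max(1000, 0.01 * dataset_size)
    enough_new_data = new_messages >= data_threshold and total_annotations >= 1
    return trained_enough or enough_new_data


# Example: a 250,000-message dataset picks up 3,000 new messages, exceeding the 1% threshold.
print(discover_clusters_should_retrain(new_annotations=40, new_messages=3000,
                                        dataset_size=250000, total_annotations=500))
```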
Discover tries to find clusters that are not well covered by label predictions. If there are clusters in Discover that should have certain labels predicted but don't, you know you need to do some more training for those labels. For more details on how to annotate clusters in Discover, check Training using clusters.
If your model is well trained, Discover will struggle to find clusters with low confidence or no predictions. If you notice that each of the clusters in Discover has reasonably high-confidence and correct predictions, this is a good indicator that your model covers the dataset well.