Introduction to Refine
The third and final phase of the training process is called ‘Refine’. The purpose of this stage is to understand how your model is performing and refine it until it performs as required. This involves improving specific labels that are not performing as expected, ensuring you have captured all of the relevant label concepts, and making sure your training data is a balanced representation of the dataset as a whole.
The platform is designed to be completely transparent about model performance, and flexible in how you improve performance in the areas that require it. For any use case, you want to be confident that your model captures an accurate representation of what's in your dataset, and this phase of the training helps ensure that you can be.
This section of the Knowledge Base will cover in detail the steps outlined below, but will begin with detailed explanations of precision and recall, how Validation works, and how to understand the different aspects of model performance.
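Before the detailed explanations, a quick intuition for precision and recall can help. The sketch below is purely illustrative (the message IDs and sets are invented, not platform data): precision is the share of a label's predictions that were correct, and recall is the share of messages that should have had the label and did.

```python
# Minimal sketch of precision and recall for a single label.
# The predicted/actual sets below are invented purely for illustration.

def precision_recall(predicted, actual):
    """Compute precision and recall for one label.

    predicted: set of message IDs the model predicted the label for
    actual:    set of message IDs a human annotated with the label
    """
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

predicted = {1, 2, 3, 4}      # model predicted the label on these messages
actual = {2, 3, 4, 5, 6}      # annotators applied the label on these
p, r = precision_recall(predicted, actual)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.60
```

Note the trade-off this makes visible: predicting the label more aggressively tends to raise recall while lowering precision, which is why the Refine phase looks at both measures per label.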
Key steps
- Review Model Rating: check your Model Rating in Validation to see where the platform thinks there may be performance issues with your model, along with guidance on how to address them. This section includes detail on understanding and improving model performance.
- Refine label performance: take the actions recommended by the platform to improve the performance of your labels. These include the 'Check label' and 'Missed label' training modes, which help you address potential inconsistencies in your annotating, as well as 'Teach label' mode (covered in more detail in the Explore phase).
- Increase coverage: ensure that as much of your dataset as possible is covered by meaningful label predictions.
- Improve balance: ensure that your training data is a balanced representation of the dataset as a whole. Improving the balance of the training data helps reduce annotating bias and increases the reliability of the model's predictions.
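To make the coverage idea in the steps above concrete, here is a simplified sketch. It assumes coverage means the proportion of messages with at least one label prediction above a confidence threshold; the label names, confidences, and the 0.5 threshold are invented for illustration and are not how the platform computes its own coverage metric internally.

```python
# Simplified sketch of label coverage: the share of messages that receive
# at least one label prediction above a confidence threshold.
# All data and the threshold are illustrative, not platform values.

def coverage(message_predictions, threshold=0.5):
    """message_predictions: list of dicts mapping label name -> confidence."""
    if not message_predictions:
        return 0.0
    covered = sum(
        1 for preds in message_predictions
        if any(conf >= threshold for conf in preds.values())
    )
    return covered / len(message_predictions)

messages = [
    {"Order > Cancellation": 0.92},          # covered
    {"Query": 0.41},                         # below threshold -> not covered
    {},                                      # no predictions -> not covered
    {"Complaint": 0.77, "Query": 0.30},      # covered
]
print(f"coverage={coverage(messages):.2f}")  # coverage=0.50
```

Low coverage in this sense usually means some concepts in the data have no corresponding label yet, or existing labels need more varied training examples, which is exactly what the 'Increase coverage' step targets.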