Precision
Precision measures the proportion of the predictions made by the model that were actually correct. That is, of all the positive predictions that the model made, what proportion were true positives.
Precision = true positives / (true positives + false positives)
For example, for all the messages in a dataset predicted as having the ‘Request for information’ label, the precision is the percentage of those messages for which the prediction was correct.
A precision of 95% would mean that for every 100 messages predicted as having a specific label, 95 would be correctly predicted and 5 would be wrongly predicted (i.e. they should not have received that label).
For a more detailed explanation of how precision works, please see here.
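As a simple illustration, the sketch below computes precision from raw prediction counts. The helper function, label name, and figures are illustrative only and mirror the worked example above.

```python
# Minimal sketch: precision from prediction counts.
# The counts and label are illustrative, not taken from a real dataset.

def precision(true_positives: int, false_positives: int) -> float:
    """Precision = true positives / (true positives + false positives)."""
    total_predicted_positive = true_positives + false_positives
    if total_predicted_positive == 0:
        return 0.0  # no positive predictions were made for this label
    return true_positives / total_predicted_positive

# Example: of 100 messages predicted as 'Request for information',
# 95 actually had that label and 5 did not.
print(precision(true_positives=95, false_positives=5))  # 0.95
```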
Average precision (AP)
The average precision (AP) score for an individual label is calculated as the average of all the precision scores at each recall value (between 0 and 100%) for that label.
Essentially, the average precision measures how well the model performs across all confidence thresholds for that label.
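The sketch below approximates this idea using scikit-learn's average precision metric, which averages precision over the recall values reached as the confidence threshold is lowered. The ground-truth flags and confidence scores are invented for illustration, and the platform's exact internal calculation may differ.

```python
# Hedged sketch: approximating a label's average precision (AP) from
# annotated ground truth and model confidence scores (illustrative data).
from sklearn.metrics import average_precision_score

# 1 = message truly has the label, 0 = it does not.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
# Model confidence that each message has the label.
y_score = [0.92, 0.85, 0.78, 0.66, 0.60, 0.55, 0.40, 0.31, 0.25, 0.10]

# AP summarises the precision-recall curve for this label: precision is
# averaged over the recall levels obtained as the threshold decreases.
print(average_precision_score(y_true, y_score))
```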
Mean average precision (MAP)
Mean average precision (MAP) is one of the most useful measures of overall model performance and is an easy way to compare different model versions against one another.
The MAP score takes the mean of the average precision scores for every label in your taxonomy that has at least 20 examples in the training set used in Validation.
Typically, the higher the MAP score, the better the model is performing overall, though this is not the only factor that should be considered when understanding how healthy a model is. It is also important to know that your model is unbiased and has high coverage.
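The following sketch shows how MAP follows from the per-label AP scores, skipping labels with fewer than 20 training examples as described above. The label names, AP values, and example counts are hypothetical.

```python
# Hedged sketch: MAP as the mean of per-label average precision scores,
# excluding labels with fewer than 20 training examples (illustrative data).

per_label_ap = {  # label -> (average precision, training examples)
    "Request for information": (0.91, 140),
    "Complaint": (0.84, 55),
    "Chaser": (0.62, 12),  # fewer than 20 examples: excluded from MAP
}

eligible = [ap for ap, n_examples in per_label_ap.values() if n_examples >= 20]
mean_average_precision = sum(eligible) / len(eligible)
print(round(mean_average_precision, 3))  # 0.875
```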
Mean precision at recall