Communications Mining user guide
Precision
Precision measures the proportion of the model's predictions that are actually correct. In other words, of all the positive predictions the model made, it identifies what proportion were true positives.
Precision = true positives / (true positives + false positives)
For example, consider the messages in a dataset predicted as having the Request for information label: precision is the percentage of those predictions where Request for information was predicted correctly, out of the total number of times it was predicted.
A precision of 95% means that for every 100 messages predicted as having a specific label, 95 are predicted correctly and 5 are predicted incorrectly, meaning they should not have received that label.
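As a quick illustration, the formula above can be sketched in Python. The function below is not product code; it simply mirrors the 95-out-of-100 example.

```python
# Minimal sketch of the precision formula above (illustrative only).
def precision(true_positives: int, false_positives: int) -> float:
    """Precision = true positives / (true positives + false positives)."""
    total_predicted = true_positives + false_positives
    return true_positives / total_predicted if total_predicted else 0.0

# Mirroring the example: 95 correct predictions and 5 incorrect ones.
print(precision(true_positives=95, false_positives=5))  # 0.95
```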
For a more detailed explanation of how precision works, see Precision and recall explained.
The average precision (AP) score for an individual label is calculated as the average of the precision scores at each recall value, from 0 to 100%, for that label.
Essentially, the average precision measures how well the model performs across all confidence thresholds for that label.
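The sketch below shows one common way to compute average precision for a single label: rank the messages by the model's confidence and average the precision at each point where a correct prediction occurs. It is an illustration only; the exact calculation used in Validation may differ, and the confidence scores and flags are made up.

```python
# Illustrative average precision (AP) for one label, assuming predictions
# are ranked by confidence and compared against the true annotations.
def average_precision(confidences, has_label):
    """confidences: model confidence per message for the label.
    has_label: whether each message truly has the label."""
    ranked = sorted(zip(confidences, has_label), key=lambda x: x[0], reverse=True)
    total_positives = sum(has_label)
    if total_positives == 0:
        return 0.0
    true_positives, precision_sum = 0, 0.0
    for rank, (_, correct) in enumerate(ranked, start=1):
        if correct:
            true_positives += 1
            precision_sum += true_positives / rank  # precision at this recall step
    return precision_sum / total_positives

print(average_precision([0.9, 0.8, 0.7, 0.6], [True, False, True, False]))  # ~0.83
```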
Mean average precision (MAP) is one of the most useful measures of overall model performance and provides an easy way to compare different model versions against one another.
The MAP score is the mean of the average precision scores of every label in your taxonomy that has at least 20 examples in the training set used in Validation.
Typically, the higher the MAP score, the better the model is performing overall. However, MAP is not the only factor to consider when assessing how healthy a model is: it is also important to know that your model is unbiased and has high coverage.
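The snippet below sketches how a MAP score can be derived from per-label AP scores, keeping only labels with at least 20 training examples, as described above. All label names, example counts, and AP values are hypothetical.

```python
# Hypothetical per-label average precision scores and training example counts.
per_label_ap = {"Request for information": 0.91, "Complaint": 0.84, "Thanks": 0.66}
training_examples = {"Request for information": 120, "Complaint": 45, "Thanks": 12}

# Only labels with at least 20 training examples contribute to MAP.
eligible = [label for label, count in training_examples.items() if count >= 20]
map_score = sum(per_label_ap[label] for label in eligible) / len(eligible)
print(round(map_score, 3))  # 0.875 -- the mean AP over the eligible labels
```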