Understanding validation on extractions and extraction performance
The Validation page shows an overview of extraction performance, and helps you drill down into the individual performance of each extraction.
You can access the validation details in the Extractions tab of the Validation page. The Extractions tab is only available if you have extraction fields defined on your dataset.
The default page of the Extractions tab is an overview using the All filter. This page provides the following summary statistics on the overall performance of extractions in the dataset:
- Extractions Mean F1 score
- Extractions Mean Precision
- Extractions Mean Recall
Select individual labels to view the performance of individual extractions, that is, the label and its associated extraction fields.
For each extraction, you can view the following values:
- F1 score
- Precision
- Recall
For all of the extraction fields of the label combined, you can view the following values:
- Average F1 score
- Average precision
- Average recall
For each individual extraction field, you can view the following values:
- F1 score
- Precision
- Recall
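As a reminder, the F1 score is the harmonic mean of precision and recall. The following is a minimal sketch in plain Python (not a Communications Mining API; the field names and the simple averaging are illustrative assumptions) of how per-field values roll up into the averages:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-field validation metrics for one label's extraction fields.
fields = {
    "account_id":     {"precision": 0.92, "recall": 0.88},
    "po_number":      {"precision": 0.85, "recall": 0.80},
    "payment_amount": {"precision": 0.90, "recall": 0.86},
}

avg_precision = sum(f["precision"] for f in fields.values()) / len(fields)
avg_recall = sum(f["recall"] for f in fields.values()) / len(fields)
avg_f1 = sum(f1_score(f["precision"], f["recall"]) for f in fields.values()) / len(fields)

print(f"Average precision: {avg_precision:.2f}")
print(f"Average recall:    {avg_recall:.2f}")
print(f"Average F1 score:  {avg_f1:.2f}")
```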
How the confidence levels work depends on the underlying LLM that you use.
If you use the CommPath LLM, the model assigns a set of confidence scores (percentages) to each prediction.
CommPath calculates and returns the following:
- Occurrence confidence: The likelihood that the detected occurrence corresponds to the assigned label.
- Extraction confidence: The confidence in the correctness of the extracted content.
These confidence values enable downstream automations to filter out extractions with confidence levels below a set label threshold. By setting an appropriate threshold, you can ensure that only predictions that meet a desired confidence level are used in workflows.
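For example, a downstream automation might only straight-through process extractions whose confidences meet the label threshold. Here is a minimal sketch, assuming a simplified extraction structure; the field names are illustrative, not the exact API schema:

```python
LABEL_THRESHOLD = 0.85  # the threshold configured for the label

# Hypothetical extractions, each carrying the two CommPath confidence scores.
extractions = [
    {"label": "Statement of Accounts", "occurrence_confidence": 0.95, "extraction_confidence": 0.91},
    {"label": "Statement of Accounts", "occurrence_confidence": 0.72, "extraction_confidence": 0.88},
]

# Keep only extractions where both confidences meet the threshold; the rest
# can be routed to human review instead of straight-through processing.
confident = [
    e for e in extractions
    if e["occurrence_confidence"] >= LABEL_THRESHOLD
    and e["extraction_confidence"] >= LABEL_THRESHOLD
]
```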
The Preview LLM for Generative Extraction provides a single label occurrence confidence value for each extraction, replacing both the occurrence confidence and the extraction confidence. This approach differs from CommPath, which returns the two confidence values separately for each extraction.
Returning the label confidence still helps you filter out extractions downstream if needed, allowing you to improve the precision of some results.
This section describes the outputs of the Get Stream Results activity. For more details, check the Communications Mining™ dispatcher framework.
To automate with Generative Extraction, you must first understand the outputs of your extractions.
Occurrence confidence refers to how confident the model is about the number of times a request occurs in a message, that is, how many times an extraction might occur.
As an example, to process a statement of accounts into a downstream system, you always need an Account ID, a PO number, the payment amount, and the due date.
Check the occurrence confidence example in the following image. It shows how the model can confidently identify two potential occurrences that you need to process downstream.
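Conceptually, the model returns one extraction per detected occurrence, each with its own occurrence confidence. A hypothetical illustration (not the exact response schema):

```python
# Hypothetical: two occurrences of the same label detected in one message,
# each with its own occurrence confidence.
occurrences = [
    {"label": "Statement of Accounts", "occurrence_confidence": 0.94},
    {"label": "Statement of Accounts", "occurrence_confidence": 0.89},
]
print(f"Detected {len(occurrences)} potential statements to process downstream")
```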
Extraction confidence is the model's confidence in its predictions. This includes how accurately it thinks it predicted the instance of a label and its related fields. It also includes the model's confidence in correctly predicting whether a field is missing.
Consider the same example as before: to process a statement of accounts into a downstream system, you always need an Account ID, a PO number, the payment amount, and the due date.
This time, however, the PO number is not present in the message, and neither is the due date; only the start date is.
The extraction confidence in this example is the model's confidence in identifying whether the values for each field associated with the label are present, including its confidence in correctly predicting that a field is missing.
In this case, you do not have all the fields you need to fully process the request.
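In an automation, you would typically verify that every required field was extracted before processing, and route incomplete extractions to human review. Here is a sketch under the same illustrative schema (the field names are assumptions):

```python
REQUIRED_FIELDS = {"account_id", "po_number", "payment_amount", "due_date"}

# Hypothetical extraction where the PO number and due date were not found;
# only the start date was detected instead of the due date.
extraction = {
    "label": "Statement of Accounts",
    "fields": {"account_id": "AC-1042", "payment_amount": "1250.00", "start_date": "2024-01-01"},
}

missing = REQUIRED_FIELDS - extraction["fields"].keys()
if missing:
    # Not all required fields are present, so the request cannot be
    # fully processed automatically; route it to review instead.
    print(f"Missing required fields: {sorted(missing)}")
```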
Stream refers to the confidence threshold you set in Communications Mining, and whether the prediction meets this threshold.
Instead of filtering out predictions based on thresholds, this route returns whether each prediction's confidence met the thresholds.
In other words, if your thresholds were met, the stream value is returned. If not, the value is empty.
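In practice, this means a consumer of the response can simply check whether the stream value is populated. A minimal sketch (the structure is illustrative):

```python
def thresholds_met(prediction: dict) -> bool:
    """Thresholds were met if the 'stream' value is returned (non-empty)."""
    return bool(prediction.get("stream"))

# Hypothetical predictions: the first met its thresholds, the second did not.
predictions = [
    {"label": "Statement of Accounts", "stream": "accounts-stream"},
    {"label": "Statement of Accounts", "stream": ""},
]
to_automate = [p for p in predictions if thresholds_met(p)]
```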
- The previous image contains a sample snippet to explain the different components, not the full output of a Generative Extraction response. Additionally, where there are multiple extractions, each extraction is conditioned on the extractions before it. For labels without extraction fields, the occurrence confidence is equivalent to the label confidence that you can view in the user interface.
- If the model fails to extract all of the fields in a message because there are too many fields, it returns an extraction in the stream response with an occurrence confidence and an extraction confidence of zero.