Understanding validation on extractions and extraction performance
The Validation page shows an overview of extraction performance, and helps you drill down into the individual performance of each extraction.
You can access it from the Extractions tab within Validation. The tab is only available if you have extraction fields defined on your dataset.
The default view of the Extractions Validation page is the ‘All’ overview, which provides the following summary statistics on the overall performance of extractions in the dataset:
- Extractions Mean F1 score
- Extractions Mean Precision
- Extractions Mean Recall
Click into an individual label to see the performance of its extractions, that is, the label and its associated extraction fields.
For each extraction, you can see the following values:
- F1 score
- Precision
- Recall
For all of the extraction fields for the label, you can see the following values:
- Average F1 score
- Average precision
- Average recall
For the individual extraction fields, you can see the following values:
- F1 score
- Precision
- Recall
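The F1 score is the harmonic mean of precision and recall. As a quick illustration, the following sketch uses made-up values, not figures from a real dataset:

```python
# Illustrative only: compute an F1 score from example precision and recall values.
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 0 if both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: a field with 0.90 precision and 0.75 recall has an F1 of ~0.82.
print(round(f1_score(0.90, 0.75), 2))  # 0.82
```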
How confidence levels work varies depending on the underlying LLM that you use.
The Preview LLM does not provide confidence levels on its predictions. Instead, it returns whether a label or field is predicted (Yes = 1) or not (No = 0).
As a result, there is no concept of different confidence thresholds.
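As a rough sketch of the practical difference, the hypothetical downstream checks below are not part of the product API, only an illustration:

```python
# Illustrative only: how downstream logic might treat the two kinds of output.
def accept_with_threshold(confidence: float, threshold: float = 0.85) -> bool:
    """Models with probabilistic confidences are compared against a tuned threshold."""
    return confidence >= threshold

def accept_preview_prediction(prediction: int) -> bool:
    """The Preview LLM only returns 1 (predicted) or 0 (not predicted),
    so there is no threshold to tune."""
    return prediction == 1

print(accept_with_threshold(0.91))    # True
print(accept_preview_prediction(1))   # True
```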
This section describes the outputs of the Get Stream Results activity. Check the Communications Mining dispatcher framework page for more details.
To automate with Generative extraction, it is important to understand the contents of your extraction outputs.
Occurrence confidence: refers to how confident the model is about the number of times a request occurs within a message, that is, how many times an extraction might occur.
As an example: to process a statement of accounts into a downstream system, you always need an Account ID, a PO number, the payment amount, and the due date.
The occurrence confidence example below shows how the model can confidently identify that there are two potential occurrences for which you need to facilitate this downstream process.
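A minimal sketch of what such an occurrence prediction could look like in a downstream workflow; the field names and values are illustrative assumptions, not the actual activity output:

```python
# Illustrative only: a label predicted to occur twice in one message,
# each occurrence carrying its own occurrence confidence.
occurrence_prediction = {
    "label": "Statement of Accounts",
    "occurrences": [
        {"occurrence_confidence": 0.97},  # first statement found in the message
        {"occurrence_confidence": 0.91},  # second statement found in the message
    ],
}

# Downstream, you might only create a work item for occurrences above a chosen threshold.
confident = [o for o in occurrence_prediction["occurrences"]
             if o["occurrence_confidence"] >= 0.9]
print(f"{len(confident)} occurrence(s) to process")  # 2 occurrence(s) to process
```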
Extraction confidence is the model's confidence about its predictions. This includes how accurate it thinks it was in predicting a label's instance and its related fields. It also includes the model's confidence in correctly predicting if a field is missing.
Consider the same example as before. To process a statement of accounts into a downstream system, you always need an Account ID, PO number, the payment amount, and the due date.
However, this time neither the PO number nor the due date is present in the message (only the start date is).
The extraction confidence in this example reflects the model's confidence in identifying whether the value for each field associated with the label is present, as well as its confidence in correctly predicting that a field is missing.
In this case, you do not have all of the fields you need to fully extract the required information.
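A minimal sketch, under the same illustrative assumptions as above, of an extraction where the PO number and due date are confidently predicted as missing:

```python
# Illustrative only: an extraction where two required fields are missing from the message.
extraction = {
    "label": "Statement of Accounts",
    "extraction_confidence": 0.88,  # overall confidence in this extraction
    "fields": {
        "account_id":     {"value": "AC-10492",  "confidence": 0.95},
        "po_number":      {"value": None,        "confidence": 0.90},  # confidently predicted as missing
        "payment_amount": {"value": "1,250.00",  "confidence": 0.93},
        "due_date":       {"value": None,        "confidence": 0.87},  # only a start date appears in the message
    },
}

# A downstream workflow could route the item for manual review when required fields are missing.
required = ["account_id", "po_number", "payment_amount", "due_date"]
missing = [f for f in required if extraction["fields"][f]["value"] is None]
if missing:
    print(f"Route to exception queue, missing: {missing}")
```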
Check the example below of the output that the Get Stream Results activity returns.
Stream refers to the thresholds you set in Communications Mining, and whether the message meets them.
Instead of filtering out predictions based on thresholds, this route returns which prediction confidences met the thresholds.
In other words, if your thresholds were met, the stream value is returned. If not, this value is empty.
Additionally, where there are multiple extractions, each extraction's confidence is conditioned on the extractions before it.
For labels without extraction fields, the occurrence confidence is equivalent to the label confidence that you can see in the UI.
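Putting this together, here is a hedged sketch of the shape such a result might take; the field names and the stream name are illustrative assumptions, not the actual activity output:

```python
# Illustrative only: a simplified stream result, showing how the "stream" value is
# populated when the configured confidence thresholds are met, and left empty when not.
results = [
    {
        "label": "Statement of Accounts",
        "occurrence_confidence": 0.97,
        "extraction_confidence": 0.88,
        "stream": "statements-stream",  # thresholds met: the stream value is returned
    },
    {
        "label": "Payment Query",
        "occurrence_confidence": 0.42,
        "extraction_confidence": 0.35,
        "stream": "",  # thresholds not met: the value is empty
    },
]

# Only items whose thresholds were met (non-empty stream) continue to automation.
to_automate = [r for r in results if r["stream"]]
print([r["label"] for r in to_automate])  # ['Statement of Accounts']
```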