Validate and annotate generated extractions
Give sufficient examples so that the model can provide you with validation statistics. These statistics help you understand how well your extractions perform and allow you to fine-tune them.
Review the results and:
- Accept the extraction(s) if they are all correct.
- Correct the extractions if there are any incorrect predictions.
- Mark extractions as missing if they are not present in the message.
- Configure any additional fields if any are missing that are required to enable end-to-end automation.
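The review decisions above can be sketched as a small triage function. This is an illustrative sketch only, not the product's API: the field names and data shapes are assumptions, and a real reviewer makes these calls in the UI.

```python
# Hypothetical sketch: compare a model's predicted field values against the
# values a reviewer confirms, and classify each field as accepted, corrected,
# or missing -- mirroring the three review actions described above.

def triage_extraction(predicted: dict, confirmed: dict) -> dict:
    """Classify each field of a single extraction.

    predicted  -- field name -> value the model extracted (absent if none)
    confirmed  -- field name -> value the reviewer says is correct
    Returns a mapping: field name -> "accept", "correct", or "missing".
    """
    decisions = {}
    for field, truth in confirmed.items():
        guess = predicted.get(field)
        if guess is None:
            decisions[field] = "missing"   # model did not extract the field
        elif guess == truth:
            decisions[field] = "accept"    # prediction matches, confirm as-is
        else:
            decisions[field] = "correct"   # prediction present but wrong
    return decisions
```

For example, `triage_extraction({"amount": "100"}, {"amount": "100", "date": "2024-01-01"})` reports `amount` as accepted and `date` as missing.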
Why is fine-tuning important?
Fine-tuning uses the annotations you gather to improve the performance of the extraction model, taking the out-of-the-box model and enhancing it for your specific use cases.
When can you stop?
Stop once you have provided at least 25 examples of label extractions for the model to use in its validation process. Then check Validation to see whether the performance is sufficient, or whether more examples are needed.
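The stopping rule above can be illustrated with a short count-based check. This is a sketch under assumptions: the 25-example threshold comes from the text, but the data shape and function name are hypothetical, not part of the product.

```python
from collections import Counter

# Illustrative check for the stopping rule: count annotated examples per
# label extraction and report which labels still need more examples before
# Validation can compute meaningful statistics.
MIN_EXAMPLES = 25  # threshold stated in the documentation above

def labels_needing_examples(annotated_labels):
    """annotated_labels: iterable of label names, one per annotated example.

    Returns a mapping of label -> number of additional examples needed,
    for labels still below the threshold.
    """
    counts = Counter(annotated_labels)
    return {
        label: MIN_EXAMPLES - n
        for label, n in counts.items()
        if n < MIN_EXAMPLES
    }
```

For instance, with 25 examples of one label and 10 of another, only the second label is reported, needing 15 more.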
| # | Description |
|---|-------------|
| 1 | If all the field predictions are correct, selecting the Confirm button allows you to confirm the annotations in bulk. |
| 2 | To add or modify any fields that should have been present in the message, select +, next to the general field or extraction field section. |
| 3 | Checking this box allows you to confirm that a field annotation is correct, at the extraction level. |
| 4 | Shows which data point was predicted for a given field. If the prediction is incorrect, selecting the x button allows you to replace it with the correct one. |
| 5 | Shows the position in the message where a data point is predicted. |
| 6 | To add or modify any fields, hover next to the + button on the respective general field or extraction field section. |
| 7 | To expand the fields displayed for General Fields or specific Extraction Fields, select the dropdown button. |
The image below shows what an extraction looks like in its unconfirmed state. On the right pane, the extraction is marked as not confirmed, and the highlighting on the text itself has a lighter colour.
The Extractions Train tab is in public preview.
To validate your extractions via the Train tab, follow these steps:
- Go to Train.
- Go to Extraction.
- Select the label extraction that you want to validate.
- Once you have selected the label extraction, confirm whether the displayed message is an applicable example of that label.
- Once you have applied all the applicable labels, select Next: Annotate Fields.

Validating extractions in the Train tab is similar to validating them in Explore. The main differences are that messages are shown in training batches, and the Confirm all and next button automatically redirects you to the next message to annotate.