Communications Mining User Guide
Validate and annotate generated extractions
Provide enough examples so that the model can generate validation statistics for you. These statistics help you understand how well your extractions perform, and they allow you to fine-tune your extractions.
Review the results and:
- Accept the extraction(s) if they are all correct.
- Correct the extractions if there are any incorrect predictions.
- Mark extractions as missing if they are not present in the message.
- Configure any additional fields if any are missing that are required to enable end-to-end automation.
Why is fine-tuning important?
Fine-tuning allows you to use the annotations gathered to improve the performance of the extraction model.
It allows you to take the out-of-the-box model and enhance performance for your use cases.
When can you stop?
Stop once you have provided at least 25 examples of label extractions for the model to use in its validation process. Check Validation to see whether the performance is sufficient, or whether more examples are needed.
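The stopping rule above can be sketched as a small check. This is an illustrative sketch only, not part of the Communications Mining product or API; the label names and counts are hypothetical.

```python
# Minimal sketch: flag labels that do not yet have enough annotated
# extraction examples. The guide suggests at least 25 per label before
# relying on the validation statistics.
MIN_EXAMPLES = 25

def labels_needing_more_examples(example_counts):
    """Return the labels with fewer than MIN_EXAMPLES annotated extractions."""
    return [label for label, count in example_counts.items()
            if count < MIN_EXAMPLES]

# Hypothetical counts of annotated extraction examples per label.
counts = {"Invoice > Payment date": 31, "Order > Quantity": 12}
print(labels_needing_more_examples(counts))  # ['Order > Quantity']
```

Labels returned by the check still need more annotated examples before their validation statistics become meaningful.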
| # | Description |
| --- | --- |
| 1 | If all the field predictions are correct, select Confirm all to validate in bulk that the annotations are correct. |
| 2 | To add new extraction fields, select the plus button next to General fields or next to any of the extraction fields. To edit any existing fields, select the vertical ellipsis next to General fields or next to any of the extraction fields. |
| 3 | In the side panel, checking the box next to the predicted extractions confirms that a field annotation is correct, at the extraction level. |
| 4 | Under each field, you can find the data point predicted by the model. If the prediction is incorrect, select the x button and replace the field value with the correct one. |
| 5 | The predicted data points are marked in the original message. |
| 6 | To add or modify any fields, hover next to the + button in the respective general field or extraction field section. |
| 7 | To expand the fields displayed for General fields or for specific extraction fields, select the dropdown button. |
- Select Explore next to a dataset to access that particular dataset.
- Under the Explore tab, select Annotate fields.
- In the side panel, select the Predict extractions button.
- In the same side panel, matching indicators marked by a red or green circle are displayed next to the model predictions.
Note: Matching indicators show whether the model predictions match the annotations you made for the extraction fields. The indicators appear in the user interface as a red or green circle at both the extraction field and extraction levels. When the values do not match, they either mismatch or are missing. You can re-run the latest model predictions to confirm whether they match the provided annotations. The states that the matching indicators can return are:
- Green - the predictions match the annotations. Visible at the extraction level only if all extraction fields have green indicators.
- Red - either the predictions do not match the annotations, or a prediction is missing an annotation. Visible at the extraction level if any field in the extraction has a red indicator.
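The matching-indicator rule described in the note can be sketched as a small function. This is an illustrative sketch of the documented behaviour only, not Communications Mining's actual implementation; the function names and data shapes are assumptions.

```python
# Sketch of the matching-indicator rule: a field is green only when the
# prediction matches a non-missing annotation; an extraction is green only
# when every one of its fields is green, and a single red field turns the
# whole extraction red.
def field_indicator(prediction, annotation):
    """Green when the prediction matches the annotation; red on a mismatch
    or when the annotation is missing."""
    if annotation is None or prediction != annotation:
        return "red"
    return "green"

def extraction_indicator(fields):
    """fields: list of (prediction, annotation) pairs for one extraction."""
    if all(field_indicator(p, a) == "green" for p, a in fields):
        return "green"
    return "red"
```

For example, an extraction whose fields are `[("2024-05-01", "2024-05-01"), ("EUR", None)]` would show red overall, because the second field has no annotation.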
The following image shows what an extraction looks like in its unconfirmed state. On the right pane, the extraction is marked as not confirmed, and the highlighting on the text itself has a lighter colour.
You validate extractions under the Train tab in a similar way to the Explore tab.
To validate your extractions, apply the following steps:
- Select Train next to a dataset to access that particular dataset.
- Under the Train tab, select Extractions.
- Select the label extraction you want to validate.
- Confirm if the displayed message is an applicable example of the label.
- Once you apply all the applicable labels, select Next: Annotate Fields.
- On the page that follows, matching indicators marked by a red or green circle are displayed next to the model predictions.
Note: Matching indicators show whether the model predictions match the annotations you made for the extraction fields. The indicators appear in the user interface as a red or green circle at both the extraction field and extraction levels. When the values do not match, they either mismatch or are missing. You can re-run the latest model predictions to confirm whether they match the provided annotations. The states that the matching indicators can return are:
- Green - the predictions match the annotations. Visible at the extraction level only if all extraction fields have green indicators.
- Red - either the predictions do not match the annotations, or a prediction is missing an annotation. Visible at the extraction level if any field in the extraction has a red indicator.
- Select Confirm all and next to automatically move on to the next message to annotate.