Evaluating model performance
- Healthy models have a Good or Excellent project score and no field performance warnings.
- The project score is calculated as the average F1 score across all fields, as illustrated in the sketch after this list.
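As a rough illustration of how an average F1 project score can be derived, the sketch below computes an F1 score for each field from precision and recall and then averages the results. The field names and precision/recall values are hypothetical examples, not product output, and the code is not the product's exact implementation.

```python
# Illustrative sketch: deriving a project-level score as the average of
# per-field F1 scores. Field names and values below are made-up examples.

def f1_score(precision: float, recall: float) -> float:
    """Standard F1: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-field validation results: (precision, recall).
fields = {
    "invoice_number": (0.98, 0.95),
    "total_amount": (0.92, 0.90),
    "due_date": (0.85, 0.80),
}

per_field_f1 = {name: f1_score(p, r) for name, (p, r) in fields.items()}
project_score = sum(per_field_f1.values()) / len(per_field_f1)

for name, score in per_field_f1.items():
    print(f"{name}: F1 = {score:.3f}")
print(f"Project score (average F1): {project_score:.3f}")
```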
For a more detailed evaluation, open the dashboard for a specific model version in the Measure tab. The dashboard shows the overall project score as well as the individual performance of each extraction.
A model version captures the state of the project at the time it was created. Versions can be published to save them and use them in an automation, or starred in Measure to preserve their performance statistics. Compare current performance against previous versions to confirm that performance keeps improving as you iterate on instructions.
- Go to the Measure tab. The page displays the latest version of the model or the version you starred beforehand.
- Expand Model version, and select an older version to compare performance across different model versions.
- Check and compare the performance of the selected version displayed in the dashboard.
A new model version is created each time you change your taxonomy, including its instructions, or the model settings. The latest version of the model is always available, but you can also star a specific model version to keep its performance statistics displayed in the dashboard.
- Expand the Model Version drop-down menu to view the list of all available versions.
- Select the star icon next to the model version that you want to always be displayed at the top of the list and on the dashboard.
Note: Starring a model version does not save the model version itself, only the performance statistics. To save a model version, it must be published in the Publish tab.