Document Understanding User Guide

Last updated Apr 10, 2025

Measure

You can check the overall status of your project and identify areas with improvement potential from the Measure section.

Project measure

The main measurement on the page is the overall Project score.

This measurement factors in the classifier and extractor scores for all document types. The score of each factor corresponds to the model rating and can be viewed in Classification Measure and Extraction Measure, respectively.

The model rating is a functionality intended to help you visualize the performance of a model. It is expressed as a model score from 0 to 100, as follows (see the sketch below):
  • Poor (0-49)
  • Average (50-69)
  • Good (70-89)
  • Excellent (90-100)

Regardless of the model score, it is up to you to decide when to stop training, depending on your project needs. Even if a model is rated as Excellent, that doesn't mean that it will meet all business requirements.
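
As a rough illustration of how these bands work, the following Python sketch maps a model score to its rating using the thresholds listed above. The model_rating helper is hypothetical and is not part of Document Understanding.

    # Illustrative only: maps a model score (0-100) to the rating bands listed above.
    # The model_rating helper is hypothetical; it is not part of Document Understanding.
    def model_rating(score: float) -> str:
        if not 0 <= score <= 100:
            raise ValueError("model score must be between 0 and 100")
        if score < 50:
            return "Poor"        # 0-49
        if score < 70:
            return "Average"     # 50-69
        if score < 90:
            return "Good"        # 70-89
        return "Excellent"       # 90-100

    print(model_rating(72))  # prints "Good"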

Classification measure

The Classification score factors in the performance of the model as well as the size and quality of the dataset.

Note: The Classification score is only available if you have more than one document type created.

If you click on Classification, two tabs are displayed on the right side:
  • Factors: Provides recommendations on how to improve the performance of your model. You can get recommendations on dataset size or trained model performance for each document type.
  • Metrics: Provides useful metrics, such as the number of training and test documents, precision, accuracy, recall, and F1 score for each document type.


Extraction measure

The Extraction score factors in the overall performance of the model as well as the size and quality of the dataset. This view is split into document types. You can also go straight to the Annotate view of each document type by clicking Annotate.

If you click on any of the available document types from the Extraction view, three tabs are displayed on the right side:
  • Factors: Provides recommendations on how to improve the performance of your model. You can get recommendations on dataset size (number of uploaded documents, number of annotated documents) or trained model performance (field accuracy) for the selected document type.
  • Dataset: Provides information about the documents used for training the model, the total number of imported pages, and the total number of labelled pages.
  • Metrics: Provides useful information and metrics, such as the field name, training status, and accuracy for the selected document type. You can also access advanced metrics for your extraction models using the Download advanced metrics button. This feature allows you to download an Excel file with detailed metrics and model results per batch.


Dataset diagnostics

The Dataset tab helps you build effective datasets by providing feedback and recommendations on the steps needed to achieve good accuracy for the trained model.



There are four dataset status levels exposed in the Management bar:

  • Red - More labelled training data is required.
  • Orange - More labelled training data is recommended.
  • Light green - Labelled training data is within recommendations.
  • Dark green - Labelled training data is within recommendations. However, more data might be needed for underperforming fields.

If no fields are created in the session, the dataset status level is grey.

Compare model

You can compare the performance of two versions of a classification or extraction model from the Measure section.

Classification model comparison

To compare the performance of two versions of a classification model, first navigate to the Measure section. Then, select Compare model for the classification model you are interested in.

You can choose the versions you want to compare from the drop-down list at the top of each column. By default, the current version, which is the most recent version available, is selected on the left, and the most recent published version on the right.

Figure 1. Classification model comparison

Comparing classification models relies on four key metrics (see the sketch after this list):
  • Precision: the ratio of correctly predicted positive instances to the total instances that were predicted positive. A model with high precision produces fewer false positives.
  • Accuracy: the ratio of correct predictions (including both true positives and true negatives) to the total number of instances.
  • Recall: the proportion of actual positive cases that were correctly identified.
  • F1 score: the harmonic mean of precision and recall, aiming to strike a balance between these two metrics. This serves as a trade-off between false positives and false negatives.
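
To make these definitions concrete, here is a minimal Python sketch that computes the four metrics from true/false positive and negative counts for a single document type. The counts are invented for illustration and are not values produced by Document Understanding.

    # Illustrative only: the four comparison metrics computed from made-up
    # confusion-matrix counts for one document type.
    tp, fp, tn, fn = 80, 10, 95, 15  # hypothetical counts

    precision = tp / (tp + fp)                  # fewer false positives -> higher precision
    recall = tp / (tp + fn)                     # share of actual positives that were found
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # correct predictions over all instances
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall

    print(f"precision={precision:.2f} recall={recall:.2f} "
          f"accuracy={accuracy:.2f} f1={f1:.2f}")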

The order in which document types are displayed follows the latest version in the comparison. If a document type is not available in one of the compared versions, the values for each metric are replaced with N/A.

Note: If a field was removed in the current version but it was available in the older version before the Compare model feature was available, the name is replaced with Unknown.

Extraction model comparison

To compare the performance of two versions of an extraction model, first navigate to the Measure section. Then, select Compare model for the extraction model you are interested in.

You can choose the versions you want to compare from the drop-down list at the top of each column. By default, the current version, which is the most recent version available, is selected on the left, and the most recent published version on the right.

Figure 2. Extraction model comparison

Comparing extraction models relies on the following key metrics:
  • Field name: the name of the annotation field.
  • Content type: the content type of the field:
    • String
    • Number
    • Date
    • Phone
    • ID Number
  • Rating: a model score intended to help you visualize the performance of the extracted field.
  • Accuracy: the fraction of the model's predictions that are correct.

The order in which field names are displayed follows the latest version in the comparison. If a field name is not available in one of the compared versions, the values for each metric are replaced with N/A.

Note: If a field was removed in the current version but it was available in the older version before the Compare model feature was available, the name is replaced with Unknown.

You can also compare the field score for tables from the Table section.

You can download the advanced metrics file for each version in the comparison by using the Download advanced metrics button on the comparison page.
