Important: Communications Mining is now part of UiPath IXP. Check the Introduction in the Overview Guide for more details.

Communications Mining user guide

Last updated Aug 1, 2025

Precision

Precision measures the proportion of the model's positive predictions that were actually correct. In other words, it identifies what proportion of all the positive predictions the model made were true positives.

Precision = true positives / (true positives + false positives)

For example, for the Request for information label, precision is the percentage of messages predicted as having that label for which the prediction was actually correct, out of all the times the label was predicted.

A 95% precision would mean that for every 100 messages predicted as having a specific label, 95 would be correctly predicted and 5 would be incorrectly predicted, meaning those 5 messages should not have had that label applied.
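The following is a minimal sketch of this calculation in Python; the annotated and predicted lists are hypothetical inputs for illustration, not part of the Communications Mining API.

```python
# Minimal sketch: precision for a single label, given hypothetical
# per-message ground truth (annotated) and model output (predicted).

def precision(annotated: list[bool], predicted: list[bool]) -> float:
    true_positives = sum(a and p for a, p in zip(annotated, predicted))
    false_positives = sum((not a) and p for a, p in zip(annotated, predicted))
    if true_positives + false_positives == 0:
        return 0.0
    return true_positives / (true_positives + false_positives)

# Example: 100 messages predicted as "Request for information",
# of which 95 genuinely have the label -> precision = 0.95
annotated = [True] * 95 + [False] * 5
predicted = [True] * 100
print(precision(annotated, predicted))  # 0.95
```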

For a more detailed explanation of how precision works, see Precision and recall explained.

Average Precision (AP)

The AP score for an individual label is calculated as the average of the precision scores at each recall value, from 0 to 100%, for that label.

Essentially, the average precision measures how well the model performs across all confidence thresholds for that label.
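As an illustration only (not the product's internal implementation), the sketch below computes AP for one label from hypothetical (confidence, has_label) pairs, averaging the precision at each point where recall increases as messages are ranked by confidence.

```python
# Minimal sketch: average precision for one label.
# scored: list of (confidence, has_label) pairs, one per message,
# where has_label is the ground-truth annotation for this label.

def average_precision(scored: list[tuple[float, bool]]) -> float:
    ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
    total_positives = sum(has_label for _, has_label in ranked)
    if total_positives == 0:
        return 0.0
    true_positives = 0
    precisions = []
    for rank, (_, has_label) in enumerate(ranked, start=1):
        if has_label:
            true_positives += 1
            # Precision at the point where this positive is recalled
            precisions.append(true_positives / rank)
    return sum(precisions) / total_positives
```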

Mean average precision (MAP)

MAP is one of the most useful measures of overall model performance and is an easy way to compare different model versions against one another.

The MAP score is the mean of the average precision scores for every label in your taxonomy that has at least 20 examples in the training set used in Validation.

Typically, the higher the MAP score, the better the model is performing overall. However, MAP is not the only factor to consider when assessing how healthy a model is; it is also important to check that your model is unbiased and has high coverage.
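The aggregation described above can be sketched as follows, assuming per-label AP scores and per-label training-set counts are already available (both dictionaries are hypothetical).

```python
# Minimal sketch: MAP over labels with enough training examples.
MIN_TRAINING_EXAMPLES = 20  # labels below this threshold are excluded

def mean_average_precision(ap_by_label: dict[str, float],
                           training_counts: dict[str, int]) -> float:
    eligible = [ap for label, ap in ap_by_label.items()
                if training_counts.get(label, 0) >= MIN_TRAINING_EXAMPLES]
    return sum(eligible) / len(eligible) if eligible else 0.0
```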

Mean precision at recall

The mean precision at recall is another metric of overall model performance. It is presented graphically as a precision-at-recall curve averaged over all of the labels in your taxonomy.
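One way such an averaged curve could be derived is sketched below, assuming each label's precision-recall curve is already available as a list of (recall, precision) points; the function name and the 101-point recall grid are illustrative assumptions, not the documented implementation.

```python
# Minimal sketch: precision-at-recall curve averaged over all labels.
# curves: one list of (recall, precision) points per label.

def mean_precision_at_recall(curves, grid_points=101):
    grid = [i / (grid_points - 1) for i in range(grid_points)]
    averaged = []
    for r in grid:
        # For each label, take the best precision achievable at recall >= r.
        per_label = [max((p for rec, p in curve if rec >= r), default=0.0)
                     for curve in curves]
        averaged.append(sum(per_label) / len(per_label))
    return list(zip(grid, averaged))
```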



