Pinning and tagging a model version
Important:
Communications Mining is now part of UiPath IXP. Check the Introduction in the Overview Guide for more details.

Communications Mining user guide
Last updated Aug 1, 2025
Note: You must have the Source - Read and Dataset - Read permissions assigned as an Automation Cloud user, or the View sources and View labels permissions as a legacy user.
Every time you train the platform on your data, that is, whenever you annotate any messages, a new version of the model associated with your dataset is created. Because these models are large and complex, previous versions are not automatically stored in our databases, as the storage requirements would be prohibitively large.
The latest version of the model is always readily available, but you can pin a specific model version that you want to keep. You can also tag pinned models as Live or Staging.
- Pinning a model gives you determinism over predictions, particularly when you are using Streams. This means you can be confident in the precision and recall scores for this version of the model, as future training events will not alter them. For a rough idea of how a pinned version is referenced from a stream, see the sketch after this list.
- On the Validation page, you can view the validation scores for previous pinned model versions. This allows you to compare scores over time and see how your training has improved the model.
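The exact endpoints for working with streams are documented in the Developer section. As a rough illustration only, the hedged Python sketch below shows how a pinned model version number might be referenced when creating a stream, so that the predictions the stream serves stay fixed. The base URL, paths, and payload fields are assumptions, not the confirmed API; check them against the Developer guide and API reference before use.

```python
import os

import requests

# All placeholder values below are assumptions - substitute your own
# organisation, tenant, project, dataset, stream name, and the model
# version number shown next to the pin toggle on the Models page.
API_BASE = "https://cloud.uipath.com/<organisation>/<tenant>/reinfer_/api/v1"
DATASET = "<project>/<dataset>"
STREAM_NAME = "invoice-triage"
PINNED_MODEL_VERSION = 3

session = requests.Session()
session.headers.update({"Authorization": f"Bearer {os.environ['CM_API_TOKEN']}"})

# Create (or update) a stream that is fixed to the pinned model version.
# Because the stream references an explicit version, the predictions it
# serves stay stable even as the dataset is retrained.
response = session.put(
    f"{API_BASE}/datasets/{DATASET}/streams",
    json={
        "stream": {
            "name": STREAM_NAME,
            "model": {"version": PINNED_MODEL_VERSION},
        }
    },
)
response.raise_for_status()
print(response.json())
```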
To pin a model version:
- Navigate to the Models page using the navigation bar.
- Select the pin toggle to save the current model version.