Maintenance
Learn more about managing per-tenant quotas for Communications Mining™ and about the model version deprecation process.
All users can view quotas, but requesting changes to them requires the Tenant Admin permission.
The Quotas page lets you view and manage the quotas enforced for Communications Mining™ in the current tenant. Users with the Tenant Admin permission can request an increase or decrease to each quota. For certain quotas, the Notification Service also alerts users when they are approaching the limit and should request an increase.
To change the limit for a quota:
- Select the Edit Sources quota limit icon in the last column of the table.
- In the pop-up that is displayed, set the new limit using the up and down arrows.
- Select Update to save the changes, or Cancel to close the pop-up without applying any changes.
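If you prefer to watch quota consumption programmatically rather than wait for the in-product notifications, a periodic poll can raise the same early warning. The sketch below is illustrative only: the /quotas route and the response shape are assumptions, not documented endpoints, so check the Communications Mining API reference for the actual calls.

```python
# Illustrative sketch only: the /quotas route and the response shape below
# are ASSUMPTIONS for illustration, not documented endpoints. Check the
# Communications Mining API reference for the real calls.
import os

import requests

# Base URL pattern for Communications Mining; adjust to your organization
# and tenant.
API_BASE = "https://cloud.uipath.com/<organization>/<tenant>/reinfer_/api/v1"
TOKEN = os.environ["CM_API_TOKEN"]  # an API token able to read tenant quotas

response = requests.get(
    f"{API_BASE}/quotas",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()

# Mirror the in-product notifications: flag any quota within 10% of its limit.
for quota in response.json().get("quotas", []):
    usage, limit = quota["usage"], quota["limit"]
    if limit and usage / limit >= 0.9:
        print(f"Quota '{quota['name']}' at {usage}/{limit}: request an increase.")
```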
The Deprecated Models page shows you any model versions for datasets in your tenant that will soon be deprecated. Deprecation ensures that all production datasets use more recent, improved model versions.
To maintain optimal functionality and security, older pinned model versions (at least 12 months old) may be scheduled for deprecation.
To ensure a smooth transition, all deprecated models are flagged well in advance. You can find early deprecation indicators both on this page and on the Models page of the relevant dataset. This proactive approach gives you sufficient time to adjust your work without disruption.
Following the initial announcement, you have a transition period of at least three months. After this period ends, deprecated versions are deemed unsupported and are no longer accessible via the API.
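For automations that pin a specific model version in their prediction calls, "no longer accessible via the API" means those calls start failing once the version becomes unsupported. The following is a minimal defensive sketch in Python, assuming a requests-based client; the predict route, payload shape, and error statuses shown are illustrative assumptions to confirm against the API reference.

```python
# Defensive call against a pinned model version. The predict route, payload,
# and error statuses below are ILLUSTRATIVE ASSUMPTIONS; confirm the exact
# endpoint in the Communications Mining API reference.
import os

import requests

API_BASE = "https://cloud.uipath.com/<organization>/<tenant>/reinfer_/api/v1"
TOKEN = os.environ["CM_API_TOKEN"]
PINNED_VERSION = 7  # the model version your automation currently pins

payload = {"documents": []}  # fill with messages as described in the API reference

response = requests.post(
    f"{API_BASE}/datasets/<project>/<dataset>/labellers/{PINNED_VERSION}/predict",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=30,
)
if response.status_code in (404, 410):
    # A deprecated version past its transition period is no longer served:
    # pin and validate a newer version, then update PINNED_VERSION.
    raise RuntimeError(
        f"Model version {PINNED_VERSION} is unsupported; pin a newer version."
    )
response.raise_for_status()
```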
Unpin deprecated model versions after pinning a more recent model version within the same dataset. This ensures that service and API calls continue without interruption during the transition.
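The migration itself is usually just a version bump in whatever configuration your automation reads. As a sketch, assuming a listing route for a dataset's model versions (again an assumption to verify in the API reference):

```python
# Migration sketch: find the newest model version for a dataset, pin it in
# the Models page, validate your automation against it, then unpin the
# deprecated version. The /labellers listing route is an ASSUMPTION; verify
# it in the Communications Mining API reference.
import os

import requests

API_BASE = "https://cloud.uipath.com/<organization>/<tenant>/reinfer_/api/v1"
TOKEN = os.environ["CM_API_TOKEN"]

response = requests.get(
    f"{API_BASE}/datasets/<project>/<dataset>/labellers",  # hypothetical route
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()

versions = [model["version"] for model in response.json().get("labellers", [])]
if versions:
    print(f"Latest available model version: {max(versions)}")
```

Whichever approach you use, complete the repin before the transition period ends, so that no automation is still referencing an unsupported version when API access to it is removed.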