Communications Mining user guide
Overview
The Exchange integration provides a convenient, easy-to-set-up way to sync your development and production email data into Communications Mining™ in real time.
The Exchange integration continuously polls your Exchange server for new emails in the configured mailboxes. The emails are cleaned, enriched, and converted into Communications Mining comment objects, and can be accessed by users on the Communications Mining web platform, and by applications or bots via the Communications Mining API. The Exchange integration runs in the Communications Mining cloud.
The mailboxes to poll can be conveniently configured in the Communications Mining UI, which also offers the functionality to start or stop the integration, and to update the configuration parameters used to connect to your Exchange server.
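As an illustration of the polling flow described above, the sketch below assembles the kind of authenticated, client-initiated GET request the integration relies on. The base URL, route, `since` parameter, bucket name, and token are all hypothetical placeholders for illustration, not the documented Communications Mining API surface.

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen  # stdlib HTTPS client

def build_poll_request(base_url: str, bucket: str, since: str, token: str) -> Request:
    """Assemble the authenticated GET used to poll for new comments.

    All names here (route, ``since`` parameter, bucket) are illustrative
    placeholders, not the real Communications Mining API endpoints.
    """
    url = f"{base_url}/buckets/{bucket}/comments?" + urlencode({"since": since})
    return Request(url, headers={"Authorization": f"Bearer {token}"}, method="GET")

req = build_poll_request(
    "https://tenant.example.com/api", "support-emails",
    "2024-01-01T00:00:00Z", "YOUR_TOKEN",
)
# The call is read-only and outbound-only: a GET initiated by the client,
# never an inbound connection. urlopen(req) would perform the request;
# it is omitted here to keep the sketch offline.
```

The point of the sketch is the direction of data flow: the client initiates every request, which is why the integration needs no inbound connectivity.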
Microsoft Exchange compatibility
The Exchange integration is compatible with Exchange Online, and with Microsoft Exchange 2010-2019 server using Exchange Web Services (EWS).
Security
The Exchange integration polls the Exchange server by making authenticated GET requests over HTTPS. The Exchange integration receives data via the GET requests it initiates, and does not accept any inbound connections initiated elsewhere. The Exchange integration can be configured with specific ciphers.
It is sufficient to give the Exchange integration read-only access to your mailboxes.
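The note above that the integration can be configured with specific ciphers can be pictured with Python's standard-library `ssl` module. This is a generic sketch of restricting the cipher suites a client offers on its outbound HTTPS connections; the cipher string shown is an example, not the integration's actual configuration mechanism.

```python
import ssl

# Build a client-side TLS context that only offers a restricted set of
# cipher suites. "ECDHE+AESGCM" is an example OpenSSL cipher string
# selecting forward-secret AES-GCM suites; the real integration exposes
# its cipher list through its own configuration, not through this code.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.set_ciphers("ECDHE+AESGCM")

# Inspect which cipher suites the context will actually offer.
enabled = [cipher["name"] for cipher in context.get_ciphers()]
```

A context like this would be passed to the HTTPS client making the polling requests, so every connection to the Exchange server negotiates only from the allowed suites.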
The integration will continuously sync emails from each configured mailbox. If no starting time is provided, all emails will be synced. Emails are deduplicated by mapping each email's message_id to the hex-encoded comment ID of the synced email. If multiple mailboxes containing the same email (e.g. due to several recipients being CC'ed on the same email) are synced into the same bucket, the resulting comment will have the metadata of the last synced of the duplicate emails.
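The deduplication described above can be sketched as a pure function: hex-encoding the message_id yields a deterministic comment ID, so every copy of the same email maps to the same comment. This assumes the comment ID is simply the hex encoding of the message_id bytes, which matches the description here but may differ in detail from the actual implementation.

```python
def comment_id_from_message_id(message_id: str) -> str:
    """Derive a comment ID by hex-encoding the email's message_id.

    Sketch only: the real integration's encoding details may differ. The
    key property is determinism, so duplicate emails (e.g. the same
    message synced from several CC'ed mailboxes into one bucket)
    collapse into a single comment.
    """
    return message_id.encode("utf-8").hex()

# Two mailbox copies of the same email produce the same comment ID,
# which is why only one comment results per bucket.
copy_a = comment_id_from_message_id("<abc123@example.com>")
copy_b = comment_id_from_message_id("<abc123@example.com>")
```

Because the mapping is deterministic rather than random, re-syncing a mailbox is also idempotent: already-synced emails resolve to existing comments instead of creating duplicates.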
The integration will update the Folder user property of a comment when the corresponding email is moved to a different folder. The integration won't update the comment subject or body if either is updated in the email after it has been sent or received.
You can also enable attachment syncing at mailbox level on an Exchange integration. The Streams API then makes the attachments retrievable via an attachment reference. For more details on syncing attachments, check the Attachments and Using Exchange integrations pages.
Integrations experiencing persistent issues are automatically disabled: at the integration level if the integration is incorrectly configured, or at the mailbox level if a specific mailbox is incorrectly configured. You are notified so you can address the issue(s), and once they are resolved, you can re-enable the integration. Mailbox-level disabling happens, for example, when the mailbox can't be found because it was deleted, the service account doesn't have access to the inbox, or the credentials have expired and need to be updated. Common issues include:
- Incorrect credentials
- Expired credentials
- Reaching a quota limit for uploading data
- Misconfiguration of Exchange server details
- Invalid permissions for service account or app authentication
- Misspelled mailbox address
- Deleted mailbox
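The two disable scopes described above can be sketched as a simple error classifier. The error names, and which scope each one falls under, are one plausible illustrative split, not the integration's actual error codes or behavior.

```python
from enum import Enum

class DisableScope(Enum):
    INTEGRATION = "integration"  # the whole integration is paused
    MAILBOX = "mailbox"          # only the affected mailbox is paused

# Illustrative error names and grouping; the real integration reports
# its own error codes and decides the scope itself.
INTEGRATION_LEVEL = {
    "incorrect_credentials", "expired_credentials",
    "quota_exceeded", "server_misconfigured",
}
MAILBOX_LEVEL = {
    "invalid_permissions", "mailbox_misspelled", "mailbox_deleted",
}

def disable_scope(error: str) -> DisableScope:
    """Decide whether a persistent error disables the integration or one mailbox."""
    if error in INTEGRATION_LEVEL:
        return DisableScope.INTEGRATION
    if error in MAILBOX_LEVEL:
        return DisableScope.MAILBOX
    raise ValueError(f"unknown error: {error}")
```

The design point is containment: a problem scoped to a single mailbox (a typo in its address, a deleted mailbox) stops syncing for that mailbox only, while the rest of the integration keeps running.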