Preparing data for .CSV upload
- To upload CSV files into a source, you must have the Source - Manage and Comment - Manage permissions as an Automation Cloud™ user, or the Sources admin and Edit messages permissions as a legacy user.
- For details on how to upload data from a .csv file, along with common error messages, check Uploading a CSV file into a source.
Before uploading data into Communications Mining™, there are a few factors to take into consideration when preparing the data for the platform to ingest.
Opening the .csv file in Excel and making changes can introduce formatting issues that affect the upload process. To avoid this, make any updates directly within the .csv file.
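If you need to make bulk changes, scripting the edit avoids the automatic reformatting that spreadsheet tools can apply to dates, leading zeros, and character encodings. The following is a minimal sketch using Python's built-in csv module; the file names and the body column are assumptions for illustration, not part of the platform:

```python
import csv

SRC = "messages.csv"          # hypothetical input file
DST = "messages_edited.csv"   # hypothetical output file

with open(SRC, newline="", encoding="utf-8") as infile, \
     open(DST, "w", newline="", encoding="utf-8") as outfile:
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # Example edit: strip stray whitespace from the message body.
        # Every value stays a plain string, so dates and IDs are never
        # reinterpreted the way a spreadsheet might reformat them.
        row["body"] = row["body"].strip()
        writer.writerow(row)
```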
Additionally, check for the items listed in the following table before uploading your .csv file into the platform. This helps you avoid errors during upload, as well as data quality issues that would negatively impact model performance.
| Item | Description |
|------|-------------|
| Duplicate rows | The same data repeated multiple times across the data extract. |
| Mismatched headers | Headers aligned to the wrong data fields. |
| Hanging rows or columns | Data not contained in sequential rows. For example, having all messages in rows 1 to 10,000, but also a stray cell containing data in row 19,999. |
| Inconsistent date formatting | Rows with inconsistent date formats. For example, having some messages in US date format and others in EU date format in the same dataset, which causes issues when normalizing the dates downstream. |
| Incoherent sentences | Sentences that contain an assortment of words without a clear syntactic or semantic structure. For example: `report the please by send tomorrow`. |
| Inconsistent spacing | An irregular number of spaces between words. For example: `Please  find   the  attached invoice`. |
| Breaks in words | Breaks in the middle of a word. For example: `inv oice` instead of `invoice`. |
| Erroneous character encoding | Text data that is not properly encoded, resulting in garbled or unreadable characters. For example: `Ã©` appearing instead of `é`. |
| Blank messages | Communications without any content included in the subject or body. |
| Messages with lots of typos | Text data containing many spelling errors. |
| Headers / footers | Headers or footers included in the message text. For example, spam warnings, virus scan warnings, and so on. |
| Metadata included in the subject/body instead of as a metadata property | Metadata included in the subject or body text. For example, a case reference such as `Case ID: 12345` placed in the subject line instead of being supplied as a metadata property. |
| Multiple messages combined into one message | Multiple messages that should have been split into separate thread messages combined into a single communication. |
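Several of these checks can be scripted before upload. The following Python sketch is an illustration rather than an official tool: it flags duplicate rows, blank messages, irregular spacing, and mixed date styles. The file name messages.csv and the column names subject, body, and timestamp are assumptions; adapt them to match your own extract.

```python
import csv
import re
from collections import Counter

CSV_PATH = "messages.csv"  # hypothetical file name

SLASH_DATE = re.compile(r"^\d{1,2}/\d{1,2}/\d{4}")  # e.g. 12/31/2024 or 31/12/2024
ISO_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}")        # e.g. 2024-12-31

seen = Counter()
date_styles = Counter()
problems = []

with open(CSV_PATH, newline="", encoding="utf-8") as f:
    for i, row in enumerate(csv.DictReader(f), start=2):  # row 1 is the header
        subject = (row.get("subject") or "").strip()
        body = (row.get("body") or "").strip()
        timestamp = (row.get("timestamp") or "").strip()

        # Blank messages: no content in the subject or body.
        if not subject and not body:
            problems.append(f"row {i}: blank message")

        # Duplicate rows: identical subject + body seen earlier in the file.
        seen[(subject, body)] += 1
        if seen[(subject, body)] > 1:
            problems.append(f"row {i}: duplicate of an earlier row")

        # Inconsistent spacing: runs of two or more spaces between words.
        if re.search(r"\S {2,}\S", body):
            problems.append(f"row {i}: irregular spacing in body")

        # Tally broad date styles so mixed formats stand out. US and EU
        # slash-delimited dates cannot be told apart without context, so
        # this only detects that more than one style is present.
        if SLASH_DATE.match(timestamp):
            date_styles["slash-delimited"] += 1
        elif ISO_DATE.match(timestamp):
            date_styles["ISO 8601"] += 1
        elif timestamp:
            date_styles["other"] += 1

if len(date_styles) > 1:
    problems.append(f"mixed date formats across rows: {dict(date_styles)}")

print("\n".join(problems) or "no issues found")
```

Running a pass like this before upload is cheaper than diagnosing the same problems after the data has already been ingested into a dataset.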