Communications Mining user guide
Generating your extractions
The extraction validation process is required to understand the performance of your extractions, which you assess in Validation.
Decide on the extraction that you want to train. For example, Report > Statement of Accounts is a schema you could train.
To automate this process, extract the data points that a downstream system needs as input. A hypothetical sketch of such data points follows below.
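The exact data points depend on your use case. As an illustrative sketch only, the fields below for the Report > Statement of Accounts example are hypothetical assumptions, not a schema defined in this guide:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StatementOfAccountsExtraction:
    """One extraction: a set of fields assigned to one label.
    Field names and types are illustrative assumptions only."""
    account_number: Optional[str] = None
    statement_date: Optional[str] = None    # e.g. "2024-01-31"
    requested_format: Optional[str] = None  # e.g. "PDF"

# Data points like these would be passed to a downstream system,
# such as an RPA workflow that generates the requested statement.
example = StatementOfAccountsExtraction(
    account_number="12345678",
    statement_date="2024-01-31",
    requested_format="PDF",
)
print(example)
```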
Use this training mode as required to boost the number of training examples for each extraction (that is, a set of fields assigned to a label) to at least 25. This allows the model to accurately estimate the performance of the extraction.
To generate your extractions, proceed as follows:
- Navigate to the Explore tab.
- Select Label, and then select the label you want to generate extractions on.
- Select Predict extractions, which generates extractions on a per-page basis in Explore. This means that it applies predictions to all the comments on a given page.
Note: Each time you go to the next page, you need to select Predict extractions again.
In addition, you can generate extractions on an individual comment level by selecting Annotate Fields, and then Predict extractions. For more details, check Predicting extractions.
After you make the extraction predictions, if the model picks up field extractions in a comment, it highlights the relevant spans in the text and displays the extracted values in the side panel. To learn how to validate the predicted values, check Validating and annotating generated extractions.
This section describes what happens when you predict extractions:
- The model uses generative models to map each of the data points that you previously defined in your extraction schema to an intent, that is, a label.
- It extracts the data points and returns them in a structured schema for an SME to review and confirm.
- The structured schema enables more complex automations. It is returned in JSON format through the API for consumption by downstream automations.
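As a hedged illustration of how a downstream automation might consume that JSON, the sketch below assumes a hypothetical payload shape (label, fields, value, span); the actual API response format may differ:

```python
import json

# Hypothetical payload shape for one predicted extraction. The real API
# response format may differ; treat this purely as an illustration.
payload = json.loads("""
{
  "label": "Report > Statement of Accounts",
  "fields": [
    {"name": "account_number", "value": "12345678", "span": [42, 50]},
    {"name": "statement_date", "value": "2024-01-31", "span": [73, 83]}
  ]
}
""")

# Flatten the field list into a simple mapping for a downstream system.
data_points = {field["name"]: field["value"] for field in payload["fields"]}
print(payload["label"], data_points)
```

A downstream automation would map data_points to its own inputs, for example an RPA workflow that retrieves the account and generates the requested statement.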