- Introduction
- Setting up your account
- Balance
- Clusters
- Concept drift
- Coverage
- Datasets
- General fields
- Labels (predictions, confidence levels, label hierarchy, and label sentiment)
- Models
- Streams
- Model Rating
- Projects
- Precision
- Recall
- Annotated and unannotated messages
- Extraction Fields
- Sources
- Taxonomies
- Training
- True and false positive and negative predictions
- Validation
- Messages
- Access Control and Administration
- Manage sources and datasets
- Understanding the data structure and permissions
- Creating or deleting a data source in the GUI
- Uploading a CSV file into a source
- Preparing data for .CSV upload
- Creating a dataset
- Multilingual sources and datasets
- Enabling sentiment on a dataset
- Amending dataset settings
- Deleting a message
- Deleting a dataset
- Exporting a dataset
- Using Exchange integrations
- Model training and maintenance
- Understanding labels, general fields, and metadata
- Label hierarchy and best practices
- Comparing analytics and automation use cases
- Turning your objectives into labels
- Overview of the model training process
- Generative Annotation
- Dataset status
- Model training and annotating best practice
- Training with label sentiment analysis enabled
- Training chat and calls data
- Understanding data requirements
- Train
- Introduction to Refine
- Precision and recall explained
- Precision and Recall
- How validation works
- Understanding and improving model performance
- Reasons for label low average precision
- Training using Check label and Missed label
- Training using Teach label (Refine)
- Training using Search (Refine)
- Understanding and increasing coverage
- Improving Balance and using Rebalance
- When to stop training your model
- Using general fields
- Generative extraction
- Using analytics and monitoring
- Automations and Communications Mining™
- Developer
- Exchange Integration with Azure service user
- Exchange Integration with Azure Application Authentication
- Exchange Integration with Azure Application Authentication and Graph
- Fetching data for Tableau with Python
- Elasticsearch integration
- Self-hosted Exchange integration
- UiPath® Automation Framework
- UiPath® Marketplace activities
- UiPath® official activities
- How machines learn to understand words: a guide to embeddings in NLP
- Prompt-based learning with Transformers
- Efficient Transformers II: knowledge distillation & fine-tuning
- Efficient Transformers I: attention mechanisms
- Deep hierarchical unsupervised intent modelling: getting value without training data
- Fixing annotating bias with Communications Mining™
- Active learning: better ML models in less time
- It's all in the numbers - assessing model performance with metrics
- Why model validation is important
- Comparing Communications Mining™ and Google AutoML for conversational data intelligence
- Licensing
- FAQs and more

Communications Mining user guide
Overview
Generative Extraction (GenEx) is an innovative new feature for UiPath® Communications Mining™ that leverages Generative AI to understand the complex relationships between multiple requests and the data points required to process them.
An email can contain multiple requests, each requiring multiple fields to be extracted to enable automation. Automating this end-to-end requires more than just correctly extracting each field; it also requires an understanding of how these elements relate to one another. GenEx significantly advances the scope of what’s possible for communications-based automation.
Generative Extraction leverages the latest NLP capabilities, while providing the guardrails that enterprises require to implement complex communication-based automations for business processes.
More complex processes and communications containing multiple different requests can now also be prime candidates for automation.
For some use cases, extractions can be generated with no training, and can then be fine-tuned with little training data.
- Recognizes relationships - Generative Extraction identifies the relationships between different concepts and data points in communications. For example, for a policy address change request, it identifies the policy number of the policy that needs to be updated, the old address, and the new address to update it to.
- Uses generative models - Uses state-of-the-art generative large language models (LLMs) to predict specific intents and map each of the data points relating to them, extracting them in a structured schema for automation — all with minimal training.
- Automates multiple requests - Enables the automation of multiple requests within a single communication, whether it's the same request repeated for different data points, or several different request types, each with its own schema of data points required for automated processing, as illustrated in the sketch that follows.
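To make the idea of a structured, relational extraction concrete, the following minimal Python sketch shows what a multi-request result could look like. The shape, label, and field names are illustrative assumptions for this sketch, not the platform's actual output schema.

```python
# Illustrative only: a hypothetical structured result for one email that
# contains two requests of the same type. The shape and names below are
# assumptions for illustration, not the platform's actual response schema.
extractions = [
    {
        "label": "Policy > Address Change",      # the predicted request type
        "fields": {
            "policy_number": "POL-118392",       # which policy to update
            "old_address": "14 Elm Street, Leeds",
            "new_address": "9 Harbour Way, York",
        },
    },
    {
        "label": "Policy > Address Change",      # same request type, repeated
        "fields": {
            "policy_number": "POL-204577",
            "old_address": "3 King's Road, Bath",
            "new_address": "22 Mill Lane, Bath",
        },
    },
]

# Each entry groups a request (label) with the fields needed to process it,
# so downstream automation can act on every request independently.
for extraction in extractions:
    fields = extraction["fields"]
    print(f"{extraction['label']}: update {fields['policy_number']} "
          f"from {fields['old_address']} to {fields['new_address']}")
```

The key point is the grouping: each request carries its own set of fields, so two address changes in the same email can be processed independently.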
The following steps describe the end-to-end process of generating and validating extractions. Each step is covered in more detail in the subsequent sections.
- Define your extraction schema:
- Identify the processes (that is, the labels) you want to automate, and the data points (that is, the fields) that need to be captured to enable automation.
- Create the corresponding extraction schema.
- Generate extractions:
- Use the generative capabilities of the platform to create your initial extractions.
- Generating extractions significantly speeds up the process of finding and relating data points. For some use cases, the platform requires no training examples to generate extractions.
- Validate and correct extractions:
- Review the platform's extractions, and accept them if they are accurate or correct them if they are not.
- The platform is flexible: you can add new extraction schemas at any point in the training process.
- Review the validation for extractions:
- Verify how well your extractions are performing in the Validation tab.
- Determine if your extractions are at a performance level suitable for your use case.
When setting up your extraction schema, you need to decide which processes, that is, which labels, you want to automate.
For the platform to understand the relationship between a process and the data points that need to be extracted, it prompts you to provide the appropriate data points for each label. The configuring fields section covers best practices and how this works in more detail.
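As a rough illustration, you can think of an extraction schema as a mapping from each label to the fields it needs. The structure, type names, and `required` flags below are assumptions made for this sketch, not the platform's configuration format.

```python
# Hypothetical extraction schema: each label (process) lists the extraction
# fields that must be captured to automate it. Names and types are
# illustrative assumptions, not platform configuration syntax.
extraction_schema = {
    "Policy > Address Change": [
        {"name": "policy_number", "type": "string", "required": True},
        {"name": "old_address", "type": "address", "required": False},
        {"name": "new_address", "type": "address", "required": True},
    ],
    "Policy > Cancellation": [
        {"name": "policy_number", "type": "string", "required": True},
        {"name": "cancellation_date", "type": "date", "required": True},
    ],
}

# Listing fields per label is what lets the model relate each data point to
# the request it belongs to, rather than extracting fields in isolation.
for label, fields in extraction_schema.items():
    required = [f["name"] for f in fields if f["required"]]
    print(f"{label}: requires {', '.join(required)}")
```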
- If you have previously worked with entities in Communications Mining™, as of 2024.4, all your existing entities automatically transition to general fields.
- All the existing settings on your entities migrate over to the respective settings for the corresponding general field.
- If you have any general fields that you want to switch over to extraction fields, you need to re-create those fields as extraction fields, and provide the appropriate number of training examples (if applicable).
- Field types have the same names and configurations as the previously set up entities. These are mapped over using the API name.
- If you have any existing automations that use the previous entities, these automations will not be impacted.
- Automating processes using Generative Extraction is slightly different from how processes were previously automated using entities. Check the Automating with GenEx section of this guide for more details.