Overview
User permissions required for reports: 'View Sources' AND 'View Labels'.
The Reports page provides in-platform reporting on your dataset. All reports are filterable, so users can focus on the views that are most important to them.
You can navigate to the Reports page via the top navigation bar.
Depending on the type of data, the platform shows up to 6 tabs in Reports. If the data is in thread form (e.g. call transcripts and email chains), users can toggle between message-level and thread-level reports. If not, the message view is the default.
- Dashboard - allows you to create custom dashboard views using the data from the other tabs.
- Label Summary - presents high-level summary statistics for labels.
- Trends - presents charts of message volume, label volume, and sentiment over a given time period.
- Segments - presents charts of label volumes versus message metadata fields, e.g. sender domain.
- Threads - presents charts of thread volumes and label volumes within a thread (only accessible when the 'Thread' filter is applied).
- Comparison - allows you to compare different cohorts of data against each other.
At the top of each tab on the Reports page, you will see the total number of messages in the dataset, the net sentiment (if sentiment analysis is enabled), and the date period for the selected data.
If you apply user property, general field, or label filters, these stats update to reflect your selections.
If you filter by multiple labels with no other filters applied, the total shows how many messages in the dataset are predicted to have at least one of the selected labels.
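To make the multi-label filter behaviour concrete, the sketch below shows how an "at least one of the selected labels" count could be computed over message predictions. It is an illustrative example only, assuming a simple message structure: the field names (`labels`, `name`, `probability`) and the 0.5 confidence threshold are assumptions, not the platform's actual schema or logic.

```python
# Illustrative sketch only: counts messages predicted to have at least one of
# the selected labels (OR semantics, as used by the Reports label filter).
# Field names and the confidence threshold are assumptions for this example.

def count_messages_with_any_label(messages, selected_labels, threshold=0.5):
    """Return how many messages have at least one selected label predicted
    above the confidence threshold."""
    selected = set(selected_labels)
    count = 0
    for message in messages:
        predictions = message.get("labels", [])
        if any(
            p["name"] in selected and p["probability"] >= threshold
            for p in predictions
        ):
            count += 1
    return count


# Example usage with hypothetical data
messages = [
    {"labels": [{"name": "Complaint", "probability": 0.91}]},
    {"labels": [{"name": "Request > Statement", "probability": 0.34}]},
    {"labels": [{"name": "Chaser", "probability": 0.78},
                {"name": "Complaint", "probability": 0.12}]},
]
print(count_messages_with_any_label(messages, ["Complaint", "Chaser"]))  # 2
```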