Best practices for DeepRAG and Batch Transform: JIT vs. index-based strategies
Choosing your data access approach: JIT vs. index-based strategies
When configuring how your agent accesses and reasons over data, choose the approach that best matches your data shape, lifecycle, and performance requirements.
There are two primary patterns:
- Just-in-time (JIT) tools that process files passed at runtime.
- Index-based Context Grounding that queries a pre-built, persistent index.
DeepRAG (JIT) and DeepRAG (index search strategy) use the same underlying synthesis capability. The difference is how documents reach the engine:
- JIT: Files are provided at runtime.
- Index-based: Documents are pre-ingested into a persistent index.
Batch Transform is different. It performs structured, row-level transformations on tabular data. It does not perform document retrieval or long-form synthesis.
Quick reference
| Capability | DeepRAG (JIT tool) | Batch Transform (JIT tool) | Semantic Search (index) | DeepRAG (index search strategy) |
|---|---|---|---|---|
| Setup required | None | None | Index + ingestion | Index + ingestion |
| Input format | PDF or TXT | CSV | Pre-indexed documents | Pre-indexed documents (PDF/TXT) |
| Data changes per run | Yes (ideal) | Yes (ideal) | No (stable corpus) | No (stable corpus) |
| Output type | Comprehensive synthesis with citations¹ | Enriched CSV file | Relevant snippets | Comprehensive synthesis with citations |
| Speed | Minutes (30+ for ~1,000 pages) | Minutes | Instant | Minutes (30+ for ~1,000 pages) |
| Best for | Per-run document analysis | Per-run structured data processing | Fast fact lookup | Deep research across large corpora |
¹ TXT inputs do not currently support citations.
JIT tools: the recommended default
For most agent implementations, JIT tools are the right starting point.
They require no index setup, no ingestion workflow, and no storage configuration. Files are passed directly to the agent when it runs, and the tool handles processing automatically. This makes JIT especially well suited for use cases where the document set changes with each invocation.
Two JIT tools are available: DeepRAG (JIT) and Batch Transform.
DeepRAG (JIT tool)
Use DeepRAG (JIT) when your agent needs to read and reason over PDF or TXT files provided at runtime and produce a synthesized, grounded response.
The DeepRAG algorithm performs a structured, multi-step research process. It plans sub-tasks, iterates across the provided documents, and synthesizes findings into a comprehensive output. For PDF inputs, responses include citations. (TXT inputs do not currently support citations.)
This approach works best when each run processes a different document set, such as customer files, contracts, case records, or reports that vary per invocation. It prioritizes completeness and traceability over speed and typically completes in minutes, depending on document size.
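The plan-iterate-synthesize process described above can be sketched in plain Python. This is a conceptual illustration only, not the UiPath API: `plan`, `search`, and `synthesize` are hypothetical callables standing in for the engine's internal steps.

```python
def deep_research(documents, question, plan, search, synthesize):
    """Conceptual sketch of a DeepRAG-style research loop.

    plan(question)      -> list of sub-tasks derived from the question
    search(doc, task)   -> evidence found for one sub-task in one document
    synthesize(q, ev)   -> final answer assembled from all collected evidence
    """
    findings = []
    for sub_task in plan(question):        # 1. plan sub-tasks
        for doc in documents:              # 2. iterate across the provided documents
            findings.extend(search(doc, sub_task))  # collect grounded evidence
    return synthesize(question, findings)  # 3. synthesize a comprehensive output
```

The key property this sketch captures is that every document is visited for every sub-task, which is why DeepRAG prioritizes completeness over speed.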

Batch Transform (JIT tool)
Batch Transform is intended for structured data processing. It operates on CSV files provided at runtime and applies consistent logic to each row.
Rather than synthesizing documents, Batch Transform enriches data. It processes records independently and generates an updated CSV file that can include new columns, classifications, scores, or other derived values.
This makes it appropriate for labeling, scoring, enrichment, and rule-based transformations across structured datasets.
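The row-level enrichment pattern can be illustrated with Python's standard `csv` module. The `rule` lambda below is a hypothetical stand-in for the model's per-row judgment; the point is the shape of the operation, where each record is processed independently and a derived column is appended.

```python
import csv
import io

def enrich_rows(csv_text: str, classify) -> str:
    """Apply a per-row classification and emit an enriched CSV.

    Each row is processed independently (the Batch Transform pattern),
    and a new derived column "category" is added to the output.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    fieldnames = list(reader.fieldnames) + ["category"]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=fieldnames)
    writer.writeheader()
    for row in reader:
        row["category"] = classify(row)  # independent, row-level logic
        writer.writerow(row)
    return out.getvalue()

# Hypothetical rule standing in for the model's per-row judgment:
rule = lambda row: "high" if float(row["amount"]) > 1000 else "low"
src = "id,amount\n1,1500\n2,300\n"
print(enrich_rows(src, rule))
```

Because rows are independent, this kind of transformation parallelizes well and scales linearly with row count.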
Index-based Context Grounding
Index-based grounding requires documents to be ingested into a Context Grounding index before the agent runs. Although this setup introduces additional configuration, it becomes valuable when working with a large, stable corpus that is reused across many agent executions.
Typical examples include policy libraries, knowledge bases, regulatory repositories, or long-lived documentation collections shared by multiple users or processes.
When configuring an index-backed agent, you select a search strategy.

Semantic Search
Semantic Search is the default index strategy. It performs fast, lightweight retrieval and returns the most relevant document chunks.
This strategy is appropriate when the agent needs to look up specific facts or extract targeted information quickly. It works well for question-answering patterns and repeated queries against a shared repository where speed is important.
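The retrieval contract can be shown with a toy stand-in. A real Context Grounding index scores chunks by vector-embedding similarity rather than the naive term overlap used here, but the interface is the same: a query goes in, the most relevant pre-chunked snippets come out.

```python
def semantic_lookup(chunks, query_terms, top_k=3):
    """Toy stand-in for index retrieval: score pre-chunked text by term
    overlap and return the best-matching chunks. Real semantic search
    uses embeddings, but the contract (query in, snippets out) is the same.
    """
    scored = [(sum(t in c.lower() for t in query_terms), c) for c in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for score, c in scored[:top_k] if score > 0]
```

Note that retrieval returns snippets, not a synthesized answer; combining snippets into a long-form, citation-backed response is what the DeepRAG strategy adds on top.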
DeepRAG (index search strategy)
DeepRAG can also operate over an index. In this mode, it analyzes and connects information across many documents and produces a comprehensive, citation-backed response.
Compared to Semantic Search, this approach is slower and consumes more AI Units, but it supports deeper research tasks. It is well suited for scenarios that require thorough analysis across hundreds of pages, such as contract review, regulatory assessment, or medical record synthesis within a stable corpus.
How to decide
Use the following guidance to select the correct approach:
- Use DeepRAG (JIT) when documents change with each run and you need synthesis with citations from runtime files.
- Use Batch Transform (JIT) when processing structured CSV data row by row.
- Use Semantic Search (index) when querying a shared, stable corpus for fast, targeted retrieval.
- Use DeepRAG (index strategy) when performing deep research over a large indexed corpus and comprehensive, citation-backed answers are required.
When in doubt, start with JIT tools. Move to index-based strategies only when you need shared, persistent document grounding across multiple agent executions.
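The guidance above can be condensed into a small decision helper. The inputs are hypothetical labels for this sketch, not product parameters: `data_kind` distinguishes tabular from document data, `per_run` means the data changes with each invocation, and `need_synthesis` means a long-form, citation-backed answer is required rather than a fast fact lookup.

```python
def choose_approach(data_kind: str, per_run: bool, need_synthesis: bool) -> str:
    """Encode the decision guidance as a simple rule chain."""
    if data_kind == "tabular":
        return "Batch Transform (JIT)"       # row-level CSV processing
    if per_run:
        return "DeepRAG (JIT)"               # documents change every run
    if need_synthesis:
        return "DeepRAG (index strategy)"    # deep research over a stable corpus
    return "Semantic Search (index)"         # fast lookup against a stable corpus
```

Like the prose guidance, the helper defaults toward JIT: only a stable, reused corpus routes to an index-based strategy.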