Opening — Why this matters now
Enterprises are suddenly discovering that “deep research agents” are not magical interns but probabilistic engines with wildly variable costs. Every additional query to an LLM carries a token bill; every recursive branch in a research workflow multiplies it. As agentic systems spread from labs to boardrooms, a simple question emerges: Can we control what these agents do—rather than hope they behave?
Static-DRA (Static Deep Research Agent), introduced in the paper A Hierarchical Tree-based Approach for Creating Configurable and Static Deep Research Agents, answers this with a refreshing stance: give the user a brake pedal. Not in the metaphorical sense—literally, two knobs labelled Depth and Breadth. With them, companies can dial research intensity up or down based on budget, urgency, or appetite for comprehensiveness.
In a season where agentic systems feel increasingly like runaway processes, this paper offers something businesses desperately need: controllable intelligence.
Background — Context and Prior Art
Traditional RAG pipelines followed a rigid, two-step ritual: retrieve some documents, then pass them to a generator. Elegant, predictable—and deeply incapable of recursive exploration. As research tasks evolved into multi-step, multi-hop problems, the industry responded with dynamic agents: OpenAI’s Deep Research, Gemini’s agentic workflows, Perplexity’s autonomous reasoning, and LangChain’s modular planners.
But dynamic agents come with their own frictions: unpredictability, cost spikes, and opacity. When the system decides how deep to search or how many branches to expand, its creators often lose operational control.
Static-DRA steps backward—deliberately. It replaces dynamic planning with a static, deterministic tree structure. Instead of agents deciding how far to explore, the user chooses a maximum Depth and Breadth. This model is not dumber; it is simply governable. And in enterprise environments, governability often beats cleverness.
Analysis — What the paper actually proposes
At its core, Static-DRA is a hierarchical tree built from three agent types:
- Supervisor Agent — decides whether a task can be meaningfully split, respecting the depth limit.
- Independent Agent — splits topics into sub-topics (up to the Breadth limit) and spawns child supervisors.
- Worker Agent — performs final research via web search + LLM synthesis.
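A minimal sketch of that recursion, with hypothetical `worker()` and `split()` stand-ins in place of the paper's actual LLM and web-search calls (function names and the merge-by-concatenation step are illustrative assumptions, not the paper's API):

```python
def worker(topic: str) -> str:
    # Stand-in for the Worker Agent: web search + LLM synthesis.
    return f"[report on {topic}]"

def split(topic: str, k: int) -> list[str]:
    # Stand-in for the Independent Agent's sub-topic generation.
    return [f"{topic} / sub{i}" for i in range(1, k + 1)]

def research(topic: str, depth: int, breadth: int) -> str:
    # Supervisor Agent: once the depth budget is spent, hand off to a Worker.
    if depth == 0:
        return worker(topic)
    # Independent Agent: spawn up to `breadth` sub-topics, then recurse with
    # one less depth level and half the breadth (the paper's halving schedule).
    subtopics = split(topic, breadth)
    children = [research(s, depth - 1, max(1, breadth // 2)) for s in subtopics]
    return "\n".join(children)
```

Because the recursion is bounded by two user-supplied integers rather than the model's own judgment, the shape of the tree is fully determined before the first LLM call is made.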
This hierarchy is static, predictable, and configurable. The two central parameters define computational behaviour:
- Depth: how many levels the agent is allowed to recursively decompose the initial query.
- Breadth: how many sub-topics each level may spawn.
A clever design choice emerges mid-paper: as the agent goes deeper, Breadth is halved each level. The rationale is intuitive—broad exploration at the top, tighter focus as the query becomes more specific.
A visual simplification of this behaviour:
| Level | Max Subtopics | Comment |
|---|---|---|
| 1 | b | Broad exploration |
| 2 | b/2 | Reduced branching |
| 3 | b/4 | Even narrower |
This simple step reinserts hierarchy into agentic reasoning: high-level questions get breadth, low-level ones get depth.
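The halving schedule in the table can be written as a one-liner (level is 1-indexed; the floor of 1 is an assumption to keep deep levels from collapsing to zero children):

```python
def breadth_at(level: int, b: int) -> int:
    # Max sub-topics at a given level under the halving rule.
    return max(1, b // 2 ** (level - 1))

print([breadth_at(l, 8) for l in (1, 2, 3)])  # b=8 → [8, 4, 2]
```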
Why this matters
This architecture introduces cost-transparent autonomy. Instead of a black box that might recursively research for minutes or hours, users can look at their Depth and Breadth settings and—quite literally—predict the bill.
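A back-of-envelope sketch of that predictability, under the assumption that each tree node triggers roughly one LLM call; the per-call token count and price below are illustrative placeholders, not figures from the paper:

```python
def node_count(depth: int, b: int) -> int:
    # Total nodes in the static tree: root supervisor plus each level's
    # children under the halving-breadth schedule.
    total, leaves = 1, 1
    for level in range(1, depth + 1):
        leaves *= max(1, b // 2 ** (level - 1))
        total += leaves
    return total

def max_cost_usd(depth: int, b: int,
                 tokens_per_call: int = 3_000,       # assumed call size
                 usd_per_1k_tokens: float = 0.01) -> float:  # assumed price
    # Worst-case bill is fixed before the run starts.
    return node_count(depth, b) * tokens_per_call / 1_000 * usd_per_1k_tokens
```

The point is not the specific numbers but that the ceiling is computable from the two knobs alone, before any query is issued.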
Findings — Results with Visualization
Static-DRA was benchmarked on DeepResearch Bench using the RACE evaluation framework. Across the experiments, one pattern stands out: raising Depth and Breadth lifts both report comprehensiveness and overall score, but cost grows much faster than quality at the larger configurations.
Below is a distilled version of the patterns seen in the paper’s figures (not reproduced, but analytically reinterpreted):
Effect of Depth/Breadth on Report Size and Subtopics
| Configuration | Subtopics Generated | Report Size (kB) | Overall Score |
|---|---|---|---|
| d1 b2 | 4–5 | ~4.5 | ~0.22 |
| d2 b3 | 8–9 | ~11.3 | ~0.34 |
| d2 b5 | 13+ | ~74.5 | ~0.41 |
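Reading the trade-off directly off the table's approximate figures makes the diminishing returns explicit:

```python
# Score gained per kB of report, using the article's approximate numbers.
configs = {"d1 b2": (4.5, 0.22), "d2 b3": (11.3, 0.34), "d2 b5": (74.5, 0.41)}
for name, (kb, score) in configs.items():
    print(f"{name}: {score / kb:.4f} score per kB")
```

Each step up in configuration buys a higher absolute score at a lower score-per-kB efficiency, which is exactly the kind of curve a budget owner wants to see before turning the knobs.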
The pattern is unambiguous:
More depth and breadth → more comprehensive research → higher scores → higher cost.
This is precisely the kind of predictable trade-off enterprises need.
Topic-specific performance
Interestingly, Static-DRA performs above-average in domains with structured knowledge (e.g., Health, Religion, History) and below-average in fuzzy or style-driven topics (e.g., Entertainment, Games).
A practical interpretation: Static frameworks thrive when the world is orderly.
Implications — Why businesses should care
Static-DRA isn’t just another agent architecture; it’s a governance pattern.
For enterprises evaluating whether to deploy research agents internally, Static-DRA provides:
- Predictable cost envelopes — every unit of Depth and Breadth is a line item, not a surprise.
- Deterministic behaviour — no dynamic replanning means fewer execution shocks.
- Explainability — the research tree is the explainability artifact.
- Easier compliance review — auditors love deterministic systems.
- Modular extensibility — swapping in newer LLMs is trivial.
The industry’s current obsession with highly adaptive agentic systems overlooks a harder truth:
Not all organisations want agents that think for themselves. Many prefer agents that think within boundaries.
Static-DRA offers that boundary.
Conclusion — Wrapping up
In a landscape racing toward fully dynamic, self-replanning agents, Static-DRA is a useful counterweight. Its tree-based, parameter-controlled structure offers a more governance-friendly and resource-aware approach to deep research. It won’t match the bleeding edge of dynamic agents—but it will give you transparency, predictability, and operational peace of mind.
Sometimes, the smartest agent is the one you can actually manage.
Cognaptus: Automate the Present, Incubate the Future.