For centuries, reading has meant moving through a document linearly, page by page and line by line. But what if reading could mean navigating a tree?

TreeReader, a new system from researchers at the University of Toronto and the Vector Institute, challenges the linearity of academic literature. It proposes a reimagined interface: one where large language models (LLMs) summarize each section and paragraph into collapsible nodes in a hierarchical tree, letting readers skim, zoom, and verify with surgical precision. The result is more than a UX tweak: it's a new cognitive model for how scholars might interact with complex documents in the era of AI.

The Problem with PDFs

Despite their dominance, PDFs are a legacy format designed for printing, not understanding. Academic papers are hierarchical by nature—section > subsection > paragraph > figure—but PDFs flatten this into a long scroll. The result? Cognitive overload, especially in literature reviews or when jumping between topics.

Readers often:

  • Miss key ideas buried deep in method sections,
  • Waste time parsing familiar background info,
  • Struggle to verify LLM-generated summaries that lack clear sourcing.

LLM chatbots (like ChatGPT or Elicit) have helped with summarization, but they lack structure-awareness. You can’t easily tell whether a summary comes from the Introduction or the Results, or drill into a specific paragraph’s evidence.

TreeReader’s Core Idea: Hierarchical Summarization + Navigable UI

TreeReader restructures academic papers into an interactive tree, where every node represents a section, paragraph, table, or figure. Each node shows a concise GPT-4o-generated summary, with optional access to the full text.
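To make the structure concrete, here is a minimal sketch of the kind of node such an interface might maintain. The field names (`kind`, `summary`, `full_text`, `source_ref`) are illustrative assumptions, not TreeReader's actual data model:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TreeNode:
    """One collapsible node: a section, paragraph, table, or figure."""
    kind: str                         # "section" | "paragraph" | "table" | "figure"
    summary: str                      # concise LLM-generated summary, shown by default
    full_text: Optional[str] = None   # original text, revealed on demand
    source_ref: Optional[str] = None  # pointer back to the source span (shown on hover)
    children: List["TreeNode"] = field(default_factory=list)
    expanded: bool = False            # collapsed by default; the reader expands selectively

    def toggle(self) -> None:
        """Expand or collapse this node in the navigation tree."""
        self.expanded = not self.expanded
```

The key design choice this captures is that the summary, not the full text, is the default unit of reading, with the original prose and its source reference always one interaction away.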

Interface Design:

| Panel | Function |
| --- | --- |
| Left | Navigation tree with expandable nodes |
| Middle | Scrollable column of summaries (or full text on demand) |
| Right | Context: figures, source references, or subsection previews |

The summarization is recursive:

  • Paragraphs are summarized individually.
  • Section summaries are generated based on child paragraph summaries.
  • All summaries include source references, shown on hover.
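The bottom-up recursion described above can be sketched in a few lines. Here `summarize` stands in for the LLM call (GPT-4o in the paper), and the dict shape is an assumption for illustration, not TreeReader's implementation:

```python
def summarize_tree(node, summarize):
    """Recursively summarize a paper tree, bottom-up.

    `node` is a dict: {"text": str, "children": [node, ...]}.
    `summarize` is any callable that condenses text; in TreeReader this
    would be an LLM call -- here it is left abstract.
    """
    if node.get("children"):
        # Internal node: summarize children first, then summarize
        # the section from its children's summaries.
        for child in node["children"]:
            summarize_tree(child, summarize)
        source = "\n".join(c["summary"] for c in node["children"])
    else:
        # Leaf (paragraph): summarize the raw text directly.
        source = node["text"]
    node["summary"] = summarize(source)
    return node
```

Because each section summary is built only from its children's summaries, every claim in a parent node is traceable downward to a specific paragraph, which is what makes hover-to-source verification possible.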

This addresses three key user frustrations identified through formative interviews:

  1. Too much low-priority information in long papers.
  2. Unreliable or overly verbose LLM summaries.
  3. Lack of transparency in AI-generated content.

Does It Actually Work?

In a controlled study, 5 graduate-level CS researchers used TreeReader and a traditional PDF reader to read two scientific review papers. Each participant skimmed for 5 minutes, then deep-read for 25 minutes.

Results (Figure summaries omitted for brevity):

  • Skimming: TreeReader outperformed PDFs in all metrics: grasping structure, identifying key ideas, and understanding the paper’s goals.
  • Deep Reading: Mixed results. Some users found TreeReader helped navigate complex arguments; others struggled to locate fine-grained details quickly.
  • Cognitive Load: TreeReader reduced reported mental effort, frustration, and perceived difficulty.

One participant said:

> “I would definitely use TreeReader every day for my initial literature review.”

However, limitations remain:

  • No Ctrl+F search.
  • Lack of real-world distractions (like multitasking tabs).
  • Small user sample (N=5).

Still, the signal is clear: when it comes to information targeting and structural awareness, TreeReader’s approach is compelling.

Implications: From Reading to Sensemaking

TreeReader doesn’t just summarize papers; it changes how we make sense of them. By encouraging exploration through selective expansion and providing traceable key points, it turns passive reading into active interrogation.

This aligns with broader HCI trends:

  • Tools like ScholarMate, Sensecape, and IdeaSynth are shifting AI interfaces from automation to scaffolded reasoning.
  • TreeReader fits this ethos, treating the LLM not as an answer machine but as a structural explainer and cognitive assistant.

Where TreeReader Fits in the AI Productivity Stack

TreeReader’s design suggests a future where:

  • Literature review tools start with tree-based distillation, not abstracts.
  • Peer reviewers can triage submissions more efficiently.
  • Researchers can trace LLM summaries back to source evidence, improving trust.

It’s not hard to imagine TreeReader-like interfaces becoming the frontend for semantic search engines, automated reviewers, or multi-document synthesis agents.

Final Thought

TreeReader doesn’t eliminate the work of reading. It just makes the hierarchy visible—and that, cognitively speaking, changes everything.


Cognaptus: Automate the Present, Incubate the Future.