
Branching Out of the Middle: How a ‘Tree of Agents’ Fixes Long-Context Blind Spots

TL;DR: Tree of Agents (TOA) splits very long documents into chunks, lets multiple agents read them in different orders, shares evidence, prunes dead-ends, caches partial states, and then votes. The result: fewer hallucinations, resilience to the “lost in the middle” effect, and accuracy comparable to premium large models while using a compact backbone.

Why this matters for operators

If your business parses contracts, annual reports, medical SOPs, or call-center transcripts, you’ve likely felt the pain of long-context LLMs: critical details buried mid-document get ignored; retrieval misses cross-paragraph logic; and bigger context windows inflate cost without guaranteeing better reasoning. TOA is a pragmatic middle path: it re-imposes structure on attention, not by scaling a single monolith, but by coordinating multiple lightweight readers with disciplined information exchange. ...
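To make the coordination pattern concrete, here is a minimal Python sketch of the loop described above. It is an illustration under assumptions, not the paper’s implementation: `ask_model` stands in for a call to the compact backbone, and helpers such as `chunk_text` and the prompt wording are invented for the example.

```python
# Minimal sketch of the TOA reading loop: chunk, extract-and-cache, read in
# different orders, prune empty branches, and vote. `ask_model` and every
# helper name here are illustrative assumptions, not the paper's API.
from collections import Counter
from typing import Callable, Dict, List

def chunk_text(text: str, size: int = 1200) -> List[str]:
    """Split a long document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def tree_of_agents(document: str, question: str,
                   ask_model: Callable[[str], str],
                   n_agents: int = 3) -> str:
    chunks = chunk_text(document)
    # Per-chunk extractions are cached once and reused by all agents,
    # playing the role of the shared evidence pool / cached partial states.
    cache: Dict[int, str] = {}
    answers: List[str] = []

    for agent_id in range(n_agents):
        # Each agent walks the chunks in a different order, so no single
        # position (e.g. the middle) is systematically read last.
        order = list(range(len(chunks)))
        if agent_id % 2:
            order.reverse()

        notes = ""  # order-dependent running summary for this agent
        for idx in order:
            if idx not in cache:
                cache[idx] = ask_model(
                    f"Question: {question}\nPassage: {chunks[idx]}\n"
                    "List facts relevant to the question, or reply NONE."
                )
            if cache[idx].strip() == "NONE":
                continue  # prune dead-end chunks
            notes = ask_model(
                f"Question: {question}\nNotes so far: {notes}\n"
                f"New evidence: {cache[idx]}\nUpdate the notes."
            )
        answers.append(ask_model(
            f"Question: {question}\nNotes: {notes}\nAnswer concisely."
        ))

    # Final answer by simple majority vote across agents.
    return Counter(answers).most_common(1)[0][0]
```

To smoke-test the control flow, any `str -> str` callable works as `ask_model`; in practice it would wrap whichever compact model you deploy.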

September 12, 2025 · 4 min · Zelina

Memory Over Matter: How MemAgent Redefines Long-Context Reasoning with Reinforcement Learning

Handling long documents has always been a source of frustration for large language models (LLMs). From brittle extrapolation hacks to obscure compression tricks, the field has often settled for awkward compromises. But the paper MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent boldly reframes the problem: what if LLMs could read like humans—absorbing information chunk by chunk, jotting down useful notes, and focusing on what really matters? At the heart of MemAgent is a surprisingly elegant idea: treat memory not as an architectural afterthought but as an agent policy to be trained. Instead of trying to scale attention across millions of tokens, MemAgent introduces a reinforcement-learning-shaped overwriteable memory that allows an LLM to iteratively read arbitrarily long documents in segments. It learns—through reward signals—what to keep and what to discard. ...
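As a rough illustration of that reading loop, a sketch might look like the following. The `ask_model` callable, the prompt wording, and the character-based segmenting are assumptions made for the example; the reinforcement-learning reward that actually shapes the overwrite policy is not shown.

```python
# Sketch of segment-by-segment reading with a fixed-size, overwriteable memory.
# `ask_model`, the prompts, and the size limits are illustrative assumptions;
# the RL reward that teaches the model *what* to keep is out of scope here.
from typing import Callable, List

def read_with_memory(document: str, question: str,
                     ask_model: Callable[[str], str],
                     segment_chars: int = 2000,
                     memory_chars: int = 600) -> str:
    segments: List[str] = [document[i:i + segment_chars]
                           for i in range(0, len(document), segment_chars)]
    memory = ""  # bounded notepad, so context never grows with the document

    for seg in segments:
        # The model rewrites the entire memory at every step, deciding what
        # to keep and what to discard rather than appending indefinitely.
        memory = ask_model(
            f"Question: {question}\n"
            f"Current memory: {memory}\n"
            f"New segment: {seg}\n"
            f"Rewrite the memory in at most {memory_chars} characters, "
            "keeping only what helps answer the question."
        )[:memory_chars]

    return ask_model(f"Question: {question}\nFinal memory: {memory}\nAnswer:")
```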

July 4, 2025 · 4 min · Zelina

Remember Like an Elephant: Unlocking AI's Hippocampus for Long Conversations

Elephants famously “never forget”, or at least that’s how the saying goes. Conversational AI, by contrast, still struggles to manage very long conversations efficiently. Even with context windows extended to 2 million tokens, current models have trouble understanding and recalling long-term context. Enter a new AI memory architecture inspired by the human hippocampus: one that promises to transform conversational agents from forgetful assistants into attentive conversationalists capable of months-long discussions without missing a beat. ...

April 25, 2025 · 4 min