
Backtrack to Breakthrough: Why Great AI Agents Revisit

TL;DR Agentic performance isn’t just about doing more; it’s about going back. In GSM-Agent—a controllable, tool-using version of GSM8K—top models only reach ~65–68% accuracy, and the strongest predictor of success is a high revisit ratio: deliberately returning to a previously explored topic with a refined query. That’s actionable for enterprise AI: design agents that can (1) recognize incomplete evidence, (2) reopen earlier lines of inquiry, and (3) instrument and reward revisits. ...
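
A minimal sketch of how a revisit ratio could be instrumented on an agent's tool-call trace; the trace format and the `topic_of` grouping are our assumptions for illustration, not the benchmark's exact definition:

```python
def revisit_ratio(queries, topic_of):
    """Fraction of tool calls that return to a topic explored earlier
    in the trajectory, but with a new (refined) query.

    queries:  list of query strings in the order the agent issued them.
    topic_of: callable mapping a query to a topic key -- a stand-in for
              whatever topic grouping the benchmark actually uses.
    """
    seen_topics, seen_queries, revisits = set(), set(), 0
    for q in queries:
        t = topic_of(q)
        if t in seen_topics and q not in seen_queries:
            revisits += 1  # back to an old topic with a refined query
        seen_topics.add(t)
        seen_queries.add(q)
    return revisits / max(len(queries), 1)

# Toy trace: search payroll, switch topic, then revisit payroll with a refinement.
trace = ["payroll march total", "headcount march", "payroll march total minus bonuses"]
print(revisit_ratio(trace, topic_of=lambda q: q.split()[0]))  # -> 0.333...
```

Instrumenting something like this per trajectory is what makes "reward revisits" an actionable design lever rather than a slogan.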

October 3, 2025 · 4 min · Zelina

Memory That Fights Back: How SEDM Turns Agent Logs into Verified Knowledge

TL;DR Most “agent memory” is a junk drawer: it grows fast, gets noisy, and slows everything down. SEDM (Self‑Evolving Distributed Memory) proposes an auditable, efficiency‑first overhaul. It verifies each candidate memory by replaying the exact run in a Self‑Contained Execution Context (SCEC), assigns an initial utility‑aligned weight, and then self‑schedules what to retrieve next. The result: higher task accuracy with fewer tokens versus strong memory baselines on FEVER and HotpotQA. ...
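
A hedged sketch of the admission step only; `replay_in_scec` and the dict-shaped memory entry are our stand-ins, and SEDM's actual verification, weighting, and scheduling are richer than this:

```python
def admit_candidate_memory(candidate, replay_in_scec, baseline_score, weight_scale=1.0):
    """candidate:       dict-like memory entry distilled from an agent run.
    replay_in_scec:  assumed callable that re-runs the originating task inside a
                     Self-Contained Execution Context with the candidate injected,
                     returning the task score on replay.
    baseline_score:  score of the same task without the candidate memory.
    """
    replay_score = replay_in_scec(candidate)
    utility = replay_score - baseline_score          # did the memory actually help?
    if utility <= 0:
        return None                                  # reject: unverified or harmful
    candidate["weight"] = weight_scale * utility     # utility-aligned initial weight
    return candidate
```

The point of the shape: nothing enters memory on vibes; every entry carries a weight tied to measured usefulness, which is what later retrieval scheduling can exploit.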

September 17, 2025 · 5 min · Zelina

Branching Out of the Middle: How a ‘Tree of Agents’ Fixes Long-Context Blind Spots

TL;DR — Tree of Agents (TOA) splits very long documents into chunks, lets multiple agents read in different orders, shares evidence, prunes dead-ends, caches partial states, and then votes. The result: fewer hallucinations, resilience to the “lost in the middle” effect, and accuracy comparable to premium large models—while using a compact backbone.

Why this matters for operators

If your business parses contracts, annual reports, medical SOPs, or call-center transcripts, you’ve likely felt the pain of long-context LLMs: critical details buried mid-document get ignored; retrieval misses cross-paragraph logic; and bigger context windows inflate cost without guaranteeing better reasoning. TOA is a pragmatic middle path: it re-imposes structure on attention—not by scaling a single monolith, but by coordinating multiple lightweight readers with disciplined information exchange. ...
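
A toy sketch of the coordination idea, assuming a `read_and_answer` LLM call; TOA itself also shares evidence, prunes dead-end branches, and caches partial states, which this omits:

```python
from collections import Counter
from itertools import permutations

def tree_of_agents_vote(chunks, question, read_and_answer, max_orders=3):
    """Several lightweight readers see the same chunks in different orders;
    the final answer is a majority vote over their independent reads."""
    answers = []
    for i, order in enumerate(permutations(range(len(chunks)))):
        if i >= max_orders:
            break
        ordered = [chunks[j] for j in order]
        answers.append(read_and_answer(ordered, question))   # one agent per ordering
    return Counter(answers).most_common(1)[0][0]
```

Reading in different orders is the cheap trick that counters position bias: a detail buried "in the middle" of one ordering sits near the top of another.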

September 12, 2025 · 4 min · Zelina

Pieces, Not Puzzles: How ArcMemo Turns LLM Reasoning into Reusable Skills

TL;DR Most memory systems hoard instances (queries, answers, snippets). ArcMemo instead distills concepts—compact, reusable abstractions of what a model learned while solving a problem. Those concepts are retrieved and recombined for new problems. On ARC‑AGI, this design beats strong no‑memory baselines and continues to scale with retries, showing a credible path to lifelong, test‑time learning without finetuning. Why this paper matters The status quo of “inference‑time scaling” is a treadmill: longer chains of thought today, amnesia tomorrow. Enterprises building agentic systems—customer ops copilots, finance/excel agents, or low‑code RPA flows—need their agents to keep what they learn and apply it later without weight updates. ArcMemo’s move from instance‑level to concept‑level memory is the right abstraction boundary: ...
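
A minimal sketch of concept-level memory, assuming an `abstract` LLM call for distillation; the keyword-overlap retrieval here is a placeholder for ArcMemo's actual selection step:

```python
class ConceptMemory:
    """Stores short natural-language abstractions ('concepts') distilled after
    solving problems, rather than raw query/answer instances."""

    def __init__(self):
        self.concepts = []   # e.g. "when a shape repeats, check for a tiling rule"

    def distill(self, solved_problem_trace, abstract):
        # `abstract` is an assumed LLM call turning a solved trace into a concept.
        self.concepts.append(abstract(solved_problem_trace))

    def retrieve(self, new_problem, k=3):
        def overlap(concept):
            return len(set(concept.lower().split()) & set(new_problem.lower().split()))
        return sorted(self.concepts, key=overlap, reverse=True)[:k]

    def prompt_prefix(self, new_problem):
        # Retrieved concepts are prepended to the prompt; no weight updates involved.
        return "Relevant concepts:\n- " + "\n- ".join(self.retrieve(new_problem))
```

The learning loop lives entirely at inference time: solve, distill, store, and prepend what is relevant next time.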

September 8, 2025 · 5 min · Zelina

Deep Queries, Fast Answers: Why ‘Deep Research’ Wants to Be Your New Analytics Runtime

TL;DR Deep Research agents are great at planning over messy data but bad at disciplined execution. Semantic-operator systems are the opposite: they execute efficiently but lack dynamic, cross-file reasoning. The Palimpzest prototype bridges the two with Context, compute/search operators, and materialized context reuse—a credible blueprint for an AI‑native analytics runtime over unstructured data.

The Business Problem: Unstructured Data ≠ SQL

Most companies still funnel PDFs, emails, HTML, and CSVs into brittle ETL or costly human review. Classic OLAP/SaaS BI stacks excel at structured aggregates, but stumble when a question spans dozens of noisy files (e.g., “What’s the 2024 vs 2001 identity‑theft ratio?”) or requires nuanced judgments (e.g., “Which Enron emails contain firsthand discussion of Raptor?”). Two current approaches each miss: ...
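
A rough sketch of the semantic-operator idea in general terms (this is not Palimpzest's real API): declarative filter/map steps over unstructured files, each backed by an assumed LLM call, with a cache standing in for materialized context reuse:

```python
def sem_filter(docs, predicate_prompt, llm_judge, cache):
    """Keep documents an LLM judges to satisfy a natural-language predicate.
    `llm_judge` and `cache` are assumptions; the cache is where reuse across
    repeated or refined queries would pay off."""
    kept = []
    for doc_id, text in docs:
        key = (doc_id, predicate_prompt)
        if key not in cache:
            cache[key] = llm_judge(predicate_prompt, text)   # True / False
        if cache[key]:
            kept.append((doc_id, text))
    return kept

def sem_map(docs, extraction_prompt, llm_extract):
    """Extract one structured field per document, again via an LLM call."""
    return [(doc_id, llm_extract(extraction_prompt, text)) for doc_id, text in docs]

# A "deep query" then reads like an ordinary pipeline:
# rows = sem_map(sem_filter(corpus, "Firsthand discussion of Raptor?", judge, cache),
#                "Who is the sender?", extract)
```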

September 6, 2025 · 5 min · Zelina

Prolog & Paycheck: When Tax AI Shows Its Work

TL;DR Neuro‑symbolic architecture (LLMs + Prolog) turns tax calculation from vibes to verifiable logic. The paper we analyze shows that adding a symbolic solver, selective refusal, and exemplar‑guided parsing can lower the break‑even cost of an AI tax assistant to a fraction of average U.S. filing costs. Even more interesting: chat‑tuned models often beat reasoning‑tuned models at few‑shot translation into logic — a counterintuitive result with big product implications.

Why this matters for operators (not just researchers)

Most back‑office finance work is a chain of (1) rules lookup, (2) calculations, and (3) audit trails. Generic LLMs are great at (1), decent at (2), and historically bad at (3). This work shows a practical path to auditable automation: translate rules and facts into Prolog, compute with a trusted engine, and price the risk of being wrong directly into your product economics. ...
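
To make the handoff concrete, here is an illustrative (not the paper's) shape of the neuro-symbolic split: the LLM's only job is to emit Prolog facts and a query; a trusted engine such as SWI-Prolog does the arithmetic. The rule and the deduction figure below are simplified placeholders:

```python
# Facts an LLM would extract from the taxpayer's documents (illustrative values).
facts = [
    "income(taxpayer, 62000).",
    "filing_status(taxpayer, single).",
    "standard_deduction(single, 14600).",   # placeholder figure for illustration
]

# A simplified rule a rules-lookup step might translate from the tax code.
rules = """
taxable_income(P, T) :-
    income(P, I),
    filing_status(P, S),
    standard_deduction(S, D),
    T is max(I - D, 0).
"""

program = "\n".join(facts) + "\n" + rules
query = "taxable_income(taxpayer, T)."

# In a deployed system this program and query would be handed to a Prolog engine;
# the engine's answer (here T = 47400) is what gets audited, not the LLM's prose.
print(program)
print("?- " + query)
```

The audit trail falls out for free: the facts, the rule, and the query are all plain text a reviewer can re-run.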

August 31, 2025 · 5 min · Zelina

Benchmarks with Benefits: What DeepScholar-Bench Really Measures

TL;DR DeepScholar-Bench introduces a live (continuously refreshable) benchmark and a holistic automated evaluation for generative research synthesis. Its reference pipeline, DeepScholar‑base, is simple yet competitive. The headline: today’s best systems organize text well but miss key facts, under-retrieve important sources, and fail verifiability at scale. That’s not a death knell—it’s a roadmap.

Why this matters for business readers

Enterprise “research copilots” promise to digest the live web, summarize options, and provide auditable citations. In practice, three gaps keep showing up: ...

August 30, 2025 · 5 min · Zelina

RAGulating Compliance: When Triplets Trump Chunks

TL;DR A new multi‑agent pipeline builds an ontology‑light knowledge graph from regulatory text, embeds subject–predicate–object triplets alongside their source snippets in one vector store, and uses triplet‑level retrieval to ground LLM answers. The result: better section retrieval at stricter similarity thresholds, slightly higher answer accuracy, and far stronger navigability across related rules. For compliance teams, the payoff is auditability and explainability baked into the data layer, not just the prompt. ...
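
A compact sketch of what "triplets alongside their source snippets in one vector store" could look like; `embed` is an assumption for any sentence-embedding model returning normalized vectors, and the threshold mirrors the idea of retrieving only at stricter similarity:

```python
import numpy as np

class TripletStore:
    """Each (subject, predicate, object) triplet is embedded and stored next to the
    source snippet it was extracted from, so a hit returns both the graph edge and
    its citable text."""

    def __init__(self, embed):
        self.embed = embed        # assumed: text -> L2-normalized np.ndarray
        self.rows = []            # (vector, triplet, source_snippet)

    def add(self, triplet, source_snippet):
        text = " ".join(triplet)                      # e.g. "insurer must_disclose fees"
        self.rows.append((self.embed(text), triplet, source_snippet))

    def query(self, question, k=5, min_sim=0.35):
        qv = self.embed(question)
        scored = [(float(v @ qv), t, s) for v, t, s in self.rows]
        scored.sort(key=lambda r: r[0], reverse=True)
        # A stricter similarity threshold keeps only well-grounded triplets.
        return [(sim, t, s) for sim, t, s in scored[:k] if sim >= min_sim]
```

Because every retrieved edge carries its snippet, the audit question "where does this answer come from?" is answered by the data layer itself.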

August 16, 2025 · 5 min · Zelina

Breaking the Question Apart: How Compositional Retrieval Reshapes RAG Performance

In the world of Retrieval-Augmented Generation (RAG), most systems still treat document retrieval like a popularity contest — fetch the most relevant-looking text and hope the generator can stitch the answer together. But as any manager who has tried to merge three half-baked reports knows, relevance without completeness is a recipe for failure. A new framework, Compositional Answer Retrieval (CAR), aims to fix that. Instead of asking a retrieval model to find a single “best” set of documents, CAR teaches it to think like a strategist: break the question into its components, retrieve for each, and then assemble the pieces into a coherent whole. ...
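
A minimal sketch of the compositional pattern, with `decompose` and `retrieve` as assumed stand-ins for the trained components rather than CAR's actual models:

```python
def compositional_retrieve(question, decompose, retrieve, k_per_part=3):
    """1) Break the question into components (an LLM call in practice).
    2) Retrieve a small evidence set per component.
    3) Hand the deduplicated union to the generator."""
    parts = decompose(question)            # e.g. ["Q1 revenue", "Q1 headcount", "ratio"]
    evidence = {}
    for part in parts:
        for doc_id, doc in retrieve(part, k=k_per_part):
            evidence.setdefault(doc_id, doc)   # dedupe across components
    return parts, list(evidence.values())
```

The shift is from "top-k most similar" to "cover every component", which is exactly the completeness that single-shot retrieval misses.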

August 11, 2025 · 3 min · Zelina

Search When It Hurts: How UR² Teaches Models to Retrieve Only When Needed

Most “smart” RAG stacks are actually compulsive googlers: they fetch first and think later. UR² (“Unified RAG and Reasoning”) flips that reflex. It trains a model to reason by default and retrieve only when necessary, using reinforcement learning (RL) to orchestrate the dance between internal knowledge and external evidence. Why this matters for builders: indiscriminate retrieval is the silent cost center of LLM systems—extra latency, bigger bills, brittle answers. UR² shows a way to make retrieval selective, structured, and rewarded, yielding better accuracy on exams (MMLU‑Pro, MedQA), real‑world QA (HotpotQA, Bamboogle, MuSiQue), and even math. ...
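
For intuition only, here is the control flow in miniature; UR² learns this decision with reinforcement learning, whereas the fixed confidence threshold and helper names below are our simplification:

```python
def answer_with_selective_retrieval(question, reason, retrieve, confidence, threshold=0.75):
    """Reason by default; retrieve only when the draft answer looks shaky."""
    draft = reason(question)                       # answer from parametric knowledge
    if confidence(question, draft) >= threshold:
        return draft                               # no retrieval: cheaper and faster
    evidence = retrieve(question)                  # fall back to external evidence
    return reason(question, evidence)              # re-answer grounded in the evidence
```

The business case is in the branch that does nothing: every skipped retrieval is latency and spend you did not incur.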

August 11, 2025 · 5 min · Zelina