
When More Becomes Smarter: The Unreasonable Effectiveness of Scaling Agents

From repetition to reasoning
When early computer-use agents (CUAs) appeared, they promised to automate tedious digital workflows—clicking through files, formatting reports, or organizing spreadsheets. Yet anyone who has tried them knows the frustration: sometimes they succeed spectacularly, sometimes they click the wrong button and crash everything. Reliability, not intelligence, has been the missing link. A recent paper from Simular Research, “The Unreasonable Effectiveness of Scaling Agents for Computer Use,” shows that scaling these agents isn’t just about more compute—it’s about how we scale. Their method, Behavior Best-of-N (bBoN), turns the brute-force idea of “run many agents and hope one works” into a structured, interpretable, and near-human-level solution. ...
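To make the "run many agents, then pick the best behavior" idea concrete, here is a minimal sketch of a behavior best-of-N selector. The `Rollout` type, `behavior_narrative`, and `judge` callables are hypothetical stand-ins, not the paper's actual trajectory format or judge prompt.

```python
from dataclasses import dataclass

@dataclass
class Rollout:
    actions: list[str]   # e.g. ["click 'File'", "type 'report.xlsx'", ...]
    outcome: str         # description of the final state

def behavior_narrative(rollout: Rollout) -> str:
    """Compress a raw trajectory into a short, judge-readable behavior summary."""
    return "Steps: " + "; ".join(rollout.actions) + ". Outcome: " + rollout.outcome

def behavior_best_of_n(task: str, rollouts: list[Rollout], judge) -> Rollout:
    """Run-many-then-choose: score each rollout's behavior narrative and keep the best."""
    narratives = [behavior_narrative(r) for r in rollouts]
    scores = [judge(task, n) for n in narratives]  # judge: (task, narrative) -> float
    return rollouts[max(range(len(rollouts)), key=scores.__getitem__)]

# Toy judge: count task keywords in the narrative (stand-in for an LLM judge).
toy_judge = lambda task, narrative: sum(w in narrative.lower() for w in task.lower().split())
```

The point of the narrative step is that the judge compares compact behavior summaries rather than raw, noisy UI traces.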

October 9, 2025 · 3 min · Zelina

Backtrack to Breakthrough: Why Great AI Agents Revisit

TL;DR
Agentic performance isn’t just about doing more; it’s about going back. In GSM-Agent—a controllable, tool-using version of GSM8K—top models only reach ~65–68% accuracy, and the strongest predictor of success is a high revisit ratio: deliberately returning to a previously explored topic with a refined query. That’s actionable for enterprise AI: design agents that can (1) recognize incomplete evidence, (2) reopen earlier lines of inquiry, and (3) instrument and reward revisits. ...
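As a starting point for instrumenting revisits, here is a small sketch of a revisit-ratio metric over a tool-call trace. The `(topic, query)` trace format and the exact definition of a revisit are assumptions for illustration, not GSM-Agent's own bookkeeping.

```python
def revisit_ratio(queries: list[tuple[str, str]]) -> float:
    """Fraction of tool calls that return to an already-explored topic with a new query."""
    seen: dict[str, set[str]] = {}
    revisits = 0
    for topic, query in queries:
        if topic in seen and query not in seen[topic]:
            revisits += 1  # back to an old topic, but with a refined query
        seen.setdefault(topic, set()).add(query)
    return revisits / len(queries) if queries else 0.0

trace = [
    ("alice_income", "What is Alice's salary?"),
    ("bob_income", "What is Bob's salary?"),
    ("alice_income", "Does Alice's salary include the bonus?"),  # a revisit
]
print(round(revisit_ratio(trace), 2))  # 0.33
```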

October 3, 2025 · 4 min · Zelina

Reason, Reveal, Resist: The Persuasion Duality in Multi‑Agent AI

TL;DR
In LLM multi‑agent systems, how a model thinks matters more than how big it is. Explicit reasoning (thinking mode / CoT) creates a Persuasion Duality: sharing a model’s reasoning makes it far better at convincing others, while enabling the model’s own reasoning mode makes it far harder to convince. This shifts best practices for agent design, governance, and product UX.

Why this paper matters
Cognition—not just parameter count—now drives the social dynamics of agent swarms. For Cognaptus clients building agent workers (ops, compliance, research, trading), the result is practical: toggling reasoning changes not just accuracy, but influence. Your deployment choices can tilt a network toward consensus, stalemate, or resilient truth‑seeking. ...

October 2, 2025 · 5 min · Zelina

Branching Out of the Box: Tree‑OPO Turns MCTS Traces into Better RL for Reasoning

The punchline
Tree‑OPO takes something many labs already produce—MCTS rollouts from a stronger teacher—and treats them not just as answers but as a curriculum of prefixes. It then optimizes a student with GRPO-like updates, but with staged, tree-aware advantages instead of a flat group mean. The result in math reasoning (GSM8K) is a modest but consistent bump over standard GRPO while keeping memory/complexity low.

Why this matters for practitioners: you can get more out of your expensive searches (or teacher traces) without training a value model or lugging around teacher logits during student training. ...
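A minimal sketch of the "staged, tree-aware advantage" idea, assuming each completion is tagged with the MCTS prefix it was rolled out from; the paper's actual advantage scheme may propagate values along the tree and be more elaborate than this.

```python
from collections import defaultdict
from statistics import mean

def staged_advantages(samples: list[dict]) -> list[float]:
    """Baseline each completion against siblings that share its tree prefix (its stage),
    instead of the single flat group mean used in vanilla GRPO."""
    by_prefix = defaultdict(list)
    for s in samples:
        by_prefix[s["prefix_id"]].append(s["reward"])
    baseline = {p: mean(rs) for p, rs in by_prefix.items()}
    return [s["reward"] - baseline[s["prefix_id"]] for s in samples]

rollouts = [
    {"prefix_id": "root/a", "reward": 1.0},
    {"prefix_id": "root/a", "reward": 0.0},
    {"prefix_id": "root/b", "reward": 1.0},
    {"prefix_id": "root/b", "reward": 1.0},
]
print(staged_advantages(rollouts))  # [0.5, -0.5, 0.0, 0.0]
```

Grouping by prefix keeps credit assignment local: a completion is only compared against alternatives that started from the same partial solution.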

September 17, 2025 · 5 min · Zelina

Parallel Minds, Shorter Time: ParaThinker’s Native Thought Width

The pitch: We’ve stretched LLM “depth” by making models think longer. ParaThinker flips the axis—training models to think wider: spawn several independent lines of thought in parallel and then fuse them. The result is higher accuracy than single‑path “long thinking” at roughly the same wall‑clock time—and it scales.

TL;DR for operators
What it is: An end‑to‑end framework that natively generates multiple reasoning paths with special control tokens, then summarizes using cached context.
Why it matters: It tackles the test‑time scaling bottleneck (aka Tunnel Vision) where early tokens lock a model into a suboptimal path.
Business takeaway: You can trade a bit of GPU memory for more stable, higher‑quality answers at nearly the same latency—especially on math/logic‑heavy tasks and agentic workflows.

The problem: “Think longer” hits a wall
Sequential test‑time scaling (à la o1 / R1‑style longer CoT) delivers diminishing returns. After a point, more tokens don’t help; they reinforce early mistakes. ParaThinker names this failure mode Tunnel Vision—the first few tokens bias the entire trajectory. If depth traps us, width can free us. ...
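For intuition, here is a prompt-level sketch of "think wider, then fuse". The `llm(prompt, seed)` callable, the path prompts, and the fusion prompt are placeholders; ParaThinker itself trains native control tokens and reuses KV caches during fusion rather than re-prompting.

```python
from concurrent.futures import ThreadPoolExecutor

def think_wide(problem: str, llm, k: int = 4) -> str:
    """Sample k independent reasoning paths in parallel, then fuse them in one pass."""
    path_prompt = lambda i: f"[path {i}] Solve step by step:\n{problem}"
    with ThreadPoolExecutor(max_workers=k) as pool:
        paths = list(pool.map(lambda i: llm(path_prompt(i), seed=i), range(k)))

    fusion_prompt = (
        f"Problem:\n{problem}\n\n"
        + "\n\n".join(f"Candidate reasoning {i + 1}:\n{p}" for i, p in enumerate(paths))
        + "\n\nCombine the candidates into one final answer:"
    )
    return llm(fusion_prompt, seed=0)  # fusion step; the paper reuses cached context here
```

Because the k paths are independent, they avoid the shared early-token bias that causes Tunnel Vision in a single long chain.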

September 11, 2025 · 4 min · Zelina

Razor Burn: Why LLMs Nick Themselves on Induction and Abduction

TL;DR
A new synthetic benchmark (INABHYD) tests inductive and abductive reasoning under Occam’s Razor. LLMs handle toy cases but falter as ontologies deepen or when multiple hypotheses are needed. Even when models “explain” observations, they often pick needlessly complex or trivial hypotheses—precisely the opposite of what scientific discovery and root-cause analysis require.

The Big Idea
Most reasoning work on LLMs obsesses over deduction (step-by-step proofs). But the real world demands induction (generalize rules) and abduction (best explanation). The paper introduces INABHYD, a programmable benchmark that builds fictional ontology trees (concepts, properties, subtype links) and hides some axioms. The model sees an incomplete world + observations, and must propose hypotheses that both explain all observations and do so parsimoniously (Occam’s Razor). The authors score: ...
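To illustrate the selection criterion (explain every observation, prefer the simplest hypothesis set) without guessing the authors' actual metric, here is a toy scorer; `entails` and `complexity` are assumed helpers over the benchmark's ontology, and this is not the INABHYD scoring the excerpt truncates before.

```python
def occam_score(hypotheses: set[str], observations: set[str], entails, complexity) -> float:
    """Toy Occam's Razor scorer: zero if any observation is unexplained, otherwise
    higher for simpler hypothesis sets."""
    if not all(entails(hypotheses, obs) for obs in observations):
        return 0.0                                                    # fails to explain something
    return 1.0 / (1.0 + sum(complexity(h) for h in hypotheses))       # parsimony bonus
```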

September 6, 2025 · 4 min · Zelina

Judge, Jury, and Chain‑of‑Thought: Making Models StepWiser

TL;DR
Generative judges that think before they judge—and are trained with online RL using stepwise labels—beat classic discriminative process reward models (PRMs). The StepWiser approach brings three wins: (1) higher accuracy at spotting the first bad step, (2) cleaner, more reliable inference via a “chunk‑reset” search that prunes bad steps while keeping overall length similar, and (3) better data selection for fine‑tuning.

Why this matters (for builders and buyers)
Most enterprise CoT systems fail not because they can’t produce long reasoning, but because they can’t police their own steps. Traditional PRMs act like a yes/no bouncer at each step—fast, but shallow. StepWiser reframes judging as its own reasoning task: the judge writes an analysis first, then issues a verdict. That small shift has big, practical consequences: ...
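A rough sketch of the "chunk-reset" loop described above: grow the solution one reasoning chunk at a time, let a judge that reasons first pass a verdict, and resample rejected chunks. The `policy` and `judge` callables and the stopping rule are assumptions, not the paper's exact procedure.

```python
def chunk_reset_generate(problem: str, policy, judge,
                         max_chunks: int = 8, max_retries: int = 3) -> list[str]:
    """Keep a chunk only if the generative judge accepts it; otherwise resample it."""
    accepted: list[str] = []
    for _ in range(max_chunks):
        for _ in range(max_retries):
            chunk = policy(problem, accepted)                # propose the next reasoning chunk
            _analysis, ok = judge(problem, accepted, chunk)  # judge writes analysis, then verdict
            if ok:
                accepted.append(chunk)
                break
        else:
            break                                            # judge kept rejecting: stop here
        if chunk.strip().lower().startswith("final answer"):
            break
    return accepted
```

Pruning happens at the chunk level, so the final trace stays roughly the same length as an unfiltered one while dropping the steps the judge flags.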

August 27, 2025 · 4 min · Zelina

Put It on the GLARE: How Agentic Reasoning Makes Legal AI Actually Think

Legal judgment prediction (LJP) is one of those problems that exposes the difference between looking smart and being useful. Most models memorize patterns; judges demand reasons. Today’s paper introduces GLARE—an agentic framework that forces the model to widen its hypothesis space, learn from real precedent logic, and fetch targeted legal knowledge only when it needs it. The result isn’t just higher accuracy; it’s a more auditable chain of reasoning.

TL;DR
What it is: GLARE, an agentic legal reasoning engine for LJP.
Why it matters: It turns “guess the label” into compare-and-justify—exactly how lawyers reason.
How it works: Three modules—Charge Expansion (CEM), Precedents Reasoning Demonstrations (PRD), and Legal Search–Augmented Reasoning (LSAR)—cooperate in a loop.
Proof: Gains of +7.7 F1 (charges) and +11.5 F1 (articles) over direct reasoning; +1.5 to +3.1 F1 over strong precedent‑RAG; double‑digit gains on difficult, long‑tail charges.
So what: If you’re deploying LLMs into legal ops or compliance, agentic structure > bigger base model.

Why “agentic” beats bigger
The usual upgrades—bigger models, more RAG, longer context—don’t address the core failure mode in LJP: premature closure on a familiar charge and surface‑level precedent matching. GLARE enforces a discipline: ...
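The three-module loop might look roughly like the sketch below. `expand_charges` (CEM), `retrieve_precedents` (PRD), `legal_search` (LSAR), and the `llm` verdict format are hypothetical stand-ins; the framework's actual prompts and stopping rules are in the paper.

```python
def glare_predict(case_facts: str, llm, expand_charges, retrieve_precedents, legal_search,
                  max_rounds: int = 3) -> str:
    """Widen the hypothesis space, compare against precedent logic, and fetch targeted
    legal knowledge only when the model flags an open question."""
    candidates = expand_charges(case_facts)                      # CEM: candidate charges
    evidence: list[str] = []
    verdict = {"charge": None, "confident": False, "open_question": ""}
    for _ in range(max_rounds):
        demos = retrieve_precedents(case_facts, candidates)      # PRD: precedent reasoning demos
        verdict = llm(case_facts, candidates, demos, evidence)
        if verdict["confident"]:
            break                                                # commit once justified
        evidence.append(legal_search(verdict["open_question"]))  # LSAR: targeted retrieval
    return verdict["charge"]
```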

August 25, 2025 · 4 min · Zelina

Knows the Facts, Misses the Plot: LLMs’ Knowledge–Reasoning Split in Clinical NLI

The gist
A new clinical natural language inference (NLI) benchmark isolates what models know from how they reason—and the results are stark. State‑of‑the‑art LLMs ace targeted fact checks (≈92% accuracy) but crater on the actual reasoning tasks (≈25% accuracy). The collapse is most extreme in compositional grounding (≈4% accuracy), where a claim depends on multiple interacting clinical constraints (e.g., drug × dose × diagnosis × schedule). Scaling yielded fluent prose, not reliable inference. ...

August 18, 2025 · 4 min · Zelina

Search When It Hurts: How UR² Teaches Models to Retrieve Only When Needed

Most “smart” RAG stacks are actually compulsive googlers: they fetch first and think later. UR² (“Unified RAG and Reasoning”) flips that reflex. It trains a model to reason by default and retrieve only when necessary, using reinforcement learning (RL) to orchestrate the dance between internal knowledge and external evidence. Why this matters for builders: indiscriminate retrieval is the silent cost center of LLM systems—extra latency, bigger bills, brittle answers. UR² shows a way to make retrieval selective, structured, and rewarded, yielding better accuracy on exams (MMLU‑Pro, MedQA), real‑world QA (HotpotQA, Bamboogle, MuSiQue), and even math. ...
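One way to picture "retrieve only when necessary" is a reward that pays for correctness and charges for each retrieval, as in the sketch below. `policy`, `retriever`, `answer_checker`, and the cost value are assumptions; UR²'s actual reward shaping and curriculum are richer than this.

```python
def ur2_episode(question: str, policy, retriever, answer_checker,
                retrieval_cost: float = 0.1, max_steps: int = 6) -> float:
    """Roll out one episode: the policy either retrieves (paying a small cost) or answers."""
    context: list[str] = []
    n_retrievals = 0
    for _ in range(max_steps):
        action = policy(question, context)       # e.g. {"type": "retrieve", "query": ...}
        if action["type"] == "retrieve":
            context.append(retriever(action["query"]))
            n_retrievals += 1
        else:
            correct = answer_checker(question, action["answer"])
            return float(correct) - retrieval_cost * n_retrievals
    return -retrieval_cost * n_retrievals        # never answered within budget
```

Under a reward like this, an RL-trained policy only searches when the expected accuracy gain outweighs the retrieval penalty, which is exactly the selectivity the excerpt argues for.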

August 11, 2025 · 5 min · Zelina