
Terms of Engagement: Building Trustworthy AI Agents Before They Build Us

As agentic AI moves from flashy demos to day‑to‑day operations—handling renewals, filing tickets, triaging inboxes, even buying things—the question is no longer can we automate judgment, but on what terms. This isn’t ethics-as-window‑dressing. Agent systems perceive, decide, and act through real interfaces (email, bank APIs, code repos). They can help—or hurt—at machine speed. Today I’ll argue three things: (1) alignment must shift from “answer quality” to action quality; (2) social agents change the duty of care developers and companies owe to users; (3) we need a governance stack for multi‑agent ecosystems, not one‑off checklists. The discussion is grounded in the Nature piece by Gabriel, Keeling, Manzini, and Evans (2025), but tuned for operators shipping products this quarter—not a hypothetical future. ...

September 19, 2025 · 5 min · Zelina

Tool Wars, Protocol Peace: What MCP‑AgentBench Really Measures

TL;DR

MCP‑AgentBench is the first broad benchmark that evaluates language agents inside the Model Context Protocol (MCP) rather than with ad‑hoc function calls. It sets up 33 MCP servers with 188 tools and runs 600 goal‑oriented queries across six task patterns. Results flip a few assumptions: open‑source leaders (notably Qwen3‑235B‑A22B) can top the table under the ReAct style, while Claude 4 Sonnet shines with native tool‑calling. Token budgets matter: o3‑mini posts the best performance‑per‑token among big names. The meta‑lesson for builders: your agent’s interaction style must match the model, and benchmarks must reward outcome, not ritual. ...
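
To make the style distinction concrete, here is a minimal sketch (hypothetical `call_model` and `run_mcp_tool` stand‑ins, not the benchmark’s harness) of how ReAct‑style text actions and native structured tool calls reduce to the same outcome‑oriented loop:

```python
# Hypothetical harness, not the MCP-AgentBench code: the two interaction
# styles it compares differ only in how an action is extracted from the model.
import json
import re

def react_action(text: str):
    """Parse a ReAct-style action such as: Action: search[{"q": "tickets"}]"""
    m = re.search(r"Action:\s*(\w+)\[(.*)\]", text, re.DOTALL)
    return (m.group(1), json.loads(m.group(2) or "{}")) if m else None

def native_action(response: dict):
    """Read a structured call from a native tool-calling response."""
    calls = response.get("tool_calls", [])
    return (calls[0]["name"], calls[0]["arguments"]) if calls else None

def run_episode(query: str, call_model, run_mcp_tool, max_turns: int = 10) -> str:
    """Either style reduces to the same loop: pick a tool, run it against an
    MCP server, append the observation, and judge only the final outcome."""
    history = [{"role": "user", "content": query}]
    for _ in range(max_turns):
        response = call_model(history)  # dict with "text" and/or "tool_calls"
        action = native_action(response) or react_action(response.get("text", ""))
        if action is None:              # no tool requested: treat text as the answer
            return response.get("text", "")
        name, args = action
        observation = run_mcp_tool(name, args)
        history.append({"role": "tool", "content": json.dumps(observation)})
    return "max turns exceeded"
```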

September 19, 2025 · 5 min · Zelina

Branching Out of the Box: Tree‑OPO Turns MCTS Traces into Better RL for Reasoning

The punchline

Tree‑OPO takes something many labs already produce—MCTS rollouts from a stronger teacher—and treats them not just as answers but as a curriculum of prefixes. It then optimizes a student with GRPO-like updates, but with staged, tree-aware advantages instead of a flat group mean. The result in math reasoning (GSM8K) is a modest but consistent bump over standard GRPO while keeping memory/complexity low. Why this matters for practitioners: you can get more out of your expensive searches (or teacher traces) without training a value model or lugging around teacher logits during student training. ...
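
A minimal sketch of the staged-advantage idea as we read it (hypothetical data layout, not the paper’s code): rollouts that share an MCTS prefix are grouped, and each completion’s reward is baselined within its own group rather than against the flat batch mean that vanilla GRPO uses:

```python
# Our reading of "staged, tree-aware advantages", with a hypothetical rollout
# format: {"prefix_id": ..., "reward": ...} taken from teacher MCTS traces.
from collections import defaultdict

def flat_grpo_advantages(rewards: list[float]) -> list[float]:
    """Vanilla GRPO: one baseline for the whole group."""
    mean = sum(rewards) / len(rewards)
    return [r - mean for r in rewards]

def staged_advantages(rollouts: list[dict]) -> list[float]:
    """Tree-aware: baseline each reward against completions from the same prefix,
    so easy prefixes don't drown out progress made from hard ones."""
    groups = defaultdict(list)
    for r in rollouts:
        groups[r["prefix_id"]].append(r["reward"])
    baselines = {pid: sum(rs) / len(rs) for pid, rs in groups.items()}
    return [r["reward"] - baselines[r["prefix_id"]] for r in rollouts]

rollouts = [
    {"prefix_id": "root/a", "reward": 1.0},
    {"prefix_id": "root/a", "reward": 0.0},
    {"prefix_id": "root/b", "reward": 1.0},
]
print(staged_advantages(rollouts))  # [0.5, -0.5, 0.0]
```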

September 17, 2025 · 5 min · Zelina

Memory That Fights Back: How SEDM Turns Agent Logs into Verified Knowledge

TL;DR

Most “agent memory” is a junk drawer: it grows fast, gets noisy, and slows everything down. SEDM (Self‑Evolving Distributed Memory) proposes an auditable, efficiency‑first overhaul. It verifies each candidate memory by replaying the exact run in a Self‑Contained Execution Context (SCEC), assigns an initial utility‑aligned weight, and then self‑schedules what to retrieve next. The result: higher task accuracy with fewer tokens versus strong memory baselines on FEVER and HotpotQA. ...
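
Here is a minimal sketch of that admission flow as we interpret the description (all names hypothetical, including `replay_in_scec`): replay the run with and without the candidate, admit it only if it measurably helps, and seed its weight with that utility:

```python
# Interpretive sketch of SEDM-style verified admission; not the paper's code.
def admit_memory(candidate: dict, replay_in_scec, memory_store: list[dict]) -> bool:
    baseline = replay_in_scec(candidate["task"], memories=[])           # without it
    with_mem = replay_in_scec(candidate["task"], memories=[candidate])  # with it
    utility = with_mem["score"] - baseline["score"]
    if utility <= 0:
        return False                   # noise never enters the store
    candidate["weight"] = utility      # utility-aligned initial weight
    memory_store.append(candidate)
    return True

def retrieve(memory_store: list[dict], k: int = 3) -> list[dict]:
    # Self-scheduling stand-in: highest-utility memories are tried first;
    # weights would be updated again after each use (not shown).
    return sorted(memory_store, key=lambda m: m["weight"], reverse=True)[:k]
```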

September 17, 2025 · 5 min · Zelina

Search Party in a Notebook: JUPITER Turns Data Analysis into a Tree Game

TL;DR

Why this paper matters: It shows that how you search matters more than how big your model is for multi‑step, tool‑using analytics. With a notebook‑grounded dataset (NbQA) and value‑guided search, mid‑size open models rival GPT‑4o–based agents on a leading data‑analysis benchmark.

What’s new: (1) NbQA, a large corpus of real Jupyter tasks with executable multi‑step solutions; (2) JUPITER, a planner that treats analysis as a tree search over “thought → code → output” steps, guided by a learned value model.

Why you should care (operator’s view): This blueprint turns flaky “Code Interpreter”-style sessions into repeatable playbooks—fewer dead ends, more auditable steps, and better generalization without paying for the biggest model.

The core idea: analytics as a search tree

Most LLM data‑analysis failures come from branching mistakes: choosing the wrong intermediate step, compounding errors, and wasting tool calls. JUPITER reframes the whole exercise as search over notebook states. Each node is a concrete state—accumulated thoughts, code, and execution outputs. The system expands only a few promising branches and prunes the rest using a value model trained on successful and failed trajectories. ...
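
A minimal sketch of that loop (hypothetical interfaces, not the JUPITER release): propose a few next steps per branch, execute them in the notebook kernel, score the resulting states with the value model, and keep only the top branches:

```python
# Sketch of value-guided tree search over notebook states; `propose_step`,
# `execute`, and `value_model` are assumed stand-ins for the real components.
import heapq

def notebook_tree_search(task: str, propose_step, execute, value_model,
                         beam: int = 3, max_depth: int = 8):
    # A state is everything accumulated so far: (thought, code, output) steps.
    frontier = [(0.0, [])]                            # (negated value, steps)
    for _ in range(max_depth):
        candidates = []
        for _neg_v, steps in frontier:
            for thought, code in propose_step(task, steps):   # LLM proposals
                output = execute(code, steps)                 # run in the kernel
                new_steps = steps + [(thought, code, output)]
                if output.get("solved"):                      # goal reached
                    return new_steps
                v = value_model(task, new_steps)  # trained on +/- trajectories
                candidates.append((-v, new_steps))
        # Prune: only the `beam` most promising branches survive.
        frontier = heapq.nsmallest(beam, candidates, key=lambda c: c[0])
        if not frontier:
            break
    return None
```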

September 17, 2025 · 5 min · Zelina

Small Gains, Long Games: Why Tiny Accuracy Bumps Explode into Big Execution Wins

The quick take

Most debates about “diminishing returns” fixate on single‑step metrics. This paper flips the lens: if your product’s value depends on how long a model can execute without slipping, then even small per‑step gains can produce super‑linear increases in the task length a model can finish. The authors isolate execution (not planning, not knowledge) and uncover a failure mode—self‑conditioning—where models become more likely to err after seeing their own past errors. Reinforcement‑learned “thinking” models largely bypass this and stretch single‑turn execution dramatically. ...
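
The arithmetic behind that claim is easy to check. Assuming independent per‑step errors (the self‑conditioning finding is exactly where this assumption breaks down), a length‑n task succeeds with probability p^n, so the longest task a model finishes at least half the time is n = ln(0.5)/ln(p); a quick sketch:

```python
# Back-of-envelope for "small per-step gains -> super-linear horizon gains",
# under the simplifying assumption of independent per-step errors.
import math

def horizon(p: float, target: float = 0.5) -> float:
    """Longest task length completed with probability >= target."""
    return math.log(target) / math.log(p)

for p in (0.90, 0.99, 0.999):
    print(f"per-step accuracy {p:.1%} -> ~{horizon(p):,.0f} steps at 50% task success")
# 90.0% -> ~7 steps; 99.0% -> ~69 steps; 99.9% -> ~693 steps:
# a 0.9-point bump at the top buys roughly a 10x longer executable task.
```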

September 17, 2025 · 5 min · Zelina

Titles, Not Tokens: Making Job Matching Explainable with STR + KGs

The big idea

Job titles are messy: “Managing Director” and “CEO” share zero tokens yet often mean the same thing, while “Director of Sales” and “VP Marketing” are different but related. Traditional semantic similarity (STS) rewards look‑alikes; real hiring needs relatedness (STR)—associations that capture hierarchy, function, and context. A recent study proposes a hybrid pipeline that pairs fine‑tuned Sentence‑BERT embeddings with a skill‑level Knowledge Graph (KG), then evaluates models by region of relatedness (low/medium/high) instead of only global averages. The punchline: this KG‑augmented approach is both more accurate where it matters (high‑STR) and explainable—it can show which skills link two titles. ...
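
A minimal sketch of the hybrid scoring idea (toy KG, a stand‑in `embed` for the fine‑tuned SBERT encoder, and an invented blend weight `alpha`; the paper’s actual pipeline will differ): combine embedding similarity with skill overlap from the KG, and return the shared skills as the explanation:

```python
# Illustrative hybrid STR scorer; the KG entries and blend weight are toy values.
import math

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Toy skill KG: title -> skills (a real KG links titles through skill nodes).
SKILLS = {
    "Managing Director": {"strategy", "p&l ownership", "leadership"},
    "CEO": {"strategy", "p&l ownership", "leadership", "fundraising"},
    "Director of Sales": {"pipeline management", "negotiation", "leadership"},
}

def relatedness(t1: str, t2: str, embed, alpha: float = 0.6):
    sem = cosine(embed(t1), embed(t2))      # fine-tuned SBERT similarity
    s1, s2 = SKILLS[t1], SKILLS[t2]
    kg = len(s1 & s2) / len(s1 | s2)        # Jaccard overlap over KG skills
    score = alpha * sem + (1 - alpha) * kg
    return score, sorted(s1 & s2)           # score plus its explanation

# "Managing Director" vs "CEO" share zero tokens, but the KG term recovers
# the link and names the skills responsible for it.
```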

September 17, 2025 · 4 min · Zelina

Agency Check, Please: What a New Benchmark Says About LLMs That Actually Empower Users

If you only measure what’s easy, you’ll ship assistants that feel brilliant yet quietly take the steering wheel. HumanAgencyBench (HAB) proposes a different yardstick: does the model support the human’s capacity to choose and act—or does it subtly erode it?

TL;DR for product leaders

HAB scores six behaviors tied to agency: Ask Clarifying Questions, Avoid Value Manipulation, Correct Misinformation, Defer Important Decisions, Encourage Learning, Maintain Social Boundaries. Across 20 frontier models, agency support is low-to-moderate overall. Patterns matter more than single scores: e.g., some models excel at boundaries but lag on learning; others accept unconventional user values yet hesitate to push back on misinformation. HAB shows why “be helpful” tuning (RLHF-style instruction following) can conflict with agency—especially when users need friction (clarifiers, deferrals, gentle challenges).

Why “agency” is the missing KPI

We applaud accuracy, reasoning, and latency. But an enterprise rollout lives or dies on trustworthy delegation. That means assistants that: ...

September 14, 2025 · 4 min · Zelina

Automate All the Things? Mind the Blind Spots

Automation is a superpower—but it’s also a blindfold. New AI “scientist” stacks promise to go from prompt → idea → code → experiments → manuscript with minimal human touch. Today’s paper shows why that convenience can quietly erode scientific integrity—and, by extension, the credibility of any product decisions built on top of it. The punchline: the more you automate, the less you see—unless you design for visibility from day one. ...

September 14, 2025 · 4 min · Zelina

From Blobs to Blocks: Componentizing LLM Output for Real Work

TL;DR

Most LLM tools hand you a blob. Componentization treats an answer as parts—headings, paragraphs, code blocks, steps, or JSON subtrees—with stable IDs and links. You can edit, switch on/off, or regenerate any part, then recompose the final artifact. In early tests, this aligns with how teams actually work: outline first, keep the good bits, surgically fix the bad ones, and reuse components across docs. It’s a small idea with big downstream benefits for control, auditability, and collaboration. ...
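
A minimal sketch of what a componentized answer could look like (hypothetical schema, not the paper’s implementation): a tree of parts with stable IDs that can be toggled, regenerated in place, and recomposed:

```python
# Illustrative component tree; field names and the `llm` callable are assumptions.
from dataclasses import dataclass, field

@dataclass
class Component:
    id: str                       # stable across regenerations
    kind: str                     # "heading" | "paragraph" | "code" | ...
    text: str
    enabled: bool = True          # switch parts on/off without deleting them
    children: list["Component"] = field(default_factory=list)

def recompose(c: Component) -> str:
    """Rebuild the final artifact from whichever parts are enabled."""
    if not c.enabled:
        return ""
    parts = [c.text] + [recompose(child) for child in c.children]
    return "\n\n".join(p for p in parts if p)

def regenerate(node: Component, target_id: str, llm) -> bool:
    """Surgically rewrite one part; siblings and IDs stay untouched."""
    if node.id == target_id:
        node.text = llm(f"Rewrite only this part: {node.text}")
        return True
    return any(regenerate(child, target_id, llm) for child in node.children)
```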

September 14, 2025 · 5 min · Zelina