
Pods over Prompts: Shachi’s Playbook for Serious Agent-Based Simulation

TL;DR Shachi is a modular methodology for building LLM-driven agent-based models (ABMs) that replaces ad‑hoc prompt spaghetti with four standardized cognitive components—Configs, Memory, Tools, and an LLM reasoning core. The result: agents you can port across environments, benchmark rigorously, and use to study nontrivial dynamics like tariff shocks with externally valid outcomes. For enterprises, Shachi is the missing method for turning agent demos into decision simulators.

Why this paper matters to operators (not just researchers)

Most enterprise “agent” pilots die in the gap between a clever demo and a reliable simulator that leaders can trust for planning. Shachi closes that gap by: ...
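To make the four-component decomposition concrete, here is a minimal sketch of an agent assembled from Configs, Memory, Tools, and an LLM reasoning core. The class names and the step() wiring are illustrative assumptions, not Shachi's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class AgentConfig:            # Configs: declarative parameters, not prompt text
    role: str
    objective: str
    temperature: float = 0.2

@dataclass
class Memory:                 # Memory: state the agent carries across steps
    events: List[str] = field(default_factory=list)
    def recall(self, k: int = 5) -> List[str]:
        return self.events[-k:]

@dataclass
class ShachiStyleAgent:
    config: AgentConfig
    memory: Memory
    tools: Dict[str, Callable[[str], str]]   # Tools: callable capabilities
    llm: Callable[[str], str]                # LLM reasoning core

    def step(self, observation: str) -> str:
        prompt = (f"Role: {self.config.role}\nGoal: {self.config.objective}\n"
                  f"Memory: {self.memory.recall()}\nObservation: {observation}")
        action = self.llm(prompt)            # reasoning core decides the next action
        self.memory.events.append(f"{observation} -> {action}")
        return action

# Because the environment only touches step(), the same agent object can be
# benchmarked in one setting and ported to another (e.g. a tariff-shock sim).
```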

October 3, 2025 · 5 min · Zelina

Tool Wars, Protocol Peace: What MCP‑AgentBench Really Measures

TL;DR MCP‑AgentBench is the first broad benchmark that evaluates language agents inside the Model Context Protocol (MCP) rather than with ad‑hoc function calls. It sets up 33 MCP servers with 188 tools and runs 600 goal‑oriented queries across six task patterns. Results flip a few assumptions: open‑source leaders (notably Qwen3‑235B‑A22B) can top the table under the ReAct style, while Claude 4 Sonnet shines with native tool‑calling. Token budgets matter: o3‑mini posts the best performance‑per‑token among big names. The meta‑lesson for builders: your agent’s interaction style must match the model, and benchmarks must reward outcome, not ritual. ...
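The "interaction style" point is easiest to see in code. Below is a toy ReAct loop over a generic tool registry; the call_llm stub and the Action: tool[input] text convention are assumptions for illustration, not the benchmark's harness. Native tool-calling would instead let the API return a structured function call, skipping the text parsing entirely.

```python
import re
from typing import Callable, Dict

def react_style(call_llm: Callable[[str], str],
                tools: Dict[str, Callable[[str], str]],
                query: str, max_turns: int = 5) -> str:
    """ReAct: the model writes 'Action: tool[input]' as text; we parse and execute it."""
    transcript = f"Question: {query}\n"
    for _ in range(max_turns):
        out = call_llm(transcript)
        match = re.search(r"Action:\s*(\w+)\[(.*)\]", out)
        if not match:                      # no tool call -> treat the output as final
            return out
        name, arg = match.group(1), match.group(2)
        result = tools.get(name, lambda _: "unknown tool")(arg)
        transcript += f"{out}\nObservation: {result}\n"
    return transcript
```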

September 19, 2025 · 5 min · Zelina

From Blobs to Blocks: Componentizing LLM Output for Real Work

TL;DR Most LLM tools hand you a blob. Componentization treats an answer as parts—headings, paragraphs, code blocks, steps, or JSON subtrees—with stable IDs and links. You can edit, switch on/off, or regenerate any part, then recompose the final artifact. In early tests, this aligns with how teams actually work: outline first, keep the good bits, surgically fix the bad ones, and reuse components across docs. It’s a small idea with big downstream benefits for control, auditability, and collaboration. ...
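A minimal sketch of the idea: an answer as a tree of parts with stable IDs that can be disabled or regenerated individually, then recomposed. The Component class below is hypothetical, not a published API.

```python
from dataclasses import dataclass, field
from typing import List
import uuid

@dataclass
class Component:
    kind: str                     # "heading", "paragraph", "code", "step", ...
    text: str
    id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])  # stable ID
    enabled: bool = True
    children: List["Component"] = field(default_factory=list)

def recompose(node: Component) -> str:
    """Flatten the enabled parts of the tree back into a document."""
    if not node.enabled:
        return ""
    lines = [node.text] if node.text else []
    for child in node.children:
        rendered = recompose(child)
        if rendered:
            lines.append(rendered)
    return "\n".join(lines)

doc = Component("doc", "", children=[
    Component("heading", "Quarterly plan"),
    Component("paragraph", "Draft intro (keep)"),
    Component("paragraph", "Weak section (disable, then regenerate)", enabled=False),
])
print(recompose(doc))   # only enabled components survive recomposition
```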

September 14, 2025 · 5 min · Zelina

HyFedRAG: Caching Privacy into Federated RAG

Centralized Retrieval-Augmented Generation (RAG) systems promise smarter answers, but they quietly assume one big, clean dataset in one place. Reality is far messier: hospitals, insurers, or financial groups each hold their own silo, often in incompatible formats, and none are willing—or legally allowed—to pool raw data. The HyFedRAG framework tackles this head‑on by making RAG federated, heterogeneous, and privacy‑aware.

Edge First, Cloud Second

Instead of centralizing records, HyFedRAG runs retrieval at the edge. Each hospital or business unit: ...
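A simplified sketch of the edge-first flow, with toy lexical retrieval and caller-supplied redaction and generation functions. All names below are illustrative, not HyFedRAG's interfaces.

```python
from typing import Callable, List

def edge_retrieve(local_docs: List[str], query: str,
                  redact: Callable[[str], str]) -> List[str]:
    """Runs inside each silo: retrieve locally, strip identifiers before anything leaves."""
    hits = [d for d in local_docs if query.lower() in d.lower()]   # toy lexical match
    return [redact(d) for d in hits[:3]]

def federated_answer(silos: List[List[str]], query: str,
                     redact: Callable[[str], str],
                     generate: Callable[[str, List[str]], str]) -> str:
    """The cloud side only ever sees redacted snippets, never raw records."""
    evidence: List[str] = []
    for docs in silos:
        evidence.extend(edge_retrieve(docs, query, redact))
    return generate(query, evidence)
```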

September 12, 2025 · 3 min · Zelina

Judgment Day for RAG: How L‑MARS Cuts Legal Hallucinations by Design

TL;DR — L‑MARS replaces single‑pass RAG with a judge‑in‑the‑loop multi‑agent workflow that iteratively searches, checks sufficiency (jurisdiction, date, authority), and only then answers. On a 200‑question LegalSearchQA benchmark of current‑year questions, it reports major gains vs. pure LLMs, at the cost of latency. For regulated industries, the architecture—not just the model—does the heavy lifting.

What’s actually new here

Most legal QA failures aren’t from weak language skills—they’re from missing or outdated authority. L‑MARS tackles this with three design commitments: ...
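The control flow is the interesting part, and it fits in a few lines. This sketch assumes caller-supplied search, judge, and answer agents; the three sufficiency checks come from the paper, everything else is illustrative.

```python
from typing import Callable, Dict, List

CHECKS = ("jurisdiction", "date", "authority")   # sufficiency dimensions named in the paper

def judge_in_the_loop(search: Callable[[str], List[str]],
                      judge: Callable[[str, List[str]], Dict[str, bool]],
                      answer: Callable[[str, List[str]], str],
                      question: str, max_rounds: int = 3) -> str:
    evidence: List[str] = []
    for _ in range(max_rounds):
        evidence += search(question)
        verdict = judge(question, evidence)      # e.g. {"jurisdiction": True, "date": False, ...}
        if all(verdict.get(c, False) for c in CHECKS):
            break                                # only answer once evidence is judged sufficient
    return answer(question, evidence)
```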

September 4, 2025 · 4 min · Zelina

Numbers Need Narration: Making LLMs Do Reasoning‑Intensive Regression

Thesis: When the job is to read text, reason carefully, and return a precise number (not just a label), ordinary regression heads and vanilla prompting often fail in opposite ways. The paper introduces MENTAT, a lightweight recipe that marries batch‑reflective prompt evolution with a small MLP aggregator over multiple LLM rollouts. The result: tighter calibration and better ranking on tasks where each example demands real reasoning, not surface features.

What counts as “Reasoning‑Intensive Regression” (RiR)?

RiR tasks look like this: the model must (1) think through the input with step‑wise analysis, and then (2) score it on a real‑valued scale. The paper frames three such tasks: ...
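To see what the aggregation half looks like, here is a toy version that fits a small MLP over sorted per-example rollout scores. The feature layout and the scikit-learn usage are assumptions for illustration, not MENTAT's exact recipe.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def rollout_features(scores_per_example):
    # Sort each example's rollout scores so the features are order-invariant.
    return np.sort(np.asarray(scores_per_example, dtype=float), axis=1)

# Three rollouts per example; targets are the true real-valued scores (toy data).
train_scores = [[0.2, 0.4, 0.3], [0.7, 0.9, 0.8], [0.1, 0.2, 0.15]]
train_targets = [0.35, 0.85, 0.12]

agg = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
agg.fit(rollout_features(train_scores), train_targets)

print(agg.predict(rollout_features([[0.5, 0.6, 0.55]])))  # aggregated point estimate
```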

September 1, 2025 · 4 min · Zelina

Stackelbergs & Stakeholders: Turning Bits into Boardroom Moves

TL;DR: BusiAgent proposes a client‑centric, multi‑agent LLM framework that formalizes roles (CEO/CFO/CTO/MM/PM) with an extended Continuous‑Time MDP, coordinates them via entropy‑guided brainstorming (peer‑level) and multi‑level Stackelberg games (vertical), and squeezes extra performance from contextual Thompson sampling for prompt optimization—wrapped in a QA stack that fuses STM/LTM memories with a knowledge base. It’s a serious attempt to connect granular analytics to boardroom decisions. The big win is organizational alignment; the big risks are evaluation rigor, token economics, and ops reliability at scale. ...
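Of the machinery listed above, contextual Thompson sampling is the easiest to show in miniature. The bandit below keeps a Beta posterior per (context, prompt) pair and is purely illustrative, with made-up context keys and a binary reward, not BusiAgent's implementation.

```python
import random
from collections import defaultdict
from typing import List

class PromptBandit:
    """Contextual Thompson sampling over prompt variants with a binary success signal."""
    def __init__(self, prompts: List[str]):
        self.prompts = prompts
        self.alpha = defaultdict(lambda: 1.0)   # Beta-posterior successes per (context, prompt)
        self.beta = defaultdict(lambda: 1.0)    # Beta-posterior failures

    def choose(self, context: str) -> str:
        draws = {p: random.betavariate(self.alpha[(context, p)], self.beta[(context, p)])
                 for p in self.prompts}
        return max(draws, key=draws.get)        # sample from each posterior, pick the best draw

    def update(self, context: str, prompt: str, success: bool) -> None:
        key = (context, prompt)
        if success:
            self.alpha[key] += 1
        else:
            self.beta[key] += 1
```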

August 24, 2025 · 5 min · Zelina

Click Less, Do More: Why API-GUI + RL Could Finally Make Desktop Agents Useful

The gist (and why it matters for business)

Enterprise buyers don’t reward demos; they reward repeatable completions per dollar. ComputerRL proposes a path to that by (1) escaping pure GUI mimicry via a machine-first API-GUI action space, (2) scaling online RL across thousands of Ubuntu VMs, and (3) preventing policy entropy collapse with Entropulse—a cadence that alternates RL and supervised fine-tuning (SFT) on successful rollouts. The result: a reported 48.1% OSWorld success with markedly fewer steps than GUI-only agents. Translation for buyers: lower latency, lower cost, higher reliability. ...
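The Entropulse cadence itself is simple to state. The sketch below assumes caller-supplied RL and SFT phases and only shows the alternation and the filter down to successful rollouts; it is not ComputerRL's training code.

```python
from typing import Callable, List, Tuple

Rollout = Tuple[str, bool]   # (trajectory id, task completed?)

def entropulse_cycle(policy,
                     run_rl_phase: Callable[[object], List[Rollout]],
                     run_sft_phase: Callable[[object, List[Rollout]], object],
                     cycles: int = 3):
    """Alternate online RL with SFT on successful rollouts (the Entropulse cadence)."""
    for _ in range(cycles):
        rollouts = run_rl_phase(policy)              # explore: online RL across many VMs
        successes = [r for r in rollouts if r[1]]    # keep only completed tasks
        policy = run_sft_phase(policy, successes)    # consolidate; restores policy entropy
    return policy
```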

August 20, 2025 · 5 min · Zelina

Memory With Intent: Why LLMs Need a Cognitive Workspace, Not Just a Bigger Window

TL;DR Today’s long-context and RAG systems scale storage, not thinking. Cognitive Workspace (CW) reframes memory as an active, metacognitive process: curate, plan, reuse, and consolidate. In tests, CW reports ~55–60% memory reuse and 17–18% net efficiency gains despite a 3.3× operation overhead—precisely because it thinks about what to remember and why.

The Setup: Context ≠ Cognition

Over the past 18 months we’ve cheered >1M-token windows and slicker attention kernels. But piling tokens into a context is like dumping files on a desk; it’s storage without stewardship. In knowledge work, what moves the needle is not how much you can “see” but how well you organize, recall, and reuse—with intent. ...
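A minimal sketch of what "active" memory means operationally: items are curated in deliberately, reuse is tracked, and consolidation drops what never earns its keep. Class and method names are illustrative, not CW's implementation.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CognitiveWorkspace:
    items: Dict[str, str] = field(default_factory=dict)
    reuse_count: Dict[str, int] = field(default_factory=dict)

    def curate(self, key: str, content: str) -> None:
        self.items[key] = content                  # deliberate admission, not raw accumulation
        self.reuse_count[key] = 0

    def recall(self, key: str) -> str:
        self.reuse_count[key] = self.reuse_count.get(key, 0) + 1   # reuse is tracked...
        return self.items[key]

    def consolidate(self, min_reuse: int = 1) -> None:
        # ...so rarely reused items can be dropped or summarized instead of hoarded.
        self.items = {k: v for k, v in self.items.items()
                      if self.reuse_count[k] >= min_reuse}
```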

August 20, 2025 · 5 min · Zelina

Search When It Hurts: How UR² Teaches Models to Retrieve Only When Needed

Most “smart” RAG stacks are actually compulsive googlers: they fetch first and think later. UR² (“Unified RAG and Reasoning”) flips that reflex. It trains a model to reason by default and retrieve only when necessary, using reinforcement learning (RL) to orchestrate the dance between internal knowledge and external evidence. Why this matters for builders: indiscriminate retrieval is the silent cost center of LLM systems—extra latency, bigger bills, brittle answers. UR² shows a way to make retrieval selective, structured, and rewarded, yielding better accuracy on exams (MMLU‑Pro, MedQA), real‑world QA (HotpotQA, Bamboogle, MuSiQue), and even math. ...
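As a mental model (UR² learns this gate with RL rather than hard-coding it), selective retrieval looks roughly like the sketch below, with caller-supplied reasoning, confidence, and retrieval functions; all names are illustrative.

```python
from typing import Callable, List

def answer_with_selective_retrieval(question: str,
                                    reason: Callable[[str, List[str]], str],
                                    confident: Callable[[str], bool],
                                    retrieve: Callable[[str], List[str]]) -> str:
    """Reason by default; retrieve only when the model signals it lacks the knowledge."""
    draft = reason(question, [])         # first attempt from internal knowledge only
    if confident(draft):                 # e.g. a learned or self-reported sufficiency check
        return draft                     # no retrieval latency or cost incurred
    evidence = retrieve(question)        # fetch external evidence only when it actually helps
    return reason(question, evidence)
```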

August 11, 2025 · 5 min · Zelina