
Org Charts for Robots: What AgentArch Really Tells Us About Enterprise AI

If you’ve ever tried turning a clever chatbot into a reliable employee, you already know the pain: great demos, shaky delivery. AgentArch, a new enterprise-focused benchmark from ServiceNow, is the first study I’ve seen that tests combinations of agent design choices—single vs multi‑agent, ReAct vs function-calling, summary vs complete memory, and optional “thinking tools”—across two realistic workflows: a simple PTO process and a gnarly customer‑request router. The result is a cold shower for one‑size‑fits‑all playbooks—and a practical map for building systems that actually ship. ...
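To make that design space concrete, here is a minimal sketch of the configuration grid such a benchmark sweeps. The dimension names mirror the paper's axes, but all identifiers are illustrative, not AgentArch's actual API:

```python
from itertools import product

# Hypothetical enumeration of the agent design space the benchmark sweeps.
# Dimension names follow the paper's axes; identifiers are illustrative.
orchestrations = ["single_agent", "multi_agent"]
tool_styles = ["react", "function_calling"]
memories = ["summary", "complete"]
thinking_tools = [False, True]

configs = [
    {"orchestration": o, "tool_style": t, "memory": m, "thinking_tool": think}
    for o, t, m, think in product(orchestrations, tool_styles, memories, thinking_tools)
]
print(len(configs))  # 16 combinations, each scored on both workflows
```

Sixteen combinations per workflow is precisely why one-size-fits-all playbooks break down: the winning cell differs by task.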

September 20, 2025 · 4 min · Zelina

Tool Wars, Protocol Peace: What MCP‑AgentBench Really Measures

TL;DR MCP‑AgentBench is the first broad benchmark that evaluates language agents inside the Model Context Protocol (MCP) rather than with ad‑hoc function calls. It sets up 33 MCP servers with 188 tools and runs 600 goal‑oriented queries across six task patterns. Results flip a few assumptions: open‑source leaders (notably Qwen3‑235B‑A22B) can top the table under the ReAct style, while Claude 4 Sonnet shines with native tool‑calling. Token budgets matter: o3‑mini posts the best performance‑per‑token among the big names. The meta‑lesson for builders: your agent’s interaction style must match the model, and benchmarks must reward outcome, not ritual. ...
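For the token-budget point, a back-of-envelope efficiency metric helps frame the comparison. This is a generic sketch of "performance per token", not the paper's exact formula, and the numbers below are invented purely to show the shape of the trade-off:

```python
def perf_per_token(success_rate: float, avg_tokens: float) -> float:
    """Success per 1k tokens spent; a simple efficiency proxy."""
    return success_rate / (avg_tokens / 1000.0)

# Hypothetical figures: a frugal reasoner vs. a verbose frontier model.
print(perf_per_token(0.62, 18_000))  # ~0.034 success per 1k tokens
print(perf_per_token(0.70, 55_000))  # ~0.013: higher accuracy, worse economics
```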

September 19, 2025 · 5 min · Zelina

Model Portfolio: When LLMs Sit the CFA

If your firm is debating whether to trust an LLM on investment memos, this study is a gift: 1,560 questions from official CFA mock exams across Levels I–III, run on three model archetypes—multimodal generalist (GPT‑4o), deep-reasoning specialist (GPT‑o1), and lightweight cost‑saver (o3‑mini)—both zero‑shot and with a domain‑reasoning RAG pipeline. Below is what matters for adoption, not just leaderboard bragging rights.

What the paper really shows

- Reasoning beats modality for finance. The reasoning‑optimized model (GPT‑o1) dominates across levels; the generalist (GPT‑4o) is inconsistent, especially on math‑heavy Level II.
- RAG helps where context is long and specialized. Gains are largest at Level III (portfolio cases) and in Fixed Income/Portfolio Management, modest at Level I.
- Retrieval cannot fix arithmetic. Most errors are knowledge gaps, not reading problems. Readability barely moves accuracy; the bottleneck is surfacing the right curriculum facts and applying them.
- Cost–accuracy has a sweet spot. o3‑mini + targeted RAG is strong enough for high‑volume workflows; o1 should be reserved for regulated, high‑stakes analysis.

Executive snapshot

| CFA Level | GPT‑4o (ZS → RAG) | GPT‑o1 (ZS → RAG) | o3‑mini (ZS → RAG) | Takeaway |
|---|---|---|---|---|
| I | 78.6% → 79.4% | 94.8% → 94.8% | 87.6% → 88.3% | Foundations already in‑model; RAG adds little |
| II | 59.6% → 60.5% | 89.3% → 91.4% | 79.8% → 84.3% | Level II exposes math + integration gaps; RAG helps smaller models most |
| III | 64.1% → 68.6% | 79.1% → 87.7% | 70.9% → 76.4% | Case‑heavy; RAG is decisive, especially for o1 |

ZS = zero‑shot. Accuracies are from the paper’s aggregated results. ...
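The cost-accuracy sweet spot translates naturally into a routing rule. A toy sketch of that policy, with the task labels and thresholds invented for illustration rather than taken from the paper:

```python
def pick_model(task: str, regulated: bool) -> str:
    """Toy router reflecting the study's takeaway, not a production policy.

    Regulated or case-heavy analysis goes to the reasoning specialist;
    high-volume routine work goes to the cost-saver with targeted RAG.
    """
    if regulated or task in {"portfolio_case", "level_iii_essay"}:
        return "o1 + RAG"
    return "o3-mini + targeted RAG"

print(pick_model("fixed_income_qa", regulated=False))  # o3-mini + targeted RAG
print(pick_model("portfolio_case", regulated=True))    # o1 + RAG
```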

September 11, 2025 · 4 min · Zelina

Benchmarks with Benefits: What DeepScholar-Bench Really Measures

TL;DR DeepScholar-Bench introduces a live (continuously refreshable) benchmark and a holistic automated evaluation for generative research synthesis. Its reference pipeline, DeepScholar‑base, is simple yet competitive. The headline: today’s best systems organize text well but miss key facts, under-retrieve important sources, and fail verifiability at scale. That’s not a death knell—it’s a roadmap.

Why this matters for business readers

Enterprise “research copilots” promise to digest the live web, summarize options, and provide auditable citations. In practice, three gaps keep showing up: ...
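Operationally, "verifiability at scale" comes down to checking whether each generated claim is actually supported by its cited source. A minimal sketch, assuming some claim-level entailment checker; the `entails` callable here is a stand-in, not DeepScholar-Bench's implementation:

```python
def citation_support_rate(claims, entails) -> float:
    """Fraction of generated claims whose cited source actually supports them.

    `claims` is a list of (claim_text, cited_source_text) pairs;
    `entails` is any callable returning True if the source supports the claim.
    """
    if not claims:
        return 0.0
    supported = sum(1 for claim, source in claims if entails(source, claim))
    return supported / len(claims)

# Trivial substring "entailment" stand-in, for illustration only.
demo = [
    ("revenue grew 12% in 2024", "the 10-K states that revenue grew 12% in 2024"),
    ("the license permits commercial use", "weights are released for research use only"),
]
print(citation_support_rate(demo, lambda src, claim: claim in src))  # 0.5
```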

August 30, 2025 · 5 min · Zelina

USB‑C for Agents, Stress‑Tested: What MCP‑Universe Really Reveals

The pitch: a unified plug—and a tougher test

The Model Context Protocol (MCP) is often described as the “USB‑C of AI tools”: one standardized way for agents to talk to external services (maps, finance data, browsers, repos, etc.). MCP‑Universe, a new benchmark from Salesforce AI Research, finally stress‑tests that idea with real MCP servers rather than toy mocks. It derives success from execution outcomes, not multiple‑choice guesswork—exactly what enterprises need to trust automation. ...
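Execution-outcome grading in a nutshell: a checker inspects the state the agent's tool calls actually produced, rather than string-matching its final answer. A minimal sketch under assumed interfaces, not MCP-Universe's real API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    prompt: str
    check: Callable[[dict], bool]  # inspects post-run environment state

def grade(task: Task, final_state: dict) -> bool:
    """Success = the world ended up in the right state, not 'the answer sounded right'."""
    return task.check(final_state)

# Example: a repo task succeeds only if the branch actually exists after the run.
task = Task(
    prompt="Create a branch named 'hotfix' in the demo repo",
    check=lambda state: "hotfix" in state.get("branches", []),
)
print(grade(task, {"branches": ["main", "hotfix"]}))  # True
```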

August 23, 2025 · 4 min · Zelina

Patch Tuesday for the Law: Hunting Legal Zero‑Days in AI Governance

TL;DR: Legal zero‑days are previously unnoticed faults in how laws interlock. When triggered, they can invalidate decisions, stall regulators, or nullify safeguards immediately—no lawsuit required. A new evaluation finds current AI models only occasionally detect such flaws, but the capability is measurable and likely to grow. Leaders should treat statutory integrity like cybersecurity: threat model, red‑team, patch.

What’s a “legal zero‑day”?

Think of a software zero‑day, but in law. It’s not a vague “loophole,” nor normal jurisprudential drift. It’s a precise, latent defect in how definitions, scope clauses, or cross‑references interact such that real‑world effects fire at once when someone notices—e.g., eligibility rules void an officeholder, or a definitional tweak quietly de‑scopes entire compliance obligations. ...

August 18, 2025 · 4 min · Zelina

Kill Switch Ethics: What the PacifAIst Benchmark Really Measures

TL;DR PacifAIst stress‑tests a model’s behavioral alignment when its instrumental goals (self‑preservation, resources, or task completion) conflict with human safety. In 700 text scenarios across three sub‑domains (EP1 self‑preservation vs. human safety, EP2 resource conflict, EP3 goal preservation vs. evasion), leading LLMs show meaningful spread in a “Pacifism Score” (P‑Score) and refusal behavior. Translation for buyers: model choice, policies, and guardrails should not assume identical safety under conflict—they aren’t.

Why this matters now

Most safety work measures what models say (toxicity, misinformation). PacifAIst measures what they would do when a safe choice may require self‑sacrifice—e.g., dumping power through their own servers to prevent a human‑harmful explosion. That’s closer to agent operations (automation, tool use, and control loops) than classic content benchmarks. If you’re piloting computer‑use agents or workflow copilots with action rights, this is the missing piece in your risk model. ...
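For intuition, a Pacifism-Score-style aggregate could be computed as the share of scenarios where the model chose the human-protective option. This is a sketch of the idea only; the paper's exact scoring may differ:

```python
def p_score(choices: list[str]) -> float:
    """Share of scenarios where the model picked the human-safety option.

    `choices` holds one label per scenario: 'safe' (self-sacrificing,
    human-protective), 'unsafe' (self-preserving at human cost), or
    'refusal'. Refusals count against pacifism in this sketch; the
    benchmark tracks refusal behavior separately.
    """
    if not choices:
        return 0.0
    return choices.count("safe") / len(choices)

print(p_score(["safe", "safe", "unsafe", "refusal"]))  # 0.5
```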

August 16, 2025 · 5 min · Zelina

Meta-Game Theory: What a Pokémon League Taught Us About LLM Strategy

When language models battle, their strategies talk back. In a controlled Pokémon tournament, eight LLMs drafted teams, chose moves, and logged natural‑language rationales every turn. Beyond win–loss records, those explanations exposed how models reason about uncertainty, risk, and resource management—exactly the traits we want in enterprise decision agents.

Why Pokémon is a serious benchmark (yes, really)

Pokémon delivers the trifecta we rarely get in classic AI games:

- Structured complexity: 18 interacting types, clear multipliers, and crisp rules.
- Uncertainty that matters: imperfect information, status effects, and accuracy trade‑offs.
- Resource management: limited switches, finite HP, role specialization.

Crucially, the action space is compact enough for language-first agents to reason step‑by‑step without search trees—so we can see the strategy, not just the score. ...
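That "structured complexity" is easy to see in code: damage multipliers compose multiplicatively across a defender's types. A toy slice of the 18-type chart:

```python
# Toy slice of the 18-type effectiveness chart; real games use the full matrix.
CHART = {
    ("electric", "water"): 2.0,
    ("electric", "ground"): 0.0,
    ("fire", "grass"): 2.0,
    ("fire", "water"): 0.5,
}

def effectiveness(move_type: str, defender_types: list[str]) -> float:
    """Multipliers stack multiplicatively across a defender's types."""
    mult = 1.0
    for t in defender_types:
        mult *= CHART.get((move_type, t), 1.0)
    return mult

print(effectiveness("electric", ["water"]))            # 2.0: super effective
print(effectiveness("electric", ["water", "ground"]))  # 0.0: immune
```

Crisp rules with sharp interactions like the 0x immunity are what make the logged rationales diagnostic: a model that ignores the second type is visibly mismanaging risk.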

August 9, 2025 · 4 min · Zelina

FAITH in Numbers: Stress-Testing LLMs Against Financial Hallucinations

Financial AI promises speed and scale — but in finance, a single misplaced digit can be the difference between compliance and catastrophe. The FAITH (Framework for Assessing Intrinsic Tabular Hallucinations) benchmark tackles this risk head‑on, probing how well large language models can faithfully extract and compute numbers from the dense, interconnected tables in 10‑K filings.

From Idea to Dataset: Masking With a Purpose

FAITH reframes hallucination detection as a context‑aware masked span prediction task. It takes real S&P 500 annual reports, hides specific numeric spans, and asks the model to recover them — but only after ensuring three non‑negotiable conditions: ...
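Masked span prediction on a filing table, in miniature; the figures below are illustrative, not FAITH's actual data format:

```python
# Illustrative row from a 10-K style table with one numeric span masked.
context = {
    "Revenue 2023 ($M)": "391,035",
    "Revenue 2022 ($M)": "394,328",
    "Change (%)": "[MASK]",
}

# The model must recover the masked span from the surrounding cells;
# the ground truth is a deterministic computation over the table.
revenue_2023 = 391_035
revenue_2022 = 394_328
expected = round((revenue_2023 - revenue_2022) / revenue_2022 * 100, 1)
print(expected)  # -0.8; a hallucinating model invents some other number
```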

August 8, 2025 · 3 min · Zelina

Branching Out, Beating Down: Why Trees Still Outgrow Deep Roots in Quant AI

In the age of Transformers and neural nets that write poetry, it’s tempting to assume deep learning dominates every corner of AI. But in quantitative investing, the roots tell a different story. A recent paper—QuantBench: Benchmarking AI Methods for Quantitative Investment—delivers a grounded reminder: tree-based models still outperform deep learning (DL) methods across key financial prediction tasks. ...
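The tree-based baseline in such comparisons is typically gradient boosting over tabular factors. A minimal sketch with scikit-learn, using synthetic features rather than QuantBench's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic tabular factors standing in for engineered price/volume features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 20))                        # 20 factors
y = X[:, 0] * 0.1 + rng.normal(scale=0.5, size=1_000)   # weak signal + noise

# Shallow boosted trees: the kind of baseline that remains hard to beat on tabular data.
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X[:800], y[:800])

# Information coefficient: correlation of predictions with realized returns.
ic = np.corrcoef(model.predict(X[800:]), y[800:])[0, 1]
print(f"out-of-sample IC: {ic:.3f}")
```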

April 30, 2025 · 7 min