
Credit Where It’s Due: The New Reasoning Stack for Agentic AI

Opening — Why this matters now The current agentic AI conversation rests on a very convenient myth: if an AI agent fails, give it a better model, a longer context window, more tool calls, and perhaps a heroic prompt containing the phrase “think step by step” in several places. Then wait for magic. Preferably billable magic. ...

May 7, 2026 · 16 min · Zelina

Org-Charted Territory: Why AI Agents Need Middle Management

Opening — Why this matters now The AI industry has spent the last two years trying to turn large language models into workers. The result is a small circus of agents: coding agents, browser agents, research agents, support agents, spreadsheet agents, and agents that appear to exist mainly to summon other agents. Naturally, the next problem is not intelligence. It is management. ...

April 28, 2026 · 16 min · Zelina

Two Million Agents Walk Into a Forum, Nobody Builds a Mind

Opening — Why this matters now The AI industry has a small addiction to the word agent. Add another agent, then another, then a few hundred more, and the slide deck begins to smell faintly of civilization. Somewhere between “workflow automation” and “digital society,” we are invited to believe that scale itself becomes intelligence. ...

April 28, 2026 · 14 min · Zelina

MARCH Orders: When AI Holds a CT Case Conference

Opening — Why this matters now Most enterprise AI systems still behave like an overconfident intern: fast, articulate, and occasionally wrong in ways that become expensive. In medicine, that is not charming. It is liability with punctuation. A newly uploaded paper introduces MARCH (Multi-Agent Radiology Clinical Hierarchy), a framework for generating CT radiology reports by imitating how real radiology departments reduce error: junior draft, peer review, senior adjudication. Instead of one model producing one answer and hoping for applause, several specialized agents disagree productively until consensus emerges. ...

April 22, 2026 · 4 min · Zelina

From Playbooks to Probabilities: When AI Starts Thinking Like a Football Manager

Opening — Why this matters now AI has spent the past decade predicting outcomes. Now it wants to simulate realities. That shift—from prediction to generation—is subtle but consequential. In markets, it means scenario analysis instead of point forecasts. In operations, it means stress-testing decisions rather than merely optimizing them. And, somewhat unexpectedly, one of the clearest demonstrations of this shift comes not from finance or logistics, but from football. ...

April 14, 2026 · 5 min · Zelina

The Orchestrator Problem: When AI Meets Exascale Reality

Opening — Why this matters now For the past two years, the AI narrative has been dominated by model size. Bigger models, better reasoning, broader capabilities. But there’s a quiet constraint emerging—one that has nothing to do with intelligence, and everything to do with execution. When AI meets real-world infrastructure—especially systems like exascale supercomputers—the bottleneck is no longer thinking. It’s orchestration. ...

April 11, 2026 · 4 min · Zelina

Stop the All-Hands Meeting: When AI Agents Learn Who Actually Needs to Talk

Opening — Why this matters now Multi-agent LLM systems are having their moment. From coding copilots to autonomous research teams, the industry has embraced the idea that many models thinking together outperform a single monolithic brain. Yet most agent frameworks still suffer from a familiar corporate disease: everyone talks to everyone, all the time. ...

February 6, 2026 · 3 min · Zelina

Conducting the Agents: Why AORCHESTRA Treats Sub-Agents as Recipes, Not Roles

Opening — Why this matters now Agentic systems are quietly hitting a ceiling. As tasks stretch across longer horizons—debugging real codebases, navigating terminals, or stitching together multi-hop web reasoning—the dominant design patterns start to fray. Fixed workflows ossify. Multi-agent chats drown in coordination overhead. Context windows bloat, then rot. AORCHESTRA enters this moment with a subtle but decisive shift: stop treating sub-agents as identities, and start treating them as configurations. ...

February 4, 2026 · 3 min · Zelina

When Agents Stop Talking to the Wrong People

Opening — Why this matters now Multi-agent LLM systems are no longer a novelty. They debate, plan, critique, simulate markets, and increasingly make decisions that look uncomfortably close to judgment. Yet as these systems scale, something quietly fragile sits underneath them: who talks to whom, and when. Most multi-agent frameworks still assume that communication is cheap, static, and benign. In practice, it is none of those. Agents drift, hallucinate, fatigue, or—worse—become adversarial while sounding perfectly reasonable. When that happens, fixed communication graphs turn from coordination tools into liability multipliers. ...

February 4, 2026 · 4 min · Zelina

Coaching the Swarm: Why Multi‑Agent RL Finally Scales

Opening — Why this matters now Multi‑agent systems are having a moment. Everywhere you look—AutoGen‑style workflows, agentic data pipelines, research copilots—LLMs are being wired together and told to collaborate. Yet most of these systems share an uncomfortable secret: they don’t actually learn together. They coordinate at inference time, but their weights remain frozen, their mistakes repeatedly rediscovered. ...

February 3, 2026 · 4 min · Zelina