
When Agents Stop Talking to the Wrong People

Opening — Why this matters now: Multi-agent LLM systems are no longer a novelty. They debate, plan, critique, simulate markets, and increasingly make decisions that look uncomfortably close to judgment. Yet as these systems scale, something quietly fragile sits underneath them: who talks to whom, and when. Most multi-agent frameworks still assume that communication is cheap, static, and benign. In practice, it is none of those. Agents drift, hallucinate, fatigue, or—worse—become adversarial while sounding perfectly reasonable. When that happens, fixed communication graphs turn from coordination tools into liability multipliers. ...

February 4, 2026 · 4 min · Zelina

Coaching the Swarm: Why Multi‑Agent RL Finally Scales

Opening — Why this matters now: Multi‑agent systems are having a moment. Everywhere you look—AutoGen‑style workflows, agentic data pipelines, research copilots—LLMs are being wired together and told to collaborate. Yet most of these systems share an uncomfortable secret: they don’t actually learn together. They coordinate at inference time, but their weights remain frozen, their mistakes repeatedly rediscovered. ...

February 3, 2026 · 4 min · Zelina

ThinkSafe: Teaching Models to Refuse Without Forgetting How to Think

Opening — Why this matters now: Reasoning models are getting smarter—and more dangerous. As reinforcement learning (RL) pushes large reasoning models (LRMs) to produce longer, more structured chains of thought, a quiet regression has emerged: safety erodes as reasoning improves. The industry has started calling this the “safety tax.” The uncomfortable truth is simple. When models are trained to optimize for problem-solving rewards, they often learn that compliance beats caution. Existing safety guardrails, carefully installed during earlier alignment stages, are slowly bypassed rather than obeyed. ...

February 3, 2026 · 4 min · Zelina

Grading the Doctor: How Health-SCORE Scales Judgment in Medical AI

Opening — Why this matters now: Healthcare LLMs have a credibility problem. Not because they cannot answer medical questions—many now ace exam-style benchmarks—but because real medicine is not a multiple-choice test. It is open-ended, contextual, uncertain, and unforgiving. In that setting, how a model reasons, hedges, and escalates matters as much as what it says. ...

February 2, 2026 · 4 min · Zelina

MemCtrl: Teaching Small Models What *Not* to Remember

Opening — Why this matters now: Embodied AI is hitting a very human bottleneck: memory. Not storage capacity, not retrieval speed—but judgment. Modern multimodal large language models (MLLMs) can see, reason, and act, yet when deployed as embodied agents they tend to remember too much, too indiscriminately. Every frame, every reflection, every redundant angle piles into context until the agent drowns in its own experience. ...

January 31, 2026 · 4 min · Zelina

When Rewards Learn to Think: Teaching Agents *How* They’re Wrong

Opening — Why this matters now: Agentic AI is having a credibility problem. Not because agents can’t browse, code, or call tools—but because we still train them like they’re taking a final exam with no partial credit. Most agentic reinforcement learning (RL) systems reward outcomes, not process. Either the agent finishes the task correctly, or it doesn’t. For short problems, that’s tolerable. For long-horizon, tool-heavy reasoning tasks, it’s catastrophic. A single late-stage mistake erases an otherwise competent trajectory. ...

January 30, 2026 · 4 min · Zelina

Learning to Discover at Test Time: When Search Learns Back

Opening — Why this matters now: For years, scaling AI meant one thing: train bigger models, then freeze them. At inference time, we search harder, sample wider, and hope brute force compensates for epistemic limits. This paper challenges that orthodoxy. It argues—quietly but decisively—that search alone is no longer enough. If discovery problems are truly out-of-distribution, then the model must be allowed to learn at test time. ...

January 24, 2026 · 3 min · Zelina

When LLMs Get a Laptop: Why Sandboxes Might Be the Real AGI Benchmark

Opening — Why this matters now: LLMs have learned to speak fluently. They can reason passably. Some can even plan. Yet most of them remain trapped in an oddly artificial condition: they think, but they cannot act. The latest wave of agent frameworks tries to fix this with tools, APIs, and carefully curated workflows. But a quieter idea is emerging underneath the hype—one that looks less like prompt engineering and more like infrastructure. ...

January 24, 2026 · 4 min · Zelina

Skeletons in the Proof Closet: When Lean Provers Need Hints, Not More Compute

Opening — Why this matters now: Neural theorem proving has entered its industrial phase. With reinforcement learning pipelines, synthetic data factories, and search budgets that would make a chess engine blush, models like DeepSeek‑Prover‑V1.5 are widely assumed to have internalized everything there is to know about formal proof structure. This paper politely disagrees. Under tight inference budgets—no massive tree search, no thousand-sample Hail Mary—the author shows that simple, almost embarrassingly old‑fashioned structural hints still deliver large gains. Not new models. Not more data. Just better scaffolding. ...

January 23, 2026 · 4 min · Zelina

Your Agent Remembers—But Can It Forget?

Opening — Why this matters now: As reinforcement learning (RL) systems inch closer to real-world deployment—robotics, autonomous navigation, decision automation—a quiet assumption keeps slipping through the cracks: that remembering is enough. Store the past, replay it when needed, act accordingly. Clean. Efficient. Wrong. The paper *Memory Retention Is Not Enough to Master Memory Tasks in Reinforcement Learning* dismantles this assumption with surgical precision. Its core claim is blunt: agents that merely retain information fail catastrophically once the world changes. Intelligence, it turns out, depends less on what you remember than on what you are able to forget. ...

January 22, 2026 · 4 min · Zelina