
More Isn’t Smarter: Why Agent Diversity Beats Agent Count

Opening — Why this matters now

Multi-agent LLM systems have quietly become the industry’s favorite way to brute-force intelligence. When one model struggles, the instinct is simple: add more agents. Vote harder. Debate longer. Spend more tokens. And yet, performance curves keep telling the same unflattering story: early gains, fast saturation, wasted compute. This paper asks the uncomfortable question most agent frameworks politely ignore: why does scaling stall so quickly—and what actually moves the needle once it does? The answer, it turns out, has less to do with how many agents you run, and more to do with how different they truly are. ...

February 4, 2026 · 4 min · Zelina

Auditing the Illusion of Forgetting: When Unlearning Isn’t Enough

Opening — Why this matters now

“Right to be forgotten” has quietly become one of the most dangerous phrases in AI governance. On paper, it sounds clean: remove a user’s data, comply with regulation, move on. In practice, modern large language models (LLMs) have turned forgetting into a performance art. Models stop saying what they were trained on—but continue remembering it internally. ...

January 22, 2026 · 4 min · Zelina

MI-ZO: Teaching Vision-Language Models Where to Look

Opening — Why this matters now

Vision-Language Models (VLMs) are everywhere—judging images, narrating videos, and increasingly acting as reasoning engines layered atop perception. But there is a quiet embarrassment in the room: most state-of-the-art VLMs are trained almost entirely on 2D data, then expected to reason about 3D worlds as if depth, occlusion, and viewpoint were minor details. ...

January 2, 2026 · 4 min · Zelina