
Photon or Not: When AI Learns to See in 3D Without Burning Your GPU

Opening — Why this matters now

There is a quiet paradox in modern AI: the models that see the most… understand the least efficiently. Nowhere is this more obvious than in medical imaging. CT and MRI scans are inherently 3D, dense, and unforgiving. Feed them into large multimodal models, and you either compress reality—or exhaust your GPU budget trying not to. ...

March 29, 2026 · 4 min · Zelina

Poisoned Answers, Polished Pipelines: When RAG Learns to Lie on Cue

Opening — Why this matters now

Retrieval-Augmented Generation (RAG) was supposed to fix the most embarrassing flaw of large language models: confident nonsense. Give the model access to fresh data, ground its answers in reality, and suddenly hallucinations become… manageable. Unfortunately, reality is also writable. As enterprises rush to deploy RAG systems—customer support copilots, internal knowledge assistants, financial research tools—they are quietly expanding their attack surface. Not just the model, but the data pipeline. Not just prompts, but retrieval. ...

March 29, 2026 · 4 min · Zelina

The Latent Cost of Thinking: When LLM Reasoning Becomes a Liability

Opening — Why this matters now

The AI industry has developed a curious obsession: making models “think harder.” Chain-of-thought prompting, reasoning traces, multi-step planning—these are now treated as hallmarks of intelligence. Benchmarks reward it. Researchers optimize for it. Startups sell it. But here’s the inconvenient question: what if more thinking doesn’t always mean better outcomes? ...

March 29, 2026 · 4 min · Zelina

The Model That Forgot Itself: Why LLMs Drift Without Knowing

Opening — Why this matters now

We’ve spent the last two years obsessing over whether AI says the right thing. A more uncomfortable question is emerging: does it even believe what it says? As enterprises move from chatbots to agentic systems, the requirement shifts from correctness to consistency over time. A trading agent, a compliance assistant, or a workflow orchestrator cannot quietly change its objective mid-process. Humans call that unreliability. In finance, we call it risk. ...

March 29, 2026 · 5 min · Zelina

When Models Remember Too Much: The Hidden Economy of Memorization in LLM Training

Opening — Why this matters now

Large language models have an uncomfortable habit: they remember things they were never explicitly asked to remember. Not in the polite, human sense of “learning patterns,” but in the more literal sense of memorizing chunks of training data. For years, this was treated as a side effect—occasionally embarrassing, sometimes risky, but mostly tolerated. Now it’s becoming economically relevant. Training costs are rising, data pipelines are bloated, and enterprises are quietly asking a sharper question: ...

March 29, 2026 · 4 min · Zelina

ARC-AGI-3 — When AI Stops Guessing and Starts Thinking

Opening — Why this matters now

For the past two years, the AI narrative has been deceptively simple: models are getting better, reasoning is improving, and agents are just around the corner. Then comes ARC-AGI-3 — and quietly dismantles that optimism. Despite dramatic advances in large reasoning models (LRMs), frontier systems score below 1%, while humans solve 100% of tasks on first exposure. Not worse. Not slightly behind. Orders of magnitude off. ...

March 28, 2026 · 4 min · Zelina

Drive My Way: When Autonomous Cars Start Having Personalities

Opening — Why this matters now

Autonomous driving has quietly solved the easy problem. Vehicles can already perceive, plan, and act with increasing reliability. The industry’s remaining challenge is more uncomfortable: humans don’t want the same driver. Some prefer cautious, almost apologetic braking. Others want assertive lane changes that shave minutes off a commute. The current generation of systems—neatly packaged into “eco,” “comfort,” or “sport”—pretends this spectrum is discrete. It isn’t. ...

March 28, 2026 · 5 min · Zelina

Driving by Words: When LLMs Take the Wheel (Literally)

Opening — Why this matters now

Autonomous driving has spent the last decade mastering one thing: imitation. Observe human drivers, learn their behavior, replicate it at scale. It works—until it doesn’t. Because imitation, by definition, cannot handle intent. The next frontier isn’t just driving well. It’s driving on command. Recent advances in vision-language-action (VLA) models suggest that cars can now “understand” instructions like “overtake the car ahead before the light turns red”. But most systems still treat language as commentary—not control. ...

March 28, 2026 · 5 min · Zelina

Harnessing the Harness: When AI Stops Being a Model Problem

Opening — Why this matters now

For the past two years, the AI industry has been obsessed with a single lever: better models. Bigger context windows, more parameters, smarter reasoning. The implicit belief was simple—upgrade the model, and everything else improves. That assumption is quietly breaking. Recent evidence suggests that two systems using the same foundation model can produce wildly different outcomes depending on how they are orchestrated. Not prompted. Not fine-tuned. Orchestrated. ...

March 28, 2026 · 5 min · Zelina

Packing Memory, Not Problems: How Short Clips Teach AI to Think Long in Video

Opening — Why this matters now

The industry has quietly hit a wall. Short-form video generation? Impressive. Five seconds of cinematic motion? Routine. But ask today’s models for two minutes of coherent storytelling, and things begin to unravel—literally. Characters mutate, scenes drift, and memory explodes. The problem isn’t creativity. It’s memory economics. Modern video models don’t fail because they lack intelligence. They fail because they cannot afford to remember. And like most systems under memory pressure, they start making poor decisions. ...

March 28, 2026 · 5 min · Zelina