
When the Paper Talks Back: Lost in Translation, Rejected by Design

Opening — Why this matters now
Academic peer review is buckling under scale. ICML alone now processes close to ten thousand submissions a year. In response, the temptation to insert LLMs somewhere into the review pipeline—screening, triage, or scoring—is understandable. Efficiency, after all, is a persuasive argument. Unfortunately, efficiency is also how subtle failures scale. This paper asks an uncomfortable but necessary question: what happens when the paper being reviewed quietly talks back to the model reviewing it? Not loudly. Not visibly. Just enough to tip the scales. ...

December 31, 2025 · 4 min · Zelina

MIRAGE-VC: Teaching LLMs to Think Like VCs (Without Drowning in Graphs)

Opening — Why this matters now
Venture capital has always been a strange mix of narrative craft and network math. Partners talk about vision, conviction, and pattern recognition, but behind the scenes, outcomes are brutally skewed: most startups fail quietly, a few dominate returns, and almost everything depends on who backs whom, and in what order. ...

December 30, 2025 · 4 min · Zelina

Regrets, Graphs, and the Price of Privacy: Federated Causal Discovery Grows Up

Opening — Why this matters now
Federated learning promised a simple trade: keep data local, share intelligence globally. In practice, causal discovery in federated environments has been living off a polite fiction — that all clients live in the same causal universe. Hospitals, labs, or business units, we are told, differ only in sample size, not in how reality behaves. ...

December 30, 2025 · 4 min · Zelina

Replay the Losses, Win the Game: When Failed Instructions Become Your Best Training Data

Opening — Why this matters now
Reinforcement learning for large language models has a dirty secret: most of the time, nothing happens. When tasks demand perfect instruction adherence—formatting, style, length, logical constraints—the model either nails everything or gets a zero. Binary rewards feel principled, but in practice they starve learning. Aggregated rewards try to help, but they blur causality: different mistakes, same score, same gradient. The result is slow, noisy, and often misdirected optimization. ...

December 30, 2025 · 4 min · Zelina

Think Wide, Then Think Hard: Forcing LLMs to Be Creative (On Purpose)

Opening — Why this matters now
Large language models are prolific. Unfortunately, they are also boring in a very specific way. Give an LLM a constrained task—generate a programming problem, write a quiz, design an exercise—and it will reliably produce something correct, polite, and eerily similar to everything it has produced before. Change the temperature, swap the model, even rotate personas, and the output still clusters around the same conceptual center. ...

December 30, 2025 · 4 min · Zelina

Many Minds, One Decision: Why Agentic AI Needs a Brain, Not Just Nerves

Opening — Why this matters now
Agentic AI has officially crossed the line from clever demo to operational liability. We are no longer talking about chatbots that occasionally hallucinate trivia. We are deploying autonomous systems that decide, act, and trigger downstream consequences—often across tools, APIs, and real-world processes. In that setting, the old comfort blanket of “the model said so” is no longer defensible. ...

December 29, 2025 · 3 min · Zelina

OrchestRA and the End of Linear Drug Discovery

Opening — Why this matters now
Drug discovery has a reputation problem. It is slow, expensive, and structurally brittle. Despite exponential growth in biomedical data and modeling tools, R&D productivity has declined for decades. The core reason is not lack of intelligence — human or artificial — but fragmentation. Biology, chemistry, and pharmacology still operate like loosely coupled departments passing half-finished work downstream. ...

December 29, 2025 · 3 min · Zelina

Pruning Is a Game, and Most Weights Lose

Opening — Why this matters now
Neural network pruning has always suffered from a mild identity crisis. We know how to prune—rank weights, cut the weakest, fine-tune the survivors—but we’ve been far less confident about why pruning works at all. The dominant narrative treats sparsity as a punishment imposed from outside: an auditor with a spreadsheet deciding which parameters deserve to live. ...

December 29, 2025 · 4 min · Zelina

SAGA, Not Sci‑Fi: When LLMs Start Doing Science

Opening — Why this matters now
For years, we have asked large language models to explain science. The paper behind SAGA asks a more uncomfortable question: what happens when we ask them to do science instead? Scientific discovery has always been bottlenecked not by ideas, but by coordination — between hypothesis generation, experiment design, evaluation, and iteration. SAGA reframes this entire loop as an agentic system problem. Not a chatbot. Not a single model. A laboratory of cooperating AI agents. ...

December 29, 2025 · 3 min · Zelina

SpatialBench: When AI Meets Messy Biology

Opening — Why this matters now
AI agents are having a good year. They write code, refactor repositories, debug production issues, and occasionally embarrass junior developers. Naturally, biology is next. Spatial transcriptomics—arguably one of the messiest, most insight-rich data domains in modern life science—looks like a perfect proving ground. If agents can reason over spatial biology data, the promise is compelling: fewer bottlenecks, faster discovery, and less dependence on scarce bioinformatics talent. ...

December 29, 2025 · 5 min · Zelina