
Delegating to the Almost-Aligned: When Misaligned AI Is Still the Rational Choice

Opening — Why this matters now The AI alignment debate has a familiar rhythm: align the values first, deploy later. Sensible, reassuring—and increasingly detached from reality. In practice, we are already delegating consequential decisions to systems we do not fully understand, let alone perfectly align. Trading algorithms rebalance portfolios, recommendation engines steer attention, and autonomous agents negotiate, schedule, and filter on our behalf. The real question is no longer “Is the AI aligned?” but “Is it aligned enough to justify delegation, given what it can do better than us?” ...

December 18, 2025 · 4 min · Zelina

Long Thoughts, Short Bills: Distilling Mathematical Reasoning at Scale

Opening — Why this matters now Large language models can solve math problems. The more interesting question in 2025 is whether they can learn how to reason, at scale, across contexts that are long, messy, and computationally expensive. Most math datasets answer the first question. Nemotron-Math answers the second — and does so with a surprisingly pragmatic eye on cost. ...

December 18, 2025 · 4 min · Zelina

Mind-Reading Without Telepathy: Predictive Concept Decoders

Opening — Why this matters now For years, AI interpretability has promised transparency while quietly delivering annotations, probes, and post-hoc stories that feel explanatory but often fail the only test that matters: can they predict what the model will actually do next? As large language models become agents—capable of long-horizon planning, policy evasion, and strategic compliance—interpretability that merely describes activations after the fact is no longer enough. What we need instead is interpretability that anticipates behavior. That is the ambition behind Predictive Concept Decoders (PCDs). ...

December 18, 2025 · 5 min · Zelina

Stepwise Think-Critique: Teaching LLMs to Doubt Themselves (Productively)

Opening — Why this matters now Large language models have learned how to think out loud. What they still struggle with is knowing when that thinking is wrong — while it is happening. In high‑stakes domains like mathematics, finance, or policy automation, delayed error detection is not a feature; it is a liability. Most modern reasoning pipelines still follow an awkward split: first generate reasoning, then verify it — often with a separate model. Humans do not work this way. We reason and judge simultaneously. This paper asks a simple but uncomfortable question: what if LLMs were trained to do the same? ...

December 18, 2025 · 4 min · Zelina

When Tokens Remember: Graphing the Ghosts in LLM Reasoning

Opening — Why this matters now Large language models don’t think—but they do accumulate influence. And that accumulation is exactly where most explainability methods quietly give up. As LLMs move from single-shot text generators into multi-step reasoners, agents, and decision-making systems, we increasingly care why an answer emerged—not just what token attended to what prompt word. Yet most attribution tools still behave as if each generation step lives in isolation. That assumption is no longer just naïve; it is actively misleading. ...

December 18, 2025 · 4 min · Zelina

Model First, Think Later: Why LLMs Fail Before They Reason

Opening — Why this matters now As LLM agents graduate from clever chatbots to decision‑making systems, their failures are becoming less amusing and more expensive. We are no longer talking about wrong trivia answers; we are talking about broken schedules, invalid plans, unsafe workflows, and agents confidently violating constraints they were never told—explicitly—not to break. ...

December 17, 2025 · 4 min · Zelina

Reasoning Loops, Not Bigger Brains

Opening — Why this matters now For the past two years, AI progress has been narrated as a story of scale: more parameters, more data, more compute. Yet the ARC-AGI leaderboard keeps delivering an inconvenient counterexample. Small, scratch-trained models—no web-scale pretraining, no trillion-token diet—are routinely humiliating far larger systems on abstract reasoning tasks. This paper asks the uncomfortable question: where is the reasoning actually coming from? ...

December 17, 2025 · 3 min · Zelina

When Attention Learns to Breathe: Sparse Transformers for Sustainable Medical AI

Opening — Why this matters now Healthcare AI has quietly run into a contradiction. We want models that are richer—multi-modal, context-aware, clinically nuanced—yet we increasingly deploy them in environments that are poorer: fewer samples, missing modalities, limited compute, and growing scrutiny over energy use. Transformers, the industry’s favorite hammer, are powerful but notoriously wasteful. In medicine, that waste is no longer academic; it is operational. ...

December 17, 2025 · 4 min · Zelina

When Medical AI Stops Guessing and Starts Asking

Opening — Why this matters now Medical AI has become very good at answering questions. Unfortunately, medicine rarely works that way. Pathology, oncology, and clinical decision-making are not single-query problems. They are investigative processes: observe, hypothesize, cross-check, revise, and only then conclude. Yet most medical AI benchmarks still reward models for producing one-shot answers — neat, confident, and often misleading. This mismatch is no longer academic. As multimodal models edge closer to clinical workflows, the cost of shallow reasoning becomes operational, regulatory, and ethical. ...

December 16, 2025 · 4 min · Zelina

When Precedent Gets Nuanced: Why Legal AI Needs Dimensions, Not Just Factors

Opening — Why this matters now Legal AI has a habit of oversimplifying judgment. In the race to automate legal reasoning, we have learned how to encode rules, then factors, and eventually hierarchies of factors. But something stubborn keeps leaking through the abstractions: strength. Not whether a reason exists — but how strongly it exists. ...

December 16, 2025 · 4 min · Zelina