Think Before You Sink: Streaming Hallucinations in Long Reasoning
Opening: Why this matters now

Large language models have learned to think out loud. Chain-of-thought (CoT) reasoning has become the default approach for math, planning, and multi-step decision tasks. The industry applauded: more transparency, better answers, apparent interpretability. Then reality intervened. Despite elegant reasoning traces, models still reach incorrect conclusions, sometimes confidently, sometimes catastrophically. Worse, the mistakes are no longer obvious. They creep in quietly, spread across steps, and survive superficial self-corrections. What we call "hallucination" has grown up. And our detection methods have not. ...