Stuck on Repeat: Why LLMs Reinforce Their Own Bad Ideas
Opening: Why This Matters Now

Large language models now behave like overeager junior analysts: they think harder, write longer, and try very hard to sound more certain than they should. Iterative reasoning techniques such as Chain-of-Thought, Debate, and the new wave of inference-time scaling promise deeper logic and better truth-seeking. Yet the empirical reality is more awkward: the more these models "reason," the more they entrench their initial assumptions. The result is polished but stubborn outputs that deviate from Bayesian rationality. ...