Attention with Doubt: Teaching Transformers When *Not* to Trust Themselves
Opening: Why this matters now

Modern transformers are confident. Too confident. In high-stakes deployments (question answering, medical triage, compliance screening), this confidence routinely outruns correctness. The problem is not accuracy; it is miscalibration. Models say “I’m sure” when they shouldn’t. Most fixes arrive late in the pipeline: temperature scaling, Platt scaling, confidence rescaling after the model has already reasoned itself into a corner. What if uncertainty could intervene earlier, during reasoning rather than after the verdict? ...
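For concreteness, here is a minimal sketch of the post-hoc baseline this piece argues against: temperature scaling, where a single scalar T is fitted on held-out data and used to soften the model’s logits (the standard recipe from Guo et al., 2017). The function names and the PyTorch framing are my own illustration, not an API from any particular library.

```python
import torch
import torch.nn.functional as F

def temperature_scale(logits: torch.Tensor, temperature: float) -> torch.Tensor:
    # Dividing logits by T > 1 flattens the softmax, shrinking confidence
    # without changing the argmax -- accuracy is untouched.
    return torch.softmax(logits / temperature, dim=-1)

def fit_temperature(val_logits: torch.Tensor, val_labels: torch.Tensor) -> float:
    # Fit a single scalar T by minimizing NLL on a held-out split.
    # (Illustrative sketch; names like fit_temperature are mine.)
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=100)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# Usage sketch: calibrate once on validation logits, then rescale at inference.
# T = fit_temperature(val_logits, val_labels)
# probs = temperature_scale(test_logits, T)
```

Note what this baseline cannot do: dividing logits by T never changes the argmax, so accuracy is fixed and only confidence shrinks. The adjustment happens strictly after the verdict, which is exactly the limitation the rest of this piece takes aim at.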