FAME or Fortune? How Formal Explanations Finally Scale to Real Neural Networks
Opening: Why this matters now

For years, the promise of explainable AI has been more aspirational than real. We can ask neural networks what they predict, but asking why they made a decision often yields a collection of saliency maps, heuristics, and educated guesses. Useful? Yes. Reliable enough for safety-critical systems? Not quite. In industries like aviation, finance, or healthcare, explanations must come with guarantees, not visual metaphors. Regulators increasingly expect traceability and reasoning that can be verified rather than merely interpreted. ...