From Features to Actions: Why Agentic AI Needs a New Explainability Playbook
Opening — Why this matters now

Explainable AI has always promised clarity. For years, that promise was delivered, at least partially, through feature attributions, saliency maps, and tidy bar charts explaining why a model predicted this instead of that. Then AI stopped predicting and started acting. Tool-using agents now book flights, browse the web, recover from errors, and occasionally fail in slow, complicated, deeply inconvenient ways. When that happens, nobody asks which token mattered most. They ask: where did the agent go wrong, and how did it get there? ...