Black Boxes, White Coats: AI Epidemiology and the Art of Governing Without Understanding
Opening: Why this matters now

We keep insisting that powerful AI systems must be understood before they can be trusted. That demand feels intuitively correct, and practically paralysing. Large language models now operate in medicine, finance, law, and public administration. Yet interpretability tools (SHAP, LIME, mechanistic circuit tracing) remain brittle, expensive, and increasingly disconnected from real-world deployment. The gap between how models actually behave and how we attempt to explain them is widening, not closing. ...