
Structure Matters: Externalities and the Hidden Logic of GNN Decisions
When explaining predictions made by Graph Neural Networks (GNNs), most methods ask: Which nodes or features mattered most? But what if this question misses the real driver of decisions — not the nodes themselves, but how they interact? That’s the bet behind GraphEXT, a novel explainability framework that reframes GNN attribution through the lens of externalities — a concept borrowed from economics. Developed by Wu, Hao, and Fan (2025), GraphEXT goes beyond traditional feature- or edge-based attributions. Instead, it models how structural interactions among nodes — the very thing GNNs are designed to exploit — influence predictions. ...
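To make the game-theoretic framing concrete, here is a minimal Monte Carlo sketch of Shapley-style node attribution: nodes join a coalition in random order, and each node is credited with its average marginal effect on the prediction. This is generic Shapley estimation, not GraphEXT's externality-based game; the toy edge-counting "model", the EDGES graph, and the n_samples budget are all hypothetical stand-ins for a trained GNN and a real sampling scheme.

```python
import random

# Toy stand-in for a GNN: score a subgraph by counting edges among
# the included nodes, a crude proxy for structure-driven predictions.
# (Hypothetical; a real pipeline would query a trained GNN here.)
EDGES = {(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)}
NODES = [0, 1, 2, 3]

def model_score(subset):
    """Prediction for the subgraph induced by `subset` of nodes."""
    s = set(subset)
    return sum(1.0 for u, v in EDGES if u in s and v in s)

def shapley_attribution(nodes, score, n_samples=2000, seed=0):
    """Monte Carlo Shapley values: average marginal contribution of
    each node over random coalition orderings. A sketch of the
    attribution idea only, not GraphEXT's exact formulation."""
    rng = random.Random(seed)
    phi = {v: 0.0 for v in nodes}
    for _ in range(n_samples):
        order = nodes[:]
        rng.shuffle(order)
        coalition, prev = [], score([])
        for v in order:
            coalition.append(v)
            cur = score(coalition)
            phi[v] += cur - prev  # marginal contribution of v
            prev = cur
    return {v: phi[v] / n_samples for v in nodes}

print(shapley_attribution(NODES, model_score))
# High-degree nodes 1 and 3 earn the largest attributions (~1.5),
# since each edge's credit is split between its two endpoints.
```

The part this sketch deliberately omits is the externality: in GraphEXT's framing, when a node enters or leaves a coalition, the structure among the remaining nodes changes too, so a node's "worth" depends on interactions it enables or disrupts, not just on its own presence.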