Structure Matters: Externalities and the Hidden Logic of GNN Decisions

When explaining predictions made by Graph Neural Networks (GNNs), most methods ask: Which nodes or features mattered most? But what if this question misses the real driver of decisions — not the nodes themselves, but how they interact? That’s the bet behind GraphEXT, a novel explainability framework that reframes GNN attribution through the lens of externalities — a concept borrowed from economics. Developed by Wu, Hao, and Fan (2025), GraphEXT goes beyond traditional feature- or edge-based attributions. Instead, it models how structural interactions among nodes — the very thing GNNs are designed to exploit — influence predictions. ...

July 26, 2025 · 3 min · Zelina
Good Bot, Bad Reward: Fixing Feedback Loops in Vision-Language Reasoning

1. A Student Who Cracked the Code — But Not the Meaning Imagine a student who aces every test by memorizing the positions of correct answers on multiple-choice sheets. He scores high, earns accolades, and passes every exam — but understands none of the material. His reward system is misaligned: success depends not on learning, but on exploiting test mechanics. Now, replace the student with an AI agent navigating a simulated room, guided by language and images. This is the scenario that today’s leading research on reinforcement learning for vision-and-language agents (RLVR) is grappling with. ...

June 13, 2025 · 5 min · Zelina