GAVEL: When AI Safety Grows a Rulebook
Opening — Why this matters now

AI safety is drifting toward an uncomfortable paradox. The more capable large language models become, the less transparent their internal decision-making appears — and the more brittle our existing safeguards feel. Text-based moderation catches what models say, not what they are doing. Activation-based safety promised to fix this, but in practice it has inherited many of the same flaws: coarse labels, opaque triggers, and painful retraining cycles. ...