When Models Learn to Forget on Purpose
Why this matters now

Large language models are getting uncomfortably good at remembering things they were never supposed to remember. Training data leaks, verbatim recall, copyright disputes, and privacy risks are no longer edge cases; they are board-level concerns. The paper under review tackles this problem head-on, not by adding more guardrails at inference time, but by advancing a more heretical idea: what if models should be trained to forget? ...