The Sink That Remembers: Solving LLM Memorization Without Forgetting Everything Else

When large language models (LLMs) memorize repeated content during training—be it a phone number, a copyrighted paragraph, or a user’s personal story—the implications go beyond benign repetition. They touch the very core of AI safety, privacy, and trust. And yet, removing this memorized content after training has proven to be a devil’s bargain: anything you subtract tends to weaken the model’s overall capabilities. In their recent ICML 2025 paper, Ghosal et al. propose an elegant reframing of this problem. Rather than performing painful post-hoc surgery on a trained model, they suggest we prepare the model from the outset to isolate memorization into removable compartments—which they call Memorization Sinks (MemSinks). ...

July 15, 2025 · 4 min · Zelina