When Models Remember Too Much: Memorization Sinks in Large Language Models
Opening: Why this matters now

Large Language Models are getting bigger, richer, and, quietly, better at remembering things they were never supposed to. Not reasoning. Not generalizing. Remembering. The paper behind this article introduces an uncomfortable but clarifying concept: memorization sinks. These are not bugs. They are structural attractors inside the training dynamics of LLMs, places where information goes in but never really comes back out as generalizable knowledge. ...