When Models Forget on Purpose: The Economics of Memorization Control in LLMs
Opening: Why this matters now

The current generation of large language models has an awkward habit: these models remember too much, and not always the right things. In an era when proprietary data, copyrighted content, and sensitive information increasingly flow into training pipelines, memorization is no longer a technical footnote; it is a liability. ...