Anchors Aweigh? Why Small LLMs Refuse to Flip Their Own Semantics
Opening: Why This Matters Now

Every executive wants LLMs that are obedient, flexible, and capable of doing whatever the prompt says. Reality, unfortunately, is less compliant. A provocative new study (Kumar, 2025) shows that small-to-mid-scale LLMs (1–12B parameters) simply refuse to overwrite certain pre-trained semantic meanings, even when in-context demonstrations explicitly tell them to. ...
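To make the phenomenon concrete, here is a minimal sketch of a flipped-label probe in the spirit of such override studies. The model name, prompt template, and label scheme are illustrative assumptions, not the paper's exact protocol: the demonstrations deliberately attach the opposite sentiment word to each review, and we check whether the model follows the demonstrations or snaps back to its pre-trained meaning of "positive" and "negative".

```python
# Flipped-label in-context probe (illustrative sketch, not the paper's setup).
# Demonstrations relabel sentiment with the OPPOSITE word: "negative" for praise,
# "positive" for complaints. A model that follows the demonstrations should answer
# "negative" for the clearly positive query; a model anchored to pre-trained
# semantics will answer "positive" instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.2-1B"  # assumed stand-in for a 1-12B model

prompt = (
    "Review: This movie was wonderful from start to finish.\nLabel: negative\n\n"
    "Review: A dull, tedious waste of two hours.\nLabel: positive\n\n"
    "Review: The acting was superb and the plot gripping.\nLabel: negative\n\n"
    "Review: I loved every minute of it.\nLabel:"
)

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=3, do_sample=False)

# Decode only the newly generated tokens, i.e. the model's chosen label.
answer = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
# "negative" -> the model overrode its priors and followed the flipped demos.
# "positive" -> the model stayed anchored to pre-trained semantics.
print(answer.strip())
```

Running a probe like this across many queries, and comparing flip rates between, say, a 1B and a 12B model, is one straightforward way to quantify how stubbornly a model's semantics resist in-context override.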