
Darwin, But Make It Neural: When Networks Learn to Mutate Themselves

Opening — Why this matters now

Modern AI has become very good at climbing hills—provided the hill stays put and remains differentiable. But as soon as the terrain shifts, gradients stumble. Controllers break. Policies freeze. Re-training becomes ritualistic rather than intelligent. This paper asks a quietly radical question: what if adaptation itself lived inside the network? Not as a scheduler, not as a meta-optimizer bolted on top, but as part of the neural machinery that gets inherited, mutated, and selected. ...

December 21, 2025 · 3 min · Zelina

Scalpels Not Sledgehammers: A New Era of Precision Editing for LLMs

Most LLM editing approaches operate like sledgehammers—bluntly rewriting model weights and praying generalization holds. But a new method, Latent Knowledge Scalpel (LKS), dares to be surgical. Rather than changing the model itself, it targets how the model thinks—rewriting entity representations in the hidden layers, like swapping memories without touching the brain.

From Entities to Knowledge Blocks

The authors begin with a provocative observation: the internal representation (embedding) of an entity like “Alfred Nobel” doesn’t just encode a name, but a structured, meaningful knowledge block (KB). These latent vectors reflect factual associations like birthplace or occupation, and remarkably, they retain semantic and syntactic structure. For instance, swapping Nobel’s KB with that of “Shelley” shifts the model’s predicted birthplace from Sweden to England—even though the prompt wasn’t changed. ...
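The KB-swap idea can be illustrated with a deliberately tiny sketch. Everything below is hypothetical scaffolding, not the LKS implementation: a real system would hook the hidden states of an actual LLM, whereas here a lookup table stands in for the latent space and a trivial read-out stands in for the model's prediction head.

```python
# Toy illustration of the knowledge-block (KB) swap idea.
# All names and structures here are hypothetical stand-ins:
# `KB` mimics per-entity latent vectors, and `predict_birthplace`
# mimics a model's read-out over those hidden representations.

KB = {
    "Alfred Nobel": [1.0, 0.0],  # pretend this latent direction encodes "Sweden"
    "Shelley":      [0.0, 1.0],  # pretend this latent direction encodes "England"
}

def predict_birthplace(entity_vec):
    """Toy read-out: answer follows whichever latent direction dominates."""
    return "Sweden" if entity_vec[0] > entity_vec[1] else "England"

# Unedited model: the prompt mentions Nobel, so Nobel's KB is read.
before = predict_birthplace(KB["Alfred Nobel"])   # -> "Sweden"

# LKS-style edit: same prompt and same weights, but Nobel's KB is
# replaced in the latent space with Shelley's.
edited = dict(KB, **{"Alfred Nobel": KB["Shelley"]})
after = predict_birthplace(edited["Alfred Nobel"])  # -> "England"

print(before, "->", after)
```

The point of the sketch is the contrast: the prediction flips without touching any "weights" (the `predict_birthplace` function), only the entity's latent representation.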

August 7, 2025 · 4 min · Zelina