
Inside Out: How LLMs Are Learning to Feel (and Misfeel) Like Us

When Pixar’s Inside Out dramatized the mind as a control room of core emotions, it didn’t imagine that language models might soon build a similar architecture on their own. Yet that is exactly what a provocative new study suggests: large language models (LLMs), without explicit supervision, develop hierarchical emotion structures that mirror human psychological models such as Shaver’s emotion wheel. And the larger the model, the more nuanced its emotional understanding becomes. ...

July 16, 2025 · 4 min · Zelina

Anchored Thinking: Mapping the Inner Compass of Reasoning LLMs

In the world of large language models (LLMs), answers often emerge from an intricate internal dialogue. But what if we could locate the few sentences within that stream of thought that disproportionately steer the outcome, like anchors stabilizing a drifting ship? That is exactly what Paul Bogdan, Uzay Macar, Neel Nanda, and Arthur Conmy set out to do in their new work, “Thought Anchors: Which LLM Reasoning Steps Matter?”. The study presents an ambitious trifecta of methods for tracing the steps that truly influence LLM reasoning. ...

June 25, 2025 · 3 min · Zelina