
The Problem with Problems: Why LLMs Still Don’t Know What’s Interesting

In an age when AI can outscore most humans at the International Mathematical Olympiad, a subtler question has emerged: can machines care about what they solve? The new study A Matter of Interest (Mishra et al., 2025) probes this psychological fault line between mechanical brilliance and genuine curiosity. If future AI partners are to co-invent mathematics, not just compute it, they must first learn what humans deem worth inventing. ...

November 12, 2025 · 4 min · Zelina

Inside Out: How LLMs Are Learning to Feel (and Misfeel) Like Us

When Pixar's Inside Out dramatized the mind as a control room of core emotions, it didn't imagine that language models might soon build a similar architecture on their own. Yet that is exactly what a provocative new study suggests: large language models (LLMs), without explicit supervision, develop hierarchical structures of emotion that mirror human psychological models such as Shaver's emotion wheel. And the larger the model, the more nuanced its emotional understanding becomes. ...

July 16, 2025 · 4 min · Zelina