
Mirror, Signal, Manoeuvre: Why Privileged Self‑Access (Not Vibes) Defines AI Introspection

TL;DR: Most demos of “LLM introspection” are actually vibe checks on outputs, not privileged access to internal state. If a third party with the same budget can do as well as the model “looking inward,” that’s not introspection; it’s ordinary evaluation. Two quick experiments show temperature self‑reports flip with trivial prompt changes and offer no edge over cross‑model prediction. The bar for introspection should be higher, and business users should demand it. ...
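As a rough illustration of that bar, the toy simulation below compares a noisy “self‑report” of sampling temperature against a matched‑budget outsider that estimates temperature purely from output diversity. Everything here is an assumption for illustration: the function names, the noise model, and the diversity‑based estimator are invented for this sketch, not taken from the post’s actual experiments.

```python
import random
import statistics

# Toy sketch of the matched-budget bar for introspection: a self-report
# only counts as privileged self-access if it beats a third party given
# the same evidence. All names and noise models here are assumptions.

def generate(temp: float, n: int = 50) -> list[int]:
    """Stand-in for sampling: output diversity grows with temperature."""
    vocab = max(1, int(temp * 10))           # toy: hotter => wider vocab
    return [random.randrange(vocab + 1) for _ in range(n)]

def self_report(true_temp: float) -> float:
    """Toy 'introspective' report: the true value plus noise, mimicking
    the unreliable self-reports the post describes."""
    return true_temp + random.gauss(0, 0.4)

def third_party_estimate(outputs: list[int]) -> float:
    """Matched-budget outsider: infer temperature from output diversity."""
    distinct = len(set(outputs))
    return (distinct - 1) / 10               # invert the toy vocab mapping

def run(trials: int = 1000) -> None:
    self_err, outsider_err = [], []
    for _ in range(trials):
        t = random.uniform(0.1, 1.0)
        outputs = generate(t)
        self_err.append(abs(self_report(t) - t))
        outsider_err.append(abs(third_party_estimate(outputs) - t))
    print(f"self-report MAE : {statistics.mean(self_err):.3f}")
    print(f"third-party MAE : {statistics.mean(outsider_err):.3f}")
    # Under this bar, introspection requires the self-report MAE to be
    # clearly lower; in this toy setup it usually is not.

if __name__ == "__main__":
    run()
```

The point of the sketch is the comparison, not the numbers: whenever the outsider matches or beats the self‑report, calling the latter “introspection” adds nothing over ordinary evaluation.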

August 23, 2025 · 5 min · Zelina

Inside Out: How LLMs Are Learning to Feel (and Misfeel) Like Us

When Pixar’s Inside Out dramatized the mind as a control room of core emotions, it didn’t imagine that language models might soon build a similar architecture—on their own. But that’s exactly what a provocative new study suggests: large language models (LLMs), without explicit supervision, develop hierarchical structures of emotions that mirror human psychological models like Shaver’s emotion wheel. And the larger the model, the more nuanced its emotional understanding becomes. ...

July 16, 2025 · 4 min · Zelina

Branching Out, Beating Down: Why Trees Still Outgrow Deep Roots in Quant AI

In the age of Transformers and neural nets that write poetry, it’s tempting to assume deep learning dominates every corner of AI. But in quantitative investing, the roots tell a different story. A recent paper, QuantBench: Benchmarking AI Methods for Quantitative Investment, delivers a grounded reminder: tree-based models still outperform deep learning (DL) methods across key financial prediction tasks. ...

April 30, 2025 · 7 min