
Dial M—for Markets: Brain‑Scanning and Steering LLMs for Finance

TL;DR: A new paper shows how to insert a sparse, interpretable layer into an LLM to expose plain‑English concepts (e.g., sentiment, risk, timing) and steer them like dials without retraining. In finance news prediction, these interpretable features outperform final‑layer embeddings and reveal that sentiment, market/technical cues, and timing drive most short‑horizon alpha. Steering also debiases optimism: nudging the model’s sentiment dial negative lifts the Sharpe ratio. Why this matters (and what’s new): Finance teams have loved LLMs’ throughput but hated their opacity. This paper demonstrates a lightweight path to transparent performance: ...
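
As a rough illustration of the steering idea (a sketch, not the paper's implementation), a forward hook that adds a scaled concept direction to a transformer block's hidden states acts like a "dial". The `sentiment_direction` vector and `steer_strength` below are hypothetical stand-ins for the paper's learned sparse, interpretable features.

```python
# Minimal sketch of concept steering via a forward hook on a PyTorch
# transformer block. The concept direction is random here for illustration;
# in the paper's setting it would come from the sparse, interpretable layer.
import torch
import torch.nn as nn

d_model = 64
block = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)

sentiment_direction = torch.randn(d_model)   # hypothetical "sentiment" feature direction
sentiment_direction /= sentiment_direction.norm()
steer_strength = -2.0                        # negative nudge, as in the optimism-debiasing experiment

def steer_hook(module, inputs, output):
    # Add the scaled concept direction to every token's hidden state.
    return output + steer_strength * sentiment_direction

handle = block.register_forward_hook(steer_hook)
hidden = torch.randn(1, 8, d_model)          # (batch, tokens, d_model)
steered = block(hidden)
handle.remove()
print(steered.shape)                         # torch.Size([1, 8, 64])
```

Because the perturbation is applied at inference time, the dial can be turned without retraining the underlying model.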

September 1, 2025 · 4 min · Zelina

How Sparse is Your Thought? Cracking the Inner Logic of Chain-of-Thought Prompts

Chain-of-Thought (CoT) prompting has become a go-to technique for improving multi-step reasoning in large language models (LLMs). But is it really helping models think better—or just encouraging them to bluff more convincingly? A new paper from Leiden University, “How does Chain of Thought Think?”, delivers a mechanistic deep dive into this question. By combining sparse autoencoders (SAEs) with activation patching, the authors dissect whether CoT actually changes what a model internally computes—or merely helps its outputs look better. ...
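
For readers unfamiliar with activation patching, the following sketch (a toy stand-in, not the authors' setup) shows the basic move: cache an activation from a "clean" run, splice it into a "corrupted" run, and check how much of the output it restores. The tiny MLP here is a hypothetical substitute for one transformer layer.

```python
# Minimal sketch of activation patching on a toy PyTorch model.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

clean_x, corrupted_x = torch.randn(1, 16), torch.randn(1, 16)

cache = {}
def save_hook(module, inputs, output):
    cache["act"] = output.detach()           # store the clean activation

def patch_hook(module, inputs, output):
    return cache["act"]                      # overwrite with the cached clean activation

layer = model[0]
h = layer.register_forward_hook(save_hook)
clean_out = model(clean_x)                   # clean run, activation cached
h.remove()

h = layer.register_forward_hook(patch_hook)
patched_out = model(corrupted_x)             # corrupted input, clean activation patched in
h.remove()

print(clean_out.item(), model(corrupted_x).item(), patched_out.item())
```

If the patched output moves back toward the clean output, that layer's activation carries the computation being probed; the paper applies this logic to features recovered by the SAEs.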

August 1, 2025 · 3 min · Zelina