
When Agents Loop: Geometry, Drift, and the Hidden Physics of LLM Behavior

Why this matters now: agentic AI systems are everywhere—self-refining copilots, multi-step reasoning chains, autonomous research bots quietly talking to themselves. Yet beneath the productivity demos lurks an unanswered question: what actually happens when an LLM talks to itself repeatedly? Does meaning stabilize, or does it slowly dissolve into semantic noise? The paper “Dynamics of Agentic Loops in Large Language Models” offers an unusually rigorous answer. Instead of hand-waving about “drift” or “stability,” it treats agentic loops as discrete dynamical systems and analyzes them geometrically in embedding space. The result is less sci‑fi mysticism, more applied mathematics—and that’s a compliment. ...
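
The core setup is easy to prototype. Below is a minimal sketch, not the paper’s code: treat one self-refinement step as a map f, iterate it, and watch the trajectory in embedding space. The embedding model and the toy step function are assumptions for illustration; in practice the step would be a real LLM call.

```python
# Minimal sketch (assumed setup, not the paper's code): an agentic loop as a
# discrete dynamical system x_{t+1} = f(x_t), observed in embedding space.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed embedder

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def drift_from_seed(seed: str, step_fn, steps: int = 10) -> list[float]:
    """Apply step_fn repeatedly; return the cosine distance of each iterate
    from the seed (rising values suggest drift, a plateau suggests stability)."""
    states = [seed]
    for _ in range(steps):
        states.append(step_fn(states[-1]))
    vecs = embedder.encode(states, normalize_embeddings=True)
    return [1.0 - float(np.dot(vecs[0], v)) for v in vecs[1:]]

# Toy stand-in for an LLM self-refinement call, purely for illustration.
toy_step = lambda text: text + " (refined)"
print(drift_from_seed("Summarize the risks of agentic loops.", toy_step))
```

Plotting that distance sequence is the simplest geometric lens on a loop: convergence to a fixed point, orbiting, or unbounded drift all look different.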

December 14, 2025 · 4 min · Zelina

Words + Returns: Teaching Embeddings to Invest in Themes

How do you turn a fuzzy idea like “AI + chips” into a living, breathing portfolio that adapts as markets move? A new framework called THEME proposes a crisp answer: train stock embeddings that understand both the meaning of a theme and the momentum around it, then retrieve candidates that are simultaneously on‑theme and investment‑suitable. Unlike static ETF lists or naive keyword screens, THEME learns a domain‑tuned embedding space in two steps: first, align companies to the language of themes; second, nudge those semantics with a lightweight temporal adapter that “listens” to recent returns. The result is a retrieval engine that feeds a dynamic portfolio constructor—and in backtests, it beats strong LLM/embedding baselines and even average thematic ETFs on risk‑adjusted returns. ...
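
To make the two-step idea concrete, here is a minimal sketch under assumed shapes and names (`TemporalAdapter` and `retrieve` are hypothetical, not the paper’s API): a theme-aligned text embedding for each stock is nudged by a small network over its recent returns, and candidates are then ranked by similarity to the theme.

```python
# Minimal sketch of THEME's two-step idea (assumed design, not the authors' code):
# step 1 yields theme-aligned text embeddings; step 2 nudges them with returns.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAdapter(nn.Module):
    """Hypothetical lightweight adapter: shift a semantic embedding
    using a window of recent returns."""
    def __init__(self, emb_dim: int, window: int = 20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window, 64), nn.ReLU(), nn.Linear(64, emb_dim)
        )

    def forward(self, text_emb: torch.Tensor, returns: torch.Tensor):
        # Residual nudge keeps the result close to the semantic embedding.
        return F.normalize(text_emb + self.net(returns), dim=-1)

def retrieve(theme_emb, stock_embs, returns, adapter, k=10):
    """Rank stocks by similarity of adapted embeddings to the theme text."""
    adapted = adapter(stock_embs, returns)          # (N, D)
    scores = adapted @ F.normalize(theme_emb, dim=-1)  # (N,)
    return torch.topk(scores, k).indices
```

The residual form is one plausible way to encode “on‑theme but momentum‑aware” retrieval: the adapter can only nudge a stock within the neighborhood of its semantic anchor, not rewrite what the company means.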

August 26, 2025 · 5 min · Zelina