Words + Returns: Teaching Embeddings to Invest in Themes

How do you turn a fuzzy idea like “AI + chips” into a living, breathing portfolio that adapts as markets move? A new framework called THEME proposes a crisp answer: train stock embeddings that understand both the meaning of a theme and the momentum around it, then retrieve candidates that are simultaneously on‑theme and investment‑suitable. Unlike static ETF lists or naive keyword screens, THEME learns a domain‑tuned embedding space in two steps: first, align companies to the language of themes; second, nudge those semantics with a lightweight temporal adapter that “listens” to recent returns. The result is a retrieval engine that feeds a dynamic portfolio constructor — and in backtests, it beats strong LLM/embedding baselines and even the average thematic ETF on risk‑adjusted returns. ...

August 26, 2025 · 5 min · Zelina

Three’s Company: When LLMs Argue Their Way to Alpha

TL;DR A role‑based, debate‑driven LLM system — AlphaAgents — coordinates three specialist agents (fundamental, sentiment, valuation) to screen equities, reach consensus, and build a simple equal‑weight portfolio. In a four‑month backtest starting 2024‑02‑01 on 15 tech names, the risk‑neutral multi‑agent portfolio outperformed the benchmark and single‑agent baselines; risk‑averse variants underperformed in a bull run (as expected). The real innovation isn’t the short backtest — it’s the explainable process: constrained tools per role, structured debate, and explicit risk‑tolerance prompts. ...

August 18, 2025 · 5 min · Zelina

The Sentiment Edge: How FinDPO Trains LLMs to Think Like Traders

Financial markets don’t reward the loudest opinions; they reward the most timely, well‑calibrated ones. FinDPO, a new framework from researchers at Imperial College London, takes this lesson seriously. It proposes a bold shift in how we train language models to read market sentiment: rather than relying on traditional supervised fine‑tuning (SFT), FinDPO uses Direct Preference Optimization (DPO) to align a large language model with how a human trader might weigh sentiment signals in context. And the results are not just academic — they translate into real money. ...

July 27, 2025 · 3 min · Zelina