
The Sentiment Edge: How FinDPO Trains LLMs to Think Like Traders

Financial markets don’t reward the loudest opinions. They reward the most timely, well-calibrated ones. FinDPO, a new framework by researchers from Imperial College London, takes this lesson seriously. It proposes a bold shift in how we train language models to read market sentiment. Rather than relying on traditional supervised fine-tuning (SFT), FinDPO uses Direct Preference Optimization (DPO) to align a large language model with how a human trader might weigh sentiment signals in context. And the results are not just academic — they translate into real money. ...
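The excerpt contrasts DPO with supervised fine-tuning; for orientation, here is a minimal sketch of the generic DPO objective (Rafailov et al., 2023) that FinDPO-style alignment builds on. The function name, tensor shapes, and β value are illustrative assumptions, not details taken from the FinDPO paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Generic DPO objective: push the policy to prefer the 'chosen' completion
    over the 'rejected' one, relative to a frozen reference model.

    Inputs are summed log-probabilities of each completion, shape (batch,).
    """
    chosen_margin = policy_chosen_logps - ref_chosen_logps        # implicit reward of preferred label
    rejected_margin = policy_rejected_logps - ref_rejected_logps  # implicit reward of dispreferred label
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Toy usage with random (negative) log-probs, just to show the shapes involved.
batch = [-torch.rand(8) for _ in range(4)]
print(dpo_loss(*batch))
```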

July 27, 2025 · 3 min · Zelina

Trading on Memory: Why Markov Models Miss the Signal

Classic finance assumes that the past doesn’t matter — only the present state of the market matters for decisions. But in a new paper from researchers at Imperial College and Oxford, a kernel-based framework for trading strategy design exposes how this assumption leads to suboptimal choices. Their insight: memory matters, and modern tools can finally make use of it. ...

July 20, 2025 · 3 min · Zelina

Residual Learning: How Reinforcement Learning Is Speeding Up Portfolio Math

What if the hardest part of finance isn’t prediction, but precision? Behind every real-time portfolio adjustment or split-second options quote lies a giant math problem: solving Ax = b, where A is large, sparse, and often very poorly behaved. In traditional finance pipelines, iterative solvers like GMRES or its flexible cousin FGMRES are tasked with solving these linear systems — be it from a Markowitz portfolio optimization or a discretized Black–Scholes PDE for option pricing. But when the matrix A is ill-conditioned (which it often is), convergence slows to a crawl. Preconditioning helps, but tuning a preconditioner’s parameters is more art than science — until now. ...
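To make the setup concrete, here is a minimal SciPy sketch of preconditioned GMRES on a stand-in sparse system; the matrix, sizes, and ILU settings are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.sparse import random as sparse_random, eye
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# A random sparse, diagonally boosted matrix standing in for a Markowitz or
# discretized Black-Scholes operator.
n = 2000
A = sparse_random(n, n, density=1e-3, format="csc") + 10 * eye(n, format="csc")
b = np.ones(n)

# Incomplete-LU preconditioner; drop_tol and fill_factor are exactly the kind
# of knobs that are "more art than science" to tune.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M, restart=50)
print("converged" if info == 0 else f"stopped early (info={info})")
```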

July 6, 2025 · 3 min · Zelina

Wall Street’s New Intern: How LLMs Are Redefining Financial Intelligence

The financial industry has always prided itself on cold precision. For decades, quantitative models and spreadsheets dominated boardrooms and trading desks. But that orthodoxy is now under siege. Not from another statistical breakthrough, but from something surprisingly human-like: Large Language Models (LLMs). Recent research shows a dramatic shift in how AI—particularly LLMs like GPT-4 and LLaMA—is being integrated across financial workflows. Far from just summarizing news or answering earnings call questions, LLMs are now organizing entire investment pipelines, fine-tuning themselves on proprietary data, and even collaborating as autonomous financial agents. A recent survey by Mahdavi et al. (2025) categorized over 70 state-of-the-art systems into four distinct architectural frameworks, offering us a lens through which to assess the future of financial AI. ...

July 4, 2025 · 4 min · Zelina

Sharpe Thinking: How Neural Nets Redraw the Frontier of Portfolio Optimization

The search for the elusive optimal portfolio has always been a balancing act between signal and noise. Covariance matrices, central to risk estimation, are notoriously fragile in high dimensions. Classical fixes like shrinkage, spectral filtering, or factor models have all offered partial answers. But a new paper by Bongiorno, Manolakis, and Mantegna proposes something different: a rotation-invariant, end-to-end neural network that learns the inverse covariance matrix directly from historical returns — and does so better than the best analytical techniques, even under realistic trading constraints. ...
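For context on the classical shrinkage baseline the excerpt mentions, here is a minimal sketch of a Ledoit-Wolf precision estimate feeding minimum-variance weights, using scikit-learn; the data, dimensions, and weight formula are illustrative assumptions, not the paper's benchmark.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Synthetic returns standing in for historical data: 250 days, 100 assets.
rng = np.random.default_rng(0)
returns = rng.normal(scale=0.01, size=(250, 100))

# Shrinkage estimate of the covariance and its inverse (the precision matrix).
lw = LedoitWolf().fit(returns)
precision = lw.precision_

# Unconstrained minimum-variance weights: w = P·1 / (1ᵀ·P·1), summing to 1.
ones = np.ones(precision.shape[0])
w = precision @ ones / (ones @ precision @ ones)
print(w.sum())
```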

July 3, 2025 · 5 min · Zelina

How Ultra-Large Context Windows Challenge RAG

Gemini 2.5 and the Rise of the 2 Million Token Era

In March 2025, Google introduced Gemini 2.5 Pro with a 2 million token context window, marking a major milestone in the capabilities of language models. While this remains an experimental and high-cost frontier, it opens the door to new possibilities. To put this in perspective (approximate values, depending on tokenizer):

📖 The entire King James Bible: ~785,000 tokens
🎭 All of Shakespeare’s plays: ~900,000 tokens
📚 A full college textbook: ~500,000–800,000 tokens

This means Gemini 2.5 could, in theory, process multiple entire books or large document repositories in one go, though the compute and memory costs involved currently limit practical deployment. ...
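The token figures above are tokenizer-dependent; a quick way to produce such estimates, using OpenAI's tiktoken encoding as a stand-in (Gemini uses its own tokenizer, so counts will differ), might look like the sketch below. The file path is hypothetical.

```python
import tiktoken

# cl100k_base is an assumption; it is not the tokenizer Gemini uses.
enc = tiktoken.get_encoding("cl100k_base")

# "king_james_bible.txt" is a hypothetical local file standing in for any large document.
with open("king_james_bible.txt", encoding="utf-8") as f:
    text = f.read()

print(f"~{len(enc.encode(text)):,} tokens")
```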

March 29, 2025 · 3 min · Cognaptus Insights