
Boxed In, Cashed Out: Deep Gradient Flows for Fast American Option Pricing

Pricing American options has long been the Achilles’ heel of quantitative finance, particularly in high dimensions. Unlike European options, American-style derivatives introduce a free-boundary problem due to their early exercise feature, making analytical solutions elusive and most numerical methods inefficient beyond two or three assets. But a recent paper by Jasper Rou introduces a promising technique — the Time Deep Gradient Flow (TDGF) — that sidesteps several of these barriers with a fresh take on deep learning design, optimization, and sampling. ...

July 27, 2025 · 4 min · Zelina

Divide, Route, and Conquer: DriftMoE's Smart Take on Concept Drift

Concept drift is the curse of the real world. Models trained on yesterday’s data go stale in hours, sometimes minutes. Traditional remedies like Adaptive Random Forests (ARF) respond reactively, detecting change and resetting trees. But what if the system could instead continuously learn where to look, dynamically routing each new sample to the right expert — no drift detector required? That’s exactly the ambition behind DriftMoE, a Mixture-of-Experts framework purpose-built for online learning in non-stationary environments. Co-developed by researchers at Ireland’s CeADAR, this architecture marries lightweight neural routing with classic Hoeffding trees, achieving expert specialization as a byproduct of learning — not as a bolted-on correction. ...

July 27, 2025 · 3 min · Zelina

Factor Factory: How LLMs Are Reinventing Sparse Portfolio Optimization

In quantitative finance, sparse portfolio optimization is a famously unforgiving problem. Selecting the top m assets from a universe of n under budget and risk constraints is NP-hard, highly sensitive to hyperparameters, and often brittle in volatile markets. Traditional solutions—from greedy algorithms to convex relaxations—either crumble under market shifts or produce opaque, overfitted outputs. But what if we reframed the problem entirely? Enter EFS (Evolutionary Factor Search), a radical new framework that turns sparse portfolio construction into an LLM-guided ranking game. Instead of laboriously tuning machine learning models or relying on rigid heuristics, EFS lets large language models generate, evolve, and select alpha factors—and it does so in a way that is not just automated, but interpretable, adaptive, and surprisingly effective. ...

July 27, 2025 · 3 min · Zelina

From Sobol to Sinkhorn: A Transport Revolution in Sensitivity Analysis

In a world where climate models span continents and economic simulators evolve across decades, it’s no longer enough to ask which variable affects the output the most. We must now ask: how does each input reshape the entire output distribution? The R package gsaot brings a mathematically rigorous answer, harnessing the power of Optimal Transport (OT) to provide a fresh take on sensitivity analysis. ...

July 27, 2025 · 3 min · Zelina

One Model to Train Them All: How OmniTrain Rethinks Open-Vocabulary Detection

Open-vocabulary object detection — the holy grail of AI systems that can recognize anything in the wild — has been plagued by fragmented training strategies. Models like OWL-ViT and Grounding DINO stitch together multiple learning objectives across different stages. This Frankensteinian complexity not only slows progress, but also creates systems that are brittle, compute-hungry, and hard to scale. Enter OmniTrain: a refreshingly elegant, end-to-end training recipe that unifies detection, grounding, and image-text alignment into a single pass. No pretraining-finetuning sandwich. No separate heads. Just a streamlined pipeline that can scale to hundreds of thousands of concepts — and outperform specialized systems while doing so. ...

July 27, 2025 · 3 min · Zelina

Speed Bumps and Swells: Rethinking Optimal Trading with Stochastic Volatility

When markets move, they do so with both sudden shocks and slow drifts. Yet for years, much of optimal trading theory has treated volatility as if it were static—a constant backdrop rather than a dynamic participant in the game. The recent paper by Chan, Sircar, and Zimbidis decisively challenges that assumption by embedding multiscale stochastic volatility into a classical dynamic trading model. The result? A more nuanced, volatility-aware framework that adapts trading speed and target positions based on the fast and slow undulations of risk. ...

July 27, 2025 · 4 min · Zelina

Stacking Alpha: How HARLF's Three-Tier Reinforcement Learner Beats the Market

The idea of merging language models and financial algorithms isn’t new — but HARLF takes it a step further by embedding them in a hierarchical reinforcement learning (HRL) framework that actually delivers. With a stunning 26% annualized ROI and a Sharpe ratio of 1.2, this isn’t just another LLM-meets-finance paper. It’s a blueprint for how sentiment and structure can be synergistically harnessed.

From FinBERT to Fortune: Integrating Text with Tickers

Most financial LLM pipelines stop at score generation: classify sentiment and call it a signal. But HARLF builds a full sentiment pipeline using FinBERT, generating monthly sentiment scores from scraped Google News articles for each of 14 assets. These scores aren’t just inputs — they form a complete observation vector that includes: ...

July 27, 2025 · 3 min · Zelina

The Sentiment Edge: How FinDPO Trains LLMs to Think Like Traders

Financial markets don’t reward the loudest opinions. They reward the most timely, well-calibrated ones. FinDPO, a new framework by researchers from Imperial College London, takes this lesson seriously. It proposes a bold shift in how we train language models to read market sentiment. Rather than relying on traditional supervised fine-tuning (SFT), FinDPO uses Direct Preference Optimization (DPO) to align a large language model with how a human trader might weigh sentiment signals in context. And the results are not just academic — they translate into real money. ...

July 27, 2025 · 3 min · Zelina

When Learning Goes Rogue: Fixing RL Biases in Economic Simulations

Reinforcement Learning (RL) has become a seductive tool for economists seeking to simulate adaptive behavior in dynamic, uncertain environments. But when it comes to modeling firms in equilibrium labor markets, this computational marriage reveals some serious incompatibilities. In a recent paper, Zhang and Chen expose two critical mismatches that emerge when standard RL is naively applied to simulate economic models — and offer a principled fix that merges the best of RL and economic theory. ...

July 27, 2025 · 4 min · Zelina

Can You Spot the Bot? Why Detectability, Not Deception, Is the New AI Frontier

In an age where generative models can ace SATs, write novels, and mimic empathy, it’s no longer enough to ask, “Can an AI fool us?” The better question is: Can we still detect it when it does? That’s the premise behind the Dual Turing Test, a sharp reframing of the classic imitation game. Rather than rewarding AI for successfully pretending to be human, this framework challenges judges to reliably detect AI—even when its responses meet strict quality standards. ...

July 26, 2025 · 4 min · Zelina