From Sobol to Sinkhorn: A Transport Revolution in Sensitivity Analysis

In a world where climate models span continents and economic simulators evolve across decades, it’s no longer enough to ask which variable affects the output the most. We must now ask: how does each input reshape the entire output distribution? The R package gsaot brings a mathematically rigorous answer, using Optimal Transport (OT) to measure how far each input shifts the entire output distribution. ...
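The core construction is easy to sketch: compare the model’s unconditional output distribution with the output distribution obtained after restricting one input, and score the input by the transport distance between the two. Below is a minimal Python sketch of that idea (gsaot itself is an R package; the toy model, slicing scheme, and averaging here are illustrative, not the package’s API):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Toy model: y depends strongly on x1, weakly on x2.
def model(x1, x2):
    return x1**2 + 0.1 * x2

n = 10_000
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y = model(x1, x2)

# OT-flavoured sensitivity: how far does conditioning an input on a
# small slice of its range move the whole output distribution?
def ot_sensitivity(x, y, n_bins=20):
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    dists = [wasserstein_distance(y, y[(x >= lo) & (x <= hi)])
             for lo, hi in zip(edges[:-1], edges[1:])]
    return np.mean(dists)  # average transport cost across conditioning slices

print("x1:", ot_sensitivity(x1, y))  # large: x1 reshapes the output distribution
print("x2:", ot_sensitivity(x2, y))  # small: x2 barely moves it
```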

July 27, 2025 · 3 min · Zelina

One Model to Train Them All: How OmniTrain Rethinks Open-Vocabulary Detection

Open-vocabulary object detection — the holy grail of AI systems that can recognize anything in the wild — has been plagued by fragmented training strategies. Models like OWL-ViT and Grounding DINO stitch together multiple learning objectives across different stages. This Frankensteinian complexity not only slows progress but also creates systems that are brittle, compute-hungry, and hard to scale. Enter OmniTrain: a refreshingly elegant, end-to-end training recipe that unifies detection, grounding, and image-text alignment into a single pass. No pretraining-finetuning sandwich. No separate heads. Just a streamlined pipeline that can scale to hundreds of thousands of concepts — and outperform specialized systems while doing so. ...
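To see what “single pass” means in practice, here is a toy PyTorch sketch of joint multi-objective training. Every module, loss, and weight below is invented for illustration; this shows the general pattern (one shared forward pass, one optimizer step, all objectives together), not OmniTrain’s actual architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for a shared backbone and three task heads.
backbone = nn.Linear(32, 16)
det_head, ground_head, align_head = nn.Linear(16, 4), nn.Linear(16, 8), nn.Linear(16, 8)
opt = torch.optim.AdamW(
    [*backbone.parameters(), *det_head.parameters(),
     *ground_head.parameters(), *align_head.parameters()], lr=1e-4)

images = torch.randn(8, 32)   # stand-in image features
boxes = torch.randn(8, 4)     # stand-in box regression targets
text = torch.randn(8, 8)      # stand-in text embeddings

feats = backbone(images)      # one shared forward pass, no staged training
loss = (F.mse_loss(det_head(feats), boxes)                       # detection
        + F.mse_loss(ground_head(feats), text)                   # grounding
        + (1 - F.cosine_similarity(align_head(feats), text).mean()))  # alignment
opt.zero_grad(); loss.backward(); opt.step()
print(f"joint loss: {loss.item():.3f}")
```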

July 27, 2025 · 3 min · Zelina

Speed Bumps and Swells: Rethinking Optimal Trading with Stochastic Volatility

When markets move, they do so with both sudden shocks and slow drifts. Yet for years, much of optimal trading theory has treated volatility as if it were static—a constant backdrop rather than a dynamic participant in the game. The recent paper by Chan, Sircar, and Zimbidis decisively challenges that assumption by embedding multiscale stochastic volatility into a classical dynamic trading model. The result? A more nuanced, volatility-aware framework that adapts trading speed and target positions based on the fast and slow undulations of risk. ...
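A toy discretization makes the “volatility-aware” idea concrete: the trading rate toward a target position is damped whenever a fast-moving volatility factor runs above its slow-moving level. The sketch below is my own illustration of that behavior, not the paper’s model or calibration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 250, 1 / 250

# Toy multiscale volatility: fast OU fluctuations around a slowly moving level.
slow = 0.2 + 0.05 * np.sin(np.linspace(0, np.pi, n))   # slow scale
vol = np.empty(n); vol[0] = slow[0]
for t in range(1, n):
    dW = rng.normal(scale=np.sqrt(dt))
    vol[t] = vol[t-1] + 50 * (slow[t] - vol[t-1]) * dt + 0.3 * dW  # fast scale

# Volatility-aware execution: speed toward the target position is reduced
# when current volatility is elevated relative to its slow level.
target, base_speed = 100.0, 5.0
pos = np.zeros(n)
for t in range(1, n):
    urgency = base_speed / (1.0 + (vol[t] / slow[t]) ** 2)
    pos[t] = pos[t-1] + urgency * (target - pos[t-1]) * dt

print(f"final position: {pos[-1]:.1f} of {target:.0f}")
```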

July 27, 2025 · 4 min · Zelina

Stacking Alpha: How HARLF's Three-Tier Reinforcement Learner Beats the Market

The idea of merging language models and financial algorithms isn’t new — but HARLF takes it a step further by embedding them in a hierarchical reinforcement learning (HRL) framework that actually delivers. With a stunning 26% annualized ROI and a Sharpe ratio of 1.2, this isn’t just another LLM-meets-finance paper. It’s a blueprint for how sentiment and structure can be synergistically harnessed.

From FinBERT to Fortune: Integrating Text with Tickers

Most financial LLM pipelines stop at score generation: classify sentiment and call it a signal. But HARLF builds a full sentiment pipeline using FinBERT, generating monthly sentiment scores from scraped Google News articles for each of 14 assets. These scores aren’t just inputs — they form a complete observation vector that includes: ...
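For the first stage, here is a minimal sketch of FinBERT scoring with Hugging Face transformers. ProsusAI/finbert is a public FinBERT checkpoint; the signed-score aggregation into a single monthly value is an assumption for illustration, not necessarily HARLF’s exact recipe:

```python
from transformers import pipeline

# ProsusAI/finbert is a public FinBERT checkpoint; whether HARLF uses this
# exact model and this aggregation is an assumption made for illustration.
finbert = pipeline("text-classification", model="ProsusAI/finbert")

headlines = [
    "Gold rallies as investors seek safe havens",
    "Tech stocks slide on disappointing earnings",
]

# Map labels to signed scores and average into one monthly sentiment value.
sign = {"positive": 1.0, "neutral": 0.0, "negative": -1.0}
scores = [sign[r["label"].lower()] * r["score"] for r in finbert(headlines)]
monthly_sentiment = sum(scores) / len(scores)
print(f"monthly sentiment score: {monthly_sentiment:+.3f}")
```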

July 27, 2025 · 3 min · Zelina

The Sentiment Edge: How FinDPO Trains LLMs to Think Like Traders

Financial markets don’t reward the loudest opinions. They reward the most timely, well-calibrated ones. FinDPO, a new framework by researchers from Imperial College London, takes this lesson seriously. It proposes a bold shift in how we train language models to read market sentiment. Rather than relying on traditional supervised fine-tuning (SFT), FinDPO uses Direct Preference Optimization (DPO) to align a large language model with how a human trader might weigh sentiment signals in context. And the results are not just academic — they translate into real money. ...
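The DPO objective behind this shift is compact: maximize the gap between the policy’s and a frozen reference model’s log-likelihood ratios for preferred versus rejected completions. A minimal PyTorch sketch of the standard DPO loss (the general recipe, not FinDPO’s exact configuration):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l,
             ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss from summed token log-probs of each completion.

    *_w: log p(chosen | prompt); *_l: log p(rejected | prompt).
    The reference model is frozen; beta controls deviation from it.
    """
    margin = (policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin).mean()

# Toy batch of summed log-probabilities (in practice, computed per sequence).
pw = torch.tensor([-12.3, -9.8]); pl = torch.tensor([-11.1, -10.4])
rw = torch.tensor([-12.0, -10.2]); rl = torch.tensor([-10.9, -10.1])
print(dpo_loss(pw, pl, rw, rl))
```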

July 27, 2025 · 3 min · Zelina

When Learning Goes Rogue: Fixing RL Biases in Economic Simulations

Reinforcement Learning (RL) has become a seductive tool for economists seeking to simulate adaptive behavior in dynamic, uncertain environments. But when it comes to modeling firms in equilibrium labor markets, this computational marriage reveals some serious incompatibilities. In a recent paper, Zhang and Chen expose two critical mismatches that emerge when standard RL is naively applied to simulate economic models — and offer a principled fix that merges the best of RL and economic theory. ...

July 27, 2025 · 4 min · Zelina

Can You Spot the Bot? Why Detectability, Not Deception, Is the New AI Frontier

In an age where generative models can ace SATs, write novels, and mimic empathy, it’s no longer enough to ask, “Can an AI fool us?” The better question is: Can we still detect it when it does? That’s the premise behind the Dual Turing Test, a sharp reframing of the classic imitation game. Rather than rewarding AI for successfully pretending to be human, this framework challenges judges to reliably detect AI—even when its responses meet strict quality standards. ...

July 26, 2025 · 4 min · Zelina

From Graph to Grit: Diagnosing Warehouse Bottlenecks with LLMs and Knowledge Graphs

In the age of Digital Twins and hyper-automated warehouses, simulations are everywhere—but insights are not. Discrete Event Simulations (DES) generate rich, micro-level data on logistics flows, delays, and resource utilization, yet interpreting these data remains painfully manual, fragile, and siloed. This paper from Quantiphi introduces a compelling solution: transforming raw simulation outputs into a Knowledge Graph (KG) and querying it via an LLM agent that mimics human investigative reasoning. It’s a shift from spreadsheet-style summaries to an interactive AI assistant that explains why something is slow, where the bottleneck is, and what needs attention. ...
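The pattern, stripped to its essentials: load simulation events into a graph whose nodes are stations and whose edges carry timing attributes, then answer “where is the bottleneck?” with a graph query instead of a spreadsheet scan. A minimal networkx sketch (the schema is illustrative; the paper’s actual KG and agent tooling will differ):

```python
import networkx as nx

# Tiny illustrative KG from simulated events: stations linked by material
# flow, with observed queueing delay (minutes) on each edge.
kg = nx.DiGraph()
kg.add_edge("inbound_dock", "pick_station", avg_delay=4.0)
kg.add_edge("pick_station", "pack_station", avg_delay=21.5)
kg.add_edge("pack_station", "outbound_dock", avg_delay=3.2)

# The kind of query an LLM agent would issue on the planner's behalf:
# "where is the bottleneck?" -> the edge with the worst delay.
u, v, data = max(kg.edges(data=True), key=lambda e: e[2]["avg_delay"])
print(f"bottleneck: {u} -> {v} ({data['avg_delay']} min avg queueing delay)")
```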

July 26, 2025 · 3 min · Zelina

Planners, Meet Your Smart Sidekick

Imagine asking, “Why wasn’t Order A scheduled for production yesterday?” and getting not just an answer, but a causal breakdown, an alternative plan, and a visual comparison — all without involving your operations research (OR) consultant. That’s exactly what SMARTAPS delivers. Built by Huawei researchers, SMARTAPS is a tool-augmented LLM interface for interacting with Advanced Planning Systems (APS) using natural language. It doesn’t try to replace optimization solvers — it simply makes them accessible. In doing so, it redefines how planners interact with complex decision-making models. ...
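Tool augmentation here means exposing solver capabilities as callable functions the LLM can invoke. A minimal sketch of what such a tool registry might look like (all names are hypothetical, invented for illustration, not SMARTAPS’s actual interface):

```python
# Hypothetical tool specs: the LLM translates a planner's question into one
# of these calls; the APS optimization solver does the real work.
TOOLS = [
    {
        "name": "explain_schedule",
        "description": "Return the binding constraints that kept an order "
                       "out of a given day's production schedule.",
        "parameters": {"order_id": "string", "date": "string (YYYY-MM-DD)"},
    },
    {
        "name": "what_if_reschedule",
        "description": "Re-solve the plan with a proposed change and return "
                       "a diff against the current schedule.",
        "parameters": {"order_id": "string", "new_due_date": "string"},
    },
]

def dispatch(call: dict):
    # In a real system this routes to the solver's API and formats the result.
    print(f"calling {call['name']} with {call['args']}")

dispatch({"name": "explain_schedule",
          "args": {"order_id": "A", "date": "2025-07-25"}})
```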

July 26, 2025 · 3 min · Zelina

Steering by the Token: How GRAINS Turns Attribution into Alignment

Fine-tuning is the hammer; steering is the scalpel. In an era where models are increasingly opaque and high-stakes, we need tools that guide behavior without overhauling the entire architecture. That’s precisely what GRAINS (Gradient-based Attribution for Inference-Time Steering) delivers: a powerful, interpretable, and modular way to shift the behavior of LLMs and VLMs by leveraging the most fundamental unit of influence—the token.

The Problem with Global Steering

Traditional inference-time steering approaches often rely on global intervention vectors: a blunt, one-size-fits-all shift in hidden activations derived from paired desirable and undesirable examples. But these methods are insensitive to which specific tokens caused bad behavior. It’s like adjusting a recipe because the dish tastes bad—without checking if the salt or the sugar was at fault. ...
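That global baseline is easy to write down: one steering vector, computed as the mean activation difference between desirable and undesirable examples, added uniformly at every token position. A minimal PyTorch sketch of the baseline (GRAINS replaces the uniform shift with token-level, attribution-weighted interventions):

```python
import torch

hidden = 64  # toy hidden size

# Mean activations at one layer over desirable vs. undesirable examples
# (in practice these come from forward hooks on a real model).
acts_good = torch.randn(100, hidden) + 0.5
acts_bad = torch.randn(100, hidden) - 0.5

# Global steering vector: one shift for every token, every context.
steer = acts_good.mean(0) - acts_bad.mean(0)

def steer_hidden(h, alpha=0.8):
    """Apply the same shift to all token positions (the blunt baseline).

    GRAINS would instead weight the intervention per token, using
    gradient-based attribution to find which tokens drove the behavior.
    """
    return h + alpha * steer  # broadcasts over (seq_len, hidden)

h = torch.randn(10, hidden)   # hidden states for a 10-token sequence
print(steer_hidden(h).shape)  # torch.Size([10, 64])
```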

July 26, 2025 · 3 min · Zelina