Simulate First, Invest Later: How Diffusion Models Are Reinventing Portfolio Optimization

What if you could simulate thousands of realistic futures for the market, all conditioned on what’s happening today—and then train an investment strategy on those futures? That’s the central idea behind a bold new approach to portfolio optimization that blends score-based diffusion models with reinforcement learning, and it’s showing results that beat classic benchmarks like the S&P 500 and traditional Markowitz portfolios. ...

July 20, 2025 · 4 min · Zelina

Fine-Tuning Isn’t Just Supervised: Why SFT Is Really RL in Disguise

In the arms race to align large language models (LLMs), supervised fine-tuning (SFT) and reinforcement learning (RL) are often painted as competing paradigms. SFT is praised for its stability and simplicity; RL is heralded for its theoretical soundness and alignment fidelity. But what if this dichotomy is an illusion? A recent preprint from Chongli Qin and Jost Tobias Springenberg makes a bold and elegant claim: SFT on curated data is not merely supervised learning—it is actually optimizing a lower bound on the RL objective. ...
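In a nutshell, here is a minimal sketch of the kind of bound at play (our notation and our reconstruction of the standard argument; the paper's exact statement may differ): filtering data by reward and then running plain maximum-likelihood SFT maximizes a Jensen lower bound on the log of the RL objective.

```latex
% Illustrative sketch, not the paper's exact derivation.
% \pi_\theta: model being fine-tuned; \pi_{ref}: policy that generated the data;
% R(y) >= 0: reward; q: reward-weighted data distribution.
\[
J(\theta) = \mathbb{E}_{y \sim \pi_\theta}[R(y)]
          = Z\,\mathbb{E}_{y \sim q}\!\left[\tfrac{\pi_\theta(y)}{\pi_{\mathrm{ref}}(y)}\right],
\qquad q(y) \propto \pi_{\mathrm{ref}}(y)\,R(y),
\quad Z = \mathbb{E}_{\pi_{\mathrm{ref}}}[R(y)],
\]
\[
\log J(\theta) \;\ge\; \log Z
  \;+\; \mathbb{E}_{y \sim q}[\log \pi_\theta(y)]
  \;-\; \mathbb{E}_{y \sim q}[\log \pi_{\mathrm{ref}}(y)]
\quad \text{(Jensen's inequality).}
\]
```

Only the middle term depends on the model parameters, and with a binary keep-or-discard reward it is exactly the SFT log-likelihood on the curated samples.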

July 18, 2025 · 4 min · Zelina

Train of Thought: How Long-Haul RL Unlocks LLM Reasoning Diversity

In the race to make Large Language Models (LLMs) reason like humans—or better—most researchers obsess over one thing: prompting. Chain-of-thought prompts, few-shot demos, scratchpads, tools. But a new study from NVIDIA suggests something even more fundamental: it’s not just how you prompt them—it’s how long you train them. Their paper, Scaling Up RL: Unlocking Diverse Reasoning in LLMs via Prolonged Training, explores how stretching reinforcement learning (RL) over time unlocks broader, more stable, and more versatile reasoning in LLMs. This isn’t just about incremental gains—it’s about escaping reasoning ruts. ...

July 18, 2025 · 3 min · Zelina

Memory Games: The Data Contamination Crisis in Reinforcement Learning

Reinforcement learning (RL) has recently emerged as the favored path to boost large language models’ reasoning abilities. The latest headline-grabbing claim? That even random or incorrect reward signals can help models like Qwen2.5 become better reasoners. But a new paper, “Reasoning or Memorization?”, cuts through the hype—and it does so with scalpel-like precision. It reveals that what we thought were signs of emergent reasoning in Qwen2.5 might, in fact, be a textbook case of data contamination. If true, the implications are serious: much of what we thought we knew about RL-driven reasoning gains could be little more than sophisticated memory retrieval. ...

July 15, 2025 · 3 min · Zelina

Reasoning at Scale: How DeepSeek Redefines the LLM Playbook

If GPT-4 was the apex of pretraining, DeepSeek might be the blueprint for what comes next. Released in two families—DeepSeek-V3 and DeepSeek-R1—this Chinese open-source model series isn’t just catching up to frontier LLMs. It’s reshaping the paradigm entirely. By sidestepping traditional supervised fine-tuning in favor of reinforcement learning (RL), and coupling it with memory-efficient innovations like Multi-head Latent Attention (MLA) and cost-efficient training techniques like FP8 mixed precision and fine-grained MoE, DeepSeek models demonstrate how strategic architectural bets can outpace brute-force scale. ...

July 15, 2025 · 3 min · Zelina

Backtrack to the Future: How ASTRO Teaches LLMs to Think Like Search Algorithms

A persistent mystery in the recent surge of reasoning-augmented LLMs—like OpenAI’s o1 or DeepSeek-R1—is whether these models learn to reason through post hoc reinforcement fine-tuning, or if they were already good at it to begin with. ASTRO offers a rare counter-example: a method that imbues non-reasoner LLMs (like vanilla Llama 3) with structured reasoning behavior from scratch. Rather than rely on emergent capabilities or distillation from models that already search well, ASTRO teaches LLMs to think like search algorithms themselves, using a hybrid approach combining Monte Carlo Tree Search (MCTS), procedure cloning, chain-of-thought generation, and reinforcement learning with verifiable rewards. ...

July 7, 2025 · 3 min · Zelina

Talk is Flight: How RALLY Bridges Language and Learning in UAV Swarms

When language models take flight, consensus becomes not just possible, but programmable. Modern UAV swarms face the daunting task of coordinating across partial observability, adversarial threats, and shifting missions. Traditional Multi-Agent Reinforcement Learning (MARL) offers adaptability, but falters when role differentiation or semantic reasoning is required. Large Language Models (LLMs), meanwhile, understand tasks and intent—but lack grounded, online learning. RALLY (Role-Adaptive LLM-Driven Yoked Navigation) is the first framework to successfully integrate these two paradigms, enabling real-time, role-aware collaboration in UAV swarms. ...

July 7, 2025 · 3 min · Zelina

Residual Learning: How Reinforcement Learning Is Speeding Up Portfolio Math

What if the hardest part of finance isn’t prediction, but precision? Behind every real-time portfolio adjustment or split-second options quote lies a giant math problem: solving Ax = b, where A is large, sparse, and often very poorly behaved. In traditional finance pipelines, iterative solvers like GMRES or its flexible cousin FGMRES are tasked with solving these linear systems — be it from a Markowitz portfolio optimization or a discretized Black–Scholes PDE for option pricing. But when the matrix A is ill-conditioned (which it often is), convergence slows to a crawl. Preconditioning helps, but tuning preconditioner parameters is more art than science — until now. ...
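As a concrete baseline, here is a minimal SciPy sketch of the solver pipeline the post refers to (our illustration, not the paper's RL method): ILU-preconditioned GMRES on a badly scaled sparse system, where drop_tol and fill_factor are exactly the hand-tuned knobs in question.

```python
# Minimal SciPy sketch (illustrative, not the paper's method): solving a sparse,
# badly scaled system Ax = b with GMRES, with and without an ILU preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Tridiagonal operator with coefficients spanning six orders of magnitude,
# a stand-in for an ill-conditioned Markowitz or discretized-PDE system.
d = np.logspace(0, 6, n)
A = sp.diags([d, -0.5 * d[:-1], -0.5 * d[1:]], [0, -1, 1], format="csc")
b = np.ones(n)

iters = {"plain": 0, "ilu": 0}
def counter(key):
    def cb(_res_norm):          # called once per inner GMRES iteration
        iters[key] += 1
    return cb

# Plain GMRES: convergence on ill-conditioned A can stall.
_, info_plain = spla.gmres(A, b, restart=50, maxiter=200,
                           callback=counter("plain"), callback_type="pr_norm")

# ILU-preconditioned GMRES: M approximates A^{-1}; drop_tol and fill_factor
# are the preconditioner parameters that are usually set by hand.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)
_, info_ilu = spla.gmres(A, b, M=M, restart=50, maxiter=200,
                         callback=counter("ilu"), callback_type="pr_norm")

print(f"plain GMRES:        {iters['plain']} iterations (info={info_plain})")
print(f"ILU-preconditioned: {iters['ilu']} iterations (info={info_ilu})")
```

The sketch stops where the paper starts: choosing those knobs automatically, per problem instance rather than by hand, is the part the RL approach addresses.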

July 6, 2025 · 3 min · Zelina

Memory Over Matter: How MemAgent Redefines Long-Context Reasoning with Reinforcement Learning

Handling long documents has always been a source of frustration for large language models (LLMs). From brittle extrapolation hacks to obscure compression tricks, the field has often settled for awkward compromises. But the paper MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent boldly reframes the problem: what if LLMs could read like humans—absorbing information chunk by chunk, jotting down useful notes, and focusing on what really matters? At the heart of MemAgent is a surprisingly elegant idea: treat memory not as an architectural afterthought but as an agent policy to be trained. Instead of trying to scale attention across millions of tokens, MemAgent introduces an overwriteable memory, shaped by reinforcement learning, that allows an LLM to iteratively read arbitrarily long documents in segments. It learns—through reward signals—what to keep and what to discard. ...
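To make the mechanism concrete, here is a minimal sketch of the read-and-overwrite loop (our illustration, with a hypothetical call_llm callable supplied by the reader; MemAgent's actual prompting, memory format, and RL training are not shown).

```python
# Illustrative sketch of a MemAgent-style loop, not the paper's implementation.
# `call_llm` is a hypothetical stand-in for any chat-completion call; the RL
# training that shapes what the memory keeps or discards is omitted.
from typing import Callable

def chunk(text: str, size: int) -> list[str]:
    """Split an arbitrarily long document into fixed-size segments."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def answer_long_document(document: str, question: str,
                         call_llm: Callable[[str], str],
                         segment_size: int = 4000) -> str:
    memory = ""  # fixed-budget, overwriteable scratchpad
    for segment in chunk(document, segment_size):
        # Read one segment, then rewrite the notes: keep what matters, drop the rest.
        memory = call_llm(
            f"Question: {question}\n"
            f"Current notes: {memory}\n"
            f"New segment: {segment}\n"
            "Rewrite the notes so they still fit within a short budget."
        )
    # Context cost stays bounded by the memory size, not the document length.
    return call_llm(f"Question: {question}\nNotes: {memory}\nAnswer:")
```

Because the model only ever sees one segment plus its current notes, the per-step context is constant; the reward signal is what teaches the policy which details should survive each overwrite.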

July 4, 2025 · 4 min · Zelina

The Reasoning Gymnasium: How Zero-Sum Games Shape Smarter LLMs

If the future of reasoning in large language models (LLMs) doesn’t lie in human-tweaked datasets or carefully crafted benchmarks, where might it emerge? According to SPIRAL, a recent framework introduced by Bo Liu et al., the answer is clear: in games. SPIRAL (Self-Play on zero-sum games Incentivizes Reasoning via multi-Agent muLti-turn reinforcement learning) proposes that competitive, turn-based, two-player games can become a reasoning gymnasium for LLMs. It provides an automated and scalable path for cognitive skill acquisition, sidestepping human-curated data and rigid reward functions. ...

July 1, 2025 · 4 min · Zelina