
When Diffusion Learns How to Open Drawers

Opening — Why this matters now
Embodied AI has a dirty secret: most simulated worlds look plausible until a robot actually tries to use them. Chairs block drawers, doors open into walls, and walkable space exists only in theory. As robotics shifts from toy benchmarks to household-scale deployment, this gap between visual realism and functional realism has become the real bottleneck. ...

January 14, 2026 · 3 min · Zelina

When One Token Rules Them All: Diffusion Models and the Quiet Collapse of Composition

Opening — Why this matters now
Text-to-image diffusion models are often marketed as masters of compositional imagination: just add more words, and the model will obligingly combine them into a coherent visual scene. In practice, however, this promise quietly collapses the moment multiple concepts compete for attention. A landmark swallows an object. An artist's style erases the product. One concept wins; the other simply vanishes. ...

December 27, 2025 · 4 min · Zelina

ImplicitRDP: When Robots Stop Guessing and Start Feeling

Opening — Why this matters now
Robotic manipulation has always had a split personality. Vision plans elegantly in slow motion; force reacts brutally in real time. Most learning systems pretend this tension doesn’t exist — or worse, paper over it with handcrafted hierarchies. The result is robots that see the world clearly but still fumble the moment contact happens. ...

December 13, 2025 · 4 min · Zelina

SceneMaker: When 3D Scene Generation Stops Guessing

Opening — Why this matters now
Single-image 3D scene generation has quietly become one of the most overloaded promises in computer vision. We ask a model to hallucinate geometry, infer occluded objects, reason about spatial relationships, and place everything in a coherent 3D world — all from a single RGB frame. When it fails, we call it a data problem. When it half-works, we call it progress. ...

December 13, 2025 · 4 min · Zelina

When Agents Think in Waves: Diffusion Models for Ad Hoc Teamwork

Opening — Why this matters now
Collaboration is the final frontier of autonomy. As AI agents move from single-task environments to shared, unpredictable ones — driving, logistics, even disaster response — the question is no longer whether they can act, but whether they can cooperate. Most reinforcement learning (RL) systems still behave like lone wolves: excellent at optimization, terrible at teamwork. The recent paper PADiff: Predictive and Adaptive Diffusion Policies for Ad Hoc Teamwork proposes a striking alternative — a diffusion-based framework where agents learn not just to act, but to anticipate and adapt, even alongside teammates they’ve never met. ...

November 11, 2025 · 3 min · Zelina

Remix, Don't Rebuild: How Zero-Shot AI Is Rewriting Music Editing

Opening — Why this matters now
AI has already learned to compose music from scratch. But in the real world, musicians don’t start with silence—they start with a song. Editing, remixing, and reshaping sound are the true engines of creativity. Until recently, generative AI systems failed to capture that nuance: they could dream up melodies, but not fine-tune a live jazz riff or turn a piano solo into an electric guitar line. ...

November 8, 2025 · 4 min · Zelina

Noisy by Nature: Rethinking Financial Time Series Generation with GBM-Inspired Diffusion

Most generative models for time series—particularly those borrowed from image generation—treat financial prices like any other numerical data: throw in Gaussian noise, then learn to clean it up. But markets aren’t like pixels. Financial time series have unique structures: they evolve multiplicatively, exhibit heteroskedasticity, and follow stochastic dynamics that traditional diffusion models ignore. In this week’s standout paper, “A diffusion-based generative model for financial time series via geometric Brownian motion,” Kim et al. propose a subtle yet profound twist: model the noise using financial theory, specifically geometric Brownian motion (GBM), rather than injecting it naively. ...
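To make the contrast concrete, here is a minimal Python sketch (not the paper's formulation; function names and parameters are illustrative) comparing image-style additive Gaussian noising with a GBM-style forward step that perturbs prices multiplicatively through log-space:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_forward_noise(prices, t, sigma=0.2):
    # Image-style diffusion: additive Gaussian noise on raw prices.
    return prices + sigma * np.sqrt(t) * rng.standard_normal(prices.shape)

def gbm_forward_noise(prices, t, mu=0.0, sigma=0.2):
    # GBM-style noising: perturb log-prices with drift and multiplicative
    # volatility, following d(log S) = (mu - 0.5*sigma^2) dt + sigma dW.
    drift = (mu - 0.5 * sigma**2) * t
    shock = sigma * np.sqrt(t) * rng.standard_normal(prices.shape)
    return prices * np.exp(drift + shock)

path = np.array([100.0, 101.2, 99.8, 102.5])
print(gaussian_forward_noise(path, t=0.5))
print(gbm_forward_noise(path, t=0.5))  # stays positive, scales with price level
```

The multiplicative form keeps noised prices positive and lets fluctuations scale with the price level, which is the heteroskedastic, multiplicative behaviour the excerpt argues standard diffusion models ignore.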

August 2, 2025 · 3 min · Zelina

Simulate First, Invest Later: How Diffusion Models Are Reinventing Portfolio Optimization

What if you could simulate thousands of realistic futures for the market, all conditioned on what’s happening today—and then train an investment strategy on those futures? That’s the central idea behind a bold new approach to portfolio optimization that blends score-based diffusion models with reinforcement learning, and it’s showing results that beat classic benchmarks like the S&P 500 and traditional Markowitz portfolios. ...
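Conceptually, this is a two-stage loop: sample many market futures conditioned on today's state, then fit a strategy against those scenarios. The Python sketch below is a minimal, hypothetical outline of that loop; the sampler and trainer stand-ins are placeholders, not the paper's score-based diffusion model or its reinforcement learning algorithm.

```python
import numpy as np

def simulate_then_train(sampler, trainer, market_state, n_scenarios=1000, horizon=60):
    """Simulate-first, invest-later loop: sampler(state, n, horizon) stands in
    for a conditional score-based diffusion model; trainer(paths) stands in
    for the strategy-training step."""
    paths = sampler(market_state, n_scenarios, horizon)  # (n, horizon, assets)
    return trainer(paths)

# Toy stand-ins so the sketch runs end to end:
def toy_sampler(state, n, horizon, n_assets=5):
    rng = np.random.default_rng(0)
    return 0.0005 + 0.01 * rng.standard_normal((n, horizon, n_assets))

def toy_trainer(paths):
    # Naive placeholder policy: weights proportional to mean simulated return.
    mean_ret = paths.mean(axis=(0, 1)).clip(min=0)
    total = mean_ret.sum()
    return mean_ret / total if total > 0 else np.full(len(mean_ret), 1 / len(mean_ret))

weights = simulate_then_train(toy_sampler, toy_trainer, market_state=None)
print(weights)
```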

July 20, 2025 · 4 min · Zelina