
Brains Meet Brains: When LLMs Sit on Top of Supply Chain Optimizers

TL;DR Pair a classic mixed‑integer inventory redistribution model with an LLM-driven context layer and you get explainable optimization: the math still finds near‑optimal transfers, while the LLM translates them into role‑aware narratives, KPIs, and visuals. The result is faster buy‑in, fewer “why this plan?” debates, and tighter execution.

Why this paper matters for operators

Most planners don’t read constraint matrices. They read stockout risks, truck rolls, and WOS (weeks of supply). The study demonstrates a working system where: ...

September 1, 2025 · 5 min · Zelina
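The division of labor the TL;DR describes can be sketched in a few lines: a greedy rebalancing heuristic standing in for the mixed-integer solver, and a string template standing in for the LLM narrative layer. The store names, stock levels, target, and wording below are all invented for illustration; the paper's actual system solves a MIP and prompts an LLM.

```python
def rebalance(stock: dict[str, int], target: int) -> list[tuple[str, str, int]]:
    """Greedily move units from surplus stores to deficit stores.

    A toy stand-in for the mixed-integer redistribution model.
    """
    surplus = {s: q - target for s, q in stock.items() if q > target}
    deficit = {s: target - q for s, q in stock.items() if q < target}
    transfers = []
    for src, extra in surplus.items():
        for dst in list(deficit):
            if extra == 0:
                break
            qty = min(extra, deficit[dst])
            transfers.append((src, dst, qty))
            extra -= qty
            deficit[dst] -= qty
            if deficit[dst] == 0:
                del deficit[dst]
    return transfers

def narrate(transfers: list[tuple[str, str, int]]) -> list[str]:
    """Toy stand-in for the LLM context layer: planner-facing sentences."""
    return [f"Move {q} units from {a} to {b} to cover projected stockout risk at {b}."
            for a, b, q in transfers]

moves = rebalance({"Store A": 120, "Store B": 60, "Store C": 90}, target=90)
print(narrate(moves))
```

The point is the separation of concerns: the solver layer emits structured transfers, and the narrative layer only formats them, so the explanation can never contradict the plan.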

Skip or Split? How LLMs Can Make Old-School Planners Run Circles Around Complexity

TL;DR Classical planners crack under scale. You can rescue them with LLMs in two ways: (1) Inspire the next action, or (2) Predict an intermediate state and split the search. On diverse benchmarks (Blocks, Logistics, Depot, Mystery), the Predict route generally solves more cases with fewer LLM calls, except when domain semantics are opaque. For enterprise automation, this points to a practical recipe: decompose → predict key waypoints → verify with a trusted solver—and only fall back to “inspire” when your domain model is thin. ...

August 18, 2025 · 5 min · Zelina
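The "predict a waypoint and split the search" idea can be illustrated with a deliberately toy search problem (invented for this sketch; the paper works with PDDL planning domains, not string enumeration). Enumerating candidate states breadth-first, one long search costs roughly b^d expansions for branching factor b and depth d, while two half-depth searches around a predicted intermediate state cost about 2·b^(d/2):

```python
from itertools import product

ALPHABET = "abc"

def bfs_count(start: str, goal: str) -> int:
    """Enumerate extensions of `start` in breadth-first order until `goal`
    is generated; return how many candidate states were expanded."""
    depth, count = 1, 0
    while True:
        for suffix in product(ALPHABET, repeat=depth):
            count += 1
            if start + "".join(suffix) == goal:
                return count
        depth += 1

goal = "cbacbc"
direct = bfs_count("", goal)

# "Predict" an intermediate waypoint (here simply the goal's prefix, standing
# in for an LLM-predicted intermediate state), then solve two smaller searches.
waypoint = goal[:3]
split = bfs_count("", waypoint) + bfs_count(waypoint, goal)

print(direct, split)  # the split search expands far fewer states
```

The "verify with a trusted solver" step is trivial here (exact string match); in the enterprise recipe it is the classical planner that certifies each half-plan, which is what keeps an occasionally wrong waypoint prediction from corrupting the final plan.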

From Tadpole to Titan: How DEVFT Grows LLMs Like a Brain

If federated fine-tuning feels like trying to teach calculus to a toddler on a flip phone, you’re not alone. While the privacy-preserving benefits of federated learning are clear, its Achilles’ heel has always been the immense cost of training large models like LLaMA2-13B across resource-starved edge devices. Now, a new method—DEVFT (Developmental Federated Tuning)—offers a compelling paradigm shift, not by upgrading the devices, but by downgrading the expectations. At least, at first. ...

August 4, 2025 · 3 min · Zelina

Residual Learning: How Reinforcement Learning Is Speeding Up Portfolio Math

What if the hardest part of finance isn’t prediction, but precision? Behind every real-time portfolio adjustment or split-second options quote lies a giant math problem: solving Ax = b, where A is large, sparse, and often very poorly behaved. In traditional finance pipelines, iterative solvers like GMRES or its flexible cousin FGMRES are tasked with solving these linear systems — be it from a Markowitz portfolio optimization or a discretized Black–Scholes PDE for option pricing. But when the matrix A is ill-conditioned (which it often is), convergence slows to a crawl. Preconditioning helps, but tuning a preconditioner is more art than science — until now. ...

July 6, 2025 · 3 min · Zelina
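To see why ill-conditioning is the villain here, a small NumPy sketch (a hand-built Jacobi preconditioner on a synthetic matrix, not the learned approach the post covers) shows how preconditioning shrinks the condition number that governs GMRES convergence:

```python
import numpy as np

# Synthetic ill-conditioned system: diagonal scales span six orders of magnitude.
n = 50
rng = np.random.default_rng(0)
A = np.diag(np.logspace(0, 6, n)) + 0.01 * rng.standard_normal((n, n))

# Jacobi (diagonal) preconditioner M = diag(A); GMRES would then be run on
# the equivalent system M^{-1} A x = M^{-1} b.
M_inv = np.diag(1.0 / np.diag(A))

cond_raw = np.linalg.cond(A)
cond_pre = np.linalg.cond(M_inv @ A)
print(f"cond(A) = {cond_raw:.1e}, cond(M^-1 A) = {cond_pre:.1e}")
```

A Jacobi preconditioner is the crudest option; the "art" the post refers to is choosing and tuning something stronger (ILU fill levels, restart lengths, drop tolerances), which is exactly the knob-turning the RL agent takes over.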

Evolving Beyond Bottlenecks: How Agentic Workflows Revolutionize Optimization

Traditionally, solving optimization problems involves meticulous human effort: crafting mathematical models, selecting appropriate algorithms, and painstakingly tuning hyperparameters. Despite the rigor, these human-centric processes are prone to bottlenecks, limiting the industrial adoption of cutting-edge optimization techniques. Wenhao Li and colleagues [1] challenge this paradigm in their recent paper, proposing an innovative shift toward evolutionary agentic workflows, powered by foundation models (FMs) and evolutionary algorithms.

Understanding the Optimization Space

Optimization problems typically traverse four interconnected spaces: ...

May 8, 2025 · 3 min