
Pooling Resources: UniPool and the MoE Budget Nobody Wanted to Audit

AI infrastructure has entered its spreadsheet era. Not the glamorous spreadsheet, where revenue projections grow diagonally upward and nobody asks where the assumptions came from. The other spreadsheet: the one where compute cost, memory footprint, inference latency, training instability, and model quality all insist on appearing in the same row. ...

May 9, 2026 · 16 min · Zelina

Place Your Experts, Not Your Bets

The fashionable version of AI strategy still sounds suspiciously like a gym membership pitch: bigger model, more parameters, more GPUs, more everything. The operational version is less glamorous and much more important: where the computation happens, which parts of the model are actually used, how predictable demand is, and whether the system can turn those facts into lower latency, lower cost, or better decisions. ...

May 7, 2026 · 13 min · Zelina

The Tower of Babble Gets a Router

Enterprise AI has a language problem. Not a charming one, like mispronouncing a French menu item with confidence. A structural one. Most companies do not operate in one clean English-speaking universe. Customer support conversations arrive in English, Tagalog, Spanish, Arabic, Thai, Vietnamese, Hindi, Indonesian, Turkish, and whatever dialectal mixture the internet felt like producing that morning. Compliance teams need summaries that preserve local meaning. E-commerce platforms need product search that understands regional idioms. Banks need customer explanations that do not flatten culture into machine-translated oatmeal. ...

May 1, 2026 · 16 min · Zelina

Divide, Route, and Conquer: DriftMoE's Smart Take on Concept Drift

Concept drift is the curse of the real world. Models trained on yesterday’s data go stale in hours, sometimes minutes. Traditional remedies like Adaptive Random Forests (ARF) respond reactively, detecting change and resetting trees. But what if the system could instead continuously learn where to look, dynamically routing each new sample to the right expert — no drift detector required? That’s exactly the ambition behind DriftMoE, a Mixture-of-Experts framework purpose-built for online learning in non-stationary environments. Co-developed by researchers at Ireland’s CeADAR, this architecture marries lightweight neural routing with classic Hoeffding trees, achieving expert specialization as a byproduct of learning — not as a bolted-on correction. ...

July 27, 2025 · 3 min · Zelina
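
For readers who want the mechanics behind the DriftMoE excerpt above, here is a minimal, hedged sketch: a small linear-softmax router dispatches each streaming sample to one of several Hoeffding-tree experts, and the router is trained online against a multi-hot mask of which experts currently predict correctly, so no explicit drift detector is involved. The `OnlineMoE` class, its hyperparameters, and the exact update rule are illustrative assumptions, not the authors' code; the trees come from the `river` library (`tree.HoeffdingTreeClassifier` with `learn_one`/`predict_one`).

```python
# Sketch of a DriftMoE-style online mixture-of-experts (assumptions noted above).
# Requires `pip install river` for the Hoeffding-tree experts.
import math
import random

from river import tree


class OnlineMoE:
    def __init__(self, n_experts=4, n_features=2, lr=0.05):
        self.experts = [tree.HoeffdingTreeClassifier() for _ in range(n_experts)]
        # One linear scorer per expert; the last weight is a bias term.
        self.W = [[0.0] * (n_features + 1) for _ in range(n_experts)]
        self.lr = lr

    def _router(self, x):
        # Assumes every sample dict uses the same key order.
        feats = list(x.values()) + [1.0]
        logits = [sum(w * f for w, f in zip(row, feats)) for row in self.W]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]  # numerically stable softmax
        z = sum(exps)
        return [e / z for e in exps], feats

    def predict_one(self, x):
        probs, _ = self._router(x)
        k = max(range(len(probs)), key=probs.__getitem__)
        return self.experts[k].predict_one(x)

    def learn_one(self, x, y):
        probs, feats = self._router(x)
        # Multi-hot correctness mask: which experts would get this sample right now?
        mask = [1.0 if e.predict_one(x) == y else 0.0 for e in self.experts]
        total = sum(mask)
        target = [m / total for m in mask] if total else mask
        # In this sketch only the routed (top-1) expert trains on the sample.
        k = max(range(len(probs)), key=probs.__getitem__)
        self.experts[k].learn_one(x, y)
        # One cross-entropy gradient step on the router weights.
        for i, row in enumerate(self.W):
            g = probs[i] - target[i]
            self.W[i] = [w - self.lr * g * f for w, f in zip(row, feats)]


# Tiny prequential demo: an abrupt concept drift at t = 500, no drift detector.
random.seed(0)
moe, correct = OnlineMoE(), 0
for t in range(1000):
    x = {"x1": random.random(), "x2": random.random()}
    y = int(x["x1"] > 0.5) if t < 500 else int(x["x2"] > 0.5)  # the concept flips
    correct += int(moe.predict_one(x) == y)  # test, then train
    moe.learn_one(x, y)
print(f"prequential accuracy: {correct / 1000:.3f}")
```

Run prequentially (predict, then learn), the router should shift mass toward whichever experts recover fastest after the drift at t = 500, which is the "specialization as a byproduct of learning" the excerpt describes.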