
The Tower of Babble Gets a Router

Opening — Why this matters now Enterprise AI has a language problem. Not a charming one, like mispronouncing a French menu item with confidence. A structural one. Most companies do not operate in one clean English-speaking universe. Customer support conversations arrive in English, Tagalog, Spanish, Arabic, Thai, Vietnamese, Hindi, Indonesian, Turkish, and whatever dialectal mixture the internet felt like producing that morning. Compliance teams need summaries that preserve local meaning. E-commerce platforms need product search that understands regional idioms. Banks need customer explanations that do not flatten culture into machine-translated oatmeal. ...

May 1, 2026 · 16 min · Zelina

Claw and Order: Why AI Agents Need a Precision Budget

Opening — Why this matters now AI agents are leaving the demo cage. They are no longer just politely completing prompts; they are planning workflows, calling tools, reading files, coordinating intermediate steps, and accumulating context like a bureaucrat hoarding PDFs. This is useful. It is also expensive. The paper “QuantClaw: Precision Where It Matters for OpenClaw” studies a problem that sounds technical but is really managerial: agent systems often run every task at a fixed numerical precision, even though not every task deserves the same computational budget. A safety-critical terminal command and a lightweight retrieval summary are not the same species of work. Treating them identically is the infrastructure equivalent of sending a limousine to deliver printer paper. ...

April 27, 2026 · 11 min · Zelina

Cloudy With a Chance of Local Models: When On-Prem AI Starts Beating the API

Opening — Why this matters now For years, enterprise AI strategy has been framed as a binary choice: rent intelligence from cloud APIs, or spend lavishly recreating a miniature hyperscaler in-house. Charming fiction. A new benchmark on System Dynamics AI assistants suggests a third path is maturing quickly: highly capable local inference stacks running frontier open-source models on prosumer hardware. Not everywhere. Not universally. But enough to make procurement teams nervous and GPU vendors philosophical. ...

April 23, 2026 · 4 min · Zelina

Flash Before the First Token: How FlashPrefill Rewrites the Economics of Long Context

Opening — Why this matters now Large Language Models are steadily marching toward million‑token contexts. The promise is seductive: entire codebases, legal archives, or research libraries available inside a single prompt. The reality, however, is less glamorous. Before a model generates its first token, it must prefill the entire prompt into the Transformer. This stage alone can dominate inference latency for long documents. Because attention scales quadratically with sequence length, doubling the context can quadruple the compute. ...
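The quadratic claim is easy to verify with back-of-the-envelope arithmetic. A minimal sketch (illustrative only, not the FlashPrefill method; the FLOP formula is the standard rough count for self-attention scores plus the weighted sum):

```python
# Why prefill cost explodes with context length: attention FLOPs per layer
# scale roughly as n^2 * d, while the feed-forward part scales only linearly in n.

def prefill_attention_flops(n_tokens: int, d_model: int) -> int:
    """Rough FLOPs for one self-attention pass over an n-token prompt:
    QK^T score matrix (~n^2 * d) plus the attention-weighted values (~n^2 * d)."""
    return 2 * n_tokens * n_tokens * d_model

base = prefill_attention_flops(64_000, 4096)
doubled = prefill_attention_flops(128_000, 4096)
print(doubled / base)  # -> 4.0: doubling the context quadruples attention compute
```

Hence the economics: past a certain prompt length, the time-to-first-token is dominated by this n² term, which is exactly the stage prefill-acceleration work targets.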

March 10, 2026 · 5 min · Zelina

Ultra‑Sparse Embeddings Without Apology

Opening — Why this matters now Embeddings have quietly become the metabolic system of modern AI. Every retrieval query, recommendation list, and ranking pipeline depends on them—yet we keep feeding these systems increasingly obese vectors. Thousands of dimensions, dense everywhere, expensive always. The paper behind CSRv2 arrives with an unfashionable claim: you can make embeddings extremely sparse and still win. ...
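To see why sparsity is tempting at all, consider the storage side. A minimal sketch (not CSRv2 itself; the top-k-by-magnitude rule and all sizes here are illustrative assumptions):

```python
# Illustrative: keeping only the top-k activations of a dense embedding
# slashes storage, since each kept entry needs just an index and a value.

import numpy as np

rng = np.random.default_rng(0)
dense = rng.standard_normal(4096).astype(np.float32)   # a "fat" dense vector

k = 32                                                  # ultra-sparse budget
idx = np.argsort(np.abs(dense))[-k:]                    # top-k magnitudes
sparse = {int(i): float(dense[i]) for i in idx}         # index -> value pairs

dense_bytes = dense.nbytes                              # 4096 * 4 bytes
sparse_bytes = k * (4 + 4)                              # int32 index + float32 value
print(dense_bytes // sparse_bytes)  # -> 64x smaller
```

The hard part, and the paper's actual contribution, is keeping retrieval quality at that sparsity, not the bookkeeping above.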

February 8, 2026 · 3 min · Zelina

When Retrieval Learns to Breathe: Teaching LLMs to Go Wide *and* Deep

Opening — Why this matters now Large language models are no longer starved for text. They are starved for structure. As RAG systems mature, the bottleneck has shifted from whether we can retrieve information to how we decide where to look first, how far to go, and when to stop. Most retrieval stacks still force an early commitment: either search broadly and stay shallow, or traverse deeply and hope you picked the right starting point. ...

January 21, 2026 · 4 min · Zelina

FAQ It Till You Make It: Fixing LLM Quantization by Teaching Models Their Own Family History

Opening — Why this matters now Large language models are getting cheaper to run, not because GPUs suddenly became charitable, but because we keep finding new ways to make models forget precision without forgetting intelligence. Post-training quantization (PTQ) is one of the most effective tricks in that playbook. And yet, despite years of algorithmic polish, PTQ still trips over something embarrassingly mundane: the calibration data. ...

January 20, 2026 · 4 min · Zelina

Let It Flow: ROME and the Economics of Agentic Craft

Opening — Why this matters now 2025 quietly settled an uncomfortable truth in AI: agents are not products, they are supply chains. Anyone can demo a tool-using model. Very few can make it survive contact with real environments, long-horizon tasks, and users who refuse to behave like benchmarks. The paper “Let It Flow: Agentic Crafting on Rock and Roll” arrives at exactly this inflection point. Instead of promising yet another agent, it asks a more grown-up question: what kind of ecosystem is required to reliably produce agents at scale? ...

January 1, 2026 · 3 min · Zelina

Agents All the Way Down: When Science Becomes Executable

Opening — Why this matters now For years, AI for Science has celebrated isolated breakthroughs: a protein folded faster, a material screened earlier, a simulation accelerated. Impressive—yet strangely unsatisfying. Real science does not happen in single model calls. It unfolds across reading, computing, experimentation, validation, revision, and institutional memory. The uncomfortable truth is this: as AI accelerates scientific output, it is quietly breaking the human systems meant to verify it. Peer review strains. Reproducibility weakens. “It worked once” becomes the dominant success metric. ...

December 24, 2025 · 3 min · Zelina

From Tokens to Teaspoons: What a Prompt Really Costs

Google’s new in‑production measurement rewrites how we think about the environmental footprint of AI serving—and how to buy it responsibly.

Executive takeaways

- A typical prompt is cheaper than you think—if measured correctly. The median Gemini Apps text prompt (May 2025) used ~0.24 Wh of energy, ~0.03 gCO2e, and ~0.26 mL of water. That’s about the energy of watching ~9 seconds of TV and roughly five drops of water.
- Boundaries matter more than math. When you count only accelerator draw, you get ~0.10 Wh. Add host CPU/DRAM, idle reserve capacity, and data‑center overhead (PUE), and it rises to ~0.24 Wh. Same workload, different boundaries.
- Efficiency compounds across the stack. In one year, Google reports ~33× lower energy/prompt and ~44× lower emissions/prompt, driven by model/inference software, fleet utilization, cleaner power, and hardware generations.
- Action for buyers: ask vendors to disclose measurement boundary, batching policy, TTM PUE/WUE, and market‑based emissions factors. Without these, numbers aren’t comparable.

Why the world argued about “energy per prompt”

Most public figures were estimates based on assumed GPUs, token lengths, and workloads. Real fleets don’t behave like lab benches. The biggest source of disagreement wasn’t arithmetic; it was the measurement boundary: ...
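The boundary point is pure arithmetic, so it is worth making concrete. A minimal sketch, assuming an illustrative split of the overhead: only the ~0.10 Wh (accelerator-only) and ~0.24 Wh (full-boundary) endpoints come from the report; the host, idle, and PUE figures below are made-up numbers chosen to show how a narrow boundary widens:

```python
# Boundary accounting sketch. Endpoints (0.10 -> 0.24 Wh) are from the report;
# the intermediate split and the PUE value are illustrative assumptions.

accelerator_wh = 0.10        # narrow boundary: accelerator draw only
host_overhead_wh = 0.06      # assumed: host CPU + DRAM share per prompt
idle_reserve_wh = 0.06       # assumed: provisioned-but-idle capacity share
it_total_wh = accelerator_wh + host_overhead_wh + idle_reserve_wh

pue = 1.09                   # assumed data-center overhead multiplier (PUE)
full_boundary_wh = it_total_wh * pue
print(round(full_boundary_wh, 2))  # -> 0.24: same workload, wider boundary
```

This is why a vendor quoting "energy per prompt" without stating the boundary is quoting a number you cannot compare to anyone else's.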

August 24, 2025 · 5 min · Zelina