
Thinking Isn’t Free: Why Chain-of-Thought Hits a Hard Wall

Opening — Why this matters now
Inference-time reasoning has quietly become the dominant performance lever for frontier language models. When benchmarks get hard, we don’t retrain—we let models think longer. More tokens, more scratchpad, more compute. The industry narrative is simple: reasoning scales, so accuracy scales. This paper asks an uncomfortable question: how long must a model think, at minimum, as problems grow? And the answer, grounded in theory rather than vibes, is not encouraging. ...

February 5, 2026 · 3 min · Zelina

Kernel Kombat: How Multi‑Agent LLMs Squeeze 1.32× More From Your GPUs

TL;DR
Astra is a multi‑agent LLM system that optimizes existing CUDA kernels instead of generating them from PyTorch. On three production‑relevant SGLang kernels, it delivered a 1.32× average speedup (up to 1.46×) without fine‑tuning—just structured zero‑shot prompting. The win isn’t a single trick; it’s a division of labor: testing, profiling, planning, and coding are each handled by a specialized agent, and the loop iterates toward faster, still‑correct kernels.
Why this matters for business readers
GPU efficiency is the new gross margin. If your serving stack pushes trillions of tokens per day, a 25–45% kernel‑level speedup compounds into: ...
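Read as a loop, that division of labor looks roughly like the sketch below. It is a minimal Python illustration of the excerpt's description, not Astra's actual interfaces: the agent functions (`run_tests`, `profile`, `plan`, `rewrite`), their signatures, and the stub behaviors are all assumptions made for illustration.

```python
# Sketch of the agent loop described above: test, profile, plan, code, repeat.
# Names, signatures, and stub behaviors are illustrative assumptions, not Astra's API.
import random
from dataclasses import dataclass

@dataclass
class Kernel:
    name: str
    source: str  # CUDA source of the current candidate

def run_tests(kernel: Kernel) -> bool:
    # Testing agent: compare candidate outputs against a reference implementation.
    return True  # placeholder; a real check diffs tensors within tolerance

def profile(kernel: Kernel) -> float:
    # Profiling agent: time the kernel on representative shapes (latency in ms).
    return random.uniform(0.8, 1.2)  # placeholder measurement

def plan(kernel: Kernel, latency_ms: float) -> str:
    # Planning agent: turn profiler evidence into one concrete optimization idea.
    return "vectorize global loads; reduce shared-memory bank conflicts"

def rewrite(kernel: Kernel, plan_text: str) -> Kernel:
    # Coding agent: apply the planned change and return a new candidate kernel.
    return Kernel(kernel.name, kernel.source + f"\n// applied: {plan_text}")

def optimize(baseline: Kernel, rounds: int = 5) -> Kernel:
    best, best_ms = baseline, profile(baseline)
    for _ in range(rounds):
        candidate = rewrite(best, plan(best, best_ms))
        if not run_tests(candidate):   # never keep an incorrect kernel
            continue
        ms = profile(candidate)
        if ms < best_ms:               # keep only measured wins
            best, best_ms = candidate, ms
    return best

# Placeholder kernel name and source, purely for demonstration.
baseline = Kernel("rmsnorm_fwd", "/* existing CUDA source */")
tuned = optimize(baseline)
```

The ordering is the point: a candidate only enters the speed comparison after the testing step accepts it, which is what keeps each accepted iteration both faster and still correct.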

September 13, 2025 · 4 min · Zelina

Fast & Curious: How ‘Speed-First’ LLM Architectures Change the Build vs. Buy Math

Executive takeaway
Efficient LLM architectures aren’t just academic: they reset the economics of AI products by cutting context costs, shrinking GPUs per QPS, and opening new form factors—from phone-side agents to ultra-cheap serverless endpoints. The winning strategy is hybrid by default, KV-light, and latency-budgeted.
Why this matters now
If you ship with AI, your margins live and die by three levers: sequence length, active parameters per token, and memory traffic. Classical Transformers lose on all three. The latest wave of “speed-first” designs offers a menu of swaps that trade negligible accuracy for step-change gains in throughput, tail latency, and $ per million tokens. This survey gives us a clean taxonomy and—more importantly—the design intent behind each family: compress the compute (linear & sparse sequence modeling), route the compute (MoE), restructure the compute (efficient full attention), and rethink the decoder (diffusion LLMs). ...
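To make one of those levers concrete, here is a back-of-envelope sketch of memory traffic via KV-cache size, the quantity that "KV-light" designs attack. The layer, head, and dimension figures and the grouped-query-attention comparison are illustrative assumptions, not numbers from the survey.

```python
# Back-of-envelope KV-cache sizing for the "memory traffic" lever named above.
# All dimensions below are illustrative assumptions, not figures from the survey.

def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2) -> int:
    # 2x for keys and values, per layer, per KV head, per cached position.
    return 2 * layers * kv_heads * head_dim * seq_len * dtype_bytes

# Assumed model shape: 32 layers, 128-dim heads, fp16, 128K-token context.
dense = kv_cache_bytes(layers=32, kv_heads=32, head_dim=128, seq_len=128_000)
gqa   = kv_cache_bytes(layers=32, kv_heads=8,  head_dim=128, seq_len=128_000)

print(f"dense MHA KV cache:  {dense / 2**30:.1f} GiB per sequence")
print(f"8-head GQA KV cache: {gqa / 2**30:.1f} GiB per sequence")
# Fewer KV heads (or linear/sparse attention) shrink the bytes moved per token,
# which is what lets one GPU hold more concurrent long-context requests.
```

With these assumed dimensions, a dense-attention cache runs about 62 GiB per 128K-token sequence versus roughly 16 GiB with 8 KV heads; gaps of that size decide how many concurrent long-context requests fit on one GPU.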

August 16, 2025 · 5 min · Zelina