
Agents on the Clock: How TPS-Bench Exposes the Time Management Problem in AI

Why this matters now: AI agents can code, search, analyze data, and even plan holidays. But when the clock starts ticking, they often stumble. The latest benchmark from Shanghai Jiao Tong University — TPS-Bench (Tool Planning and Scheduling Benchmark) — measures whether large language model (LLM) agents can not only choose the right tools, but also use them efficiently in multi-step, real-world scenarios. The results? Let’s just say most of our AI “assistants” are better at thinking than at managing their calendars. ...

November 6, 2025 · 3 min · Zelina

Fast but Flawed: What Happens When AI Agents Try to Work Like Humans

AI’s impact on the workforce is no longer a speculative question—it’s unfolding in real time. But how do AI agents actually perform human work? A new study from Carnegie Mellon and Stanford, “How Do AI Agents Do Human Work?”, offers the first large-scale comparison of how humans and AI complete the same tasks across five essential skill domains: data analysis, engineering, computation, writing, and design. The findings are both promising and unsettling, painting a nuanced picture of a workforce in transition. ...

November 1, 2025 · 4 min · Zelina

Branching Out of the Middle: How a ‘Tree of Agents’ Fixes Long-Context Blind Spots

TL;DR — Tree of Agents (TOA) splits very long documents into chunks, lets multiple agents read in different orders, shares evidence, prunes dead-ends, caches partial states, and then votes. The result: fewer hallucinations, resilience to the “lost in the middle” effect, and accuracy comparable to premium large models—while using a compact backbone.

Why this matters for operators: If your business parses contracts, annual reports, medical SOPs, or call-center transcripts, you’ve likely felt the pain of long-context LLMs: critical details buried mid-document get ignored; retrieval misses cross-paragraph logic; and bigger context windows inflate cost without guaranteeing better reasoning. TOA is a pragmatic middle path: it re-imposes structure on attention—not by scaling a single monolith, but by coordinating multiple lightweight readers with disciplined information exchange. ...
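To make the pipeline concrete, here is a minimal, self-contained sketch of the chunk → multi-order read → shared-evidence → vote loop described above. It is an illustration only: the chunking, the keyword-matching stand-in for an agent, and the majority vote are assumptions made for this sketch, not TOA's actual implementation.

```python
# Toy sketch of a Tree-of-Agents-style reading loop (assumptions throughout,
# not the paper's implementation): chunk the document, let several "agents"
# read the chunks in different orders, pool their evidence, and vote.
from collections import Counter
from itertools import islice, permutations

def chunk(document: str, size: int = 400) -> list[str]:
    """Split a long document into fixed-size character chunks."""
    return [document[i:i + size] for i in range(0, len(document), size)]

def read_in_order(chunks: list[str], order: tuple[int, ...], question: str) -> str:
    """Stand-in for one agent: scan chunks in a given order and return the
    first sentence mentioning the question's key term (toy 'evidence')."""
    key = question.split()[-1].strip("?").lower()
    for idx in order:
        for sentence in chunks[idx].split("."):
            if key in sentence.lower():
                return sentence.strip()
    return "no evidence found"

def tree_of_agents(document: str, question: str, n_agents: int = 3) -> str:
    chunks = chunk(document)
    # Different read orders; a real system would diversify these deliberately.
    orders = list(islice(permutations(range(len(chunks))), n_agents))
    evidence = [read_in_order(chunks, order, question) for order in orders]
    # The evidence-sharing step is collapsed into a simple majority vote here.
    return Counter(evidence).most_common(1)[0][0]

if __name__ == "__main__":
    doc = ("The supplier agreement runs for 36 months. " * 20
           + "Termination requires 90 days written notice. "
           + "Renewal is automatic unless either party objects. " * 20)
    print(tree_of_agents(doc, "How much notice is required for termination?"))
```

The point of the different read orders is that a detail buried mid-document sits near the front of at least one agent's pass, which is what blunts the “lost in the middle” effect.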

September 12, 2025 · 4 min · Zelina

From Tokens to Teaspoons: What a Prompt Really Costs

Google’s new in‑production measurement rewrites how we think about the environmental footprint of AI serving—and how to buy it responsibly.

Executive takeaways:
- A typical prompt is cheaper than you think—if measured correctly. The median Gemini Apps text prompt (May 2025) used ~0.24 Wh of energy, ~0.03 gCO2e, and ~0.26 mL of water. That’s about the energy of watching ~9 seconds of TV, and roughly five drops of water.
- Boundaries matter more than math. When you count only accelerator draw, you get ~0.10 Wh. Add host CPU/DRAM, idle reserve capacity, and data‑center overhead (PUE), and it rises to ~0.24 Wh. Same workload, different boundaries.
- Efficiency compounds across the stack. In one year, Google reports ~33× lower energy/prompt and ~44× lower emissions/prompt, driven by model/inference software, fleet utilization, cleaner power, and hardware generations.
- Action for buyers: Ask vendors to disclose measurement boundary, batching policy, TTM PUE/WUE, and market‑based emissions factors. Without these, numbers aren’t comparable.

Why the world argued about “energy per prompt”: Most public figures were estimates based on assumed GPUs, token lengths, and workloads. Real fleets don’t behave like lab benches. The biggest source of disagreement wasn’t arithmetic; it was the measurement boundary: ...
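The headline comparisons are easy to sanity-check. Below is a back-of-the-envelope script built on the reported figures; the ~100 W television draw and the one-million-prompts-per-day workload are assumptions added purely for illustration.

```python
# Sanity check of the teaser's numbers. The 0.10 Wh (accelerator-only) and
# 0.24 Wh (full boundary) figures are reported in the article; the TV wattage
# and workload size below are assumptions for illustration.
ACCELERATOR_ONLY_WH = 0.10   # accelerator draw alone (reported)
FULL_BOUNDARY_WH = 0.24      # + host CPU/DRAM, idle reserve, PUE (reported)

# How much the boundary choice alone changes the headline number.
print(f"boundary inflation: x{FULL_BOUNDARY_WH / ACCELERATOR_ONLY_WH:.1f}")

# "~9 seconds of TV": assume a ~100 W television (assumption, not from the article).
tv_watts = 100
tv_seconds = FULL_BOUNDARY_WH * 3600 / tv_watts   # Wh -> watt-seconds -> seconds
print(f"equivalent TV time: ~{tv_seconds:.0f} s")

# Scale to a hypothetical workload: 1 million prompts per day at the full boundary.
daily_kwh = 1_000_000 * FULL_BOUNDARY_WH / 1000
print(f"1M prompts/day ≈ {daily_kwh:.0f} kWh/day")
```

Run as-is, this reproduces the ~9 seconds-of-TV equivalence and shows the 2.4× gap that the measurement boundary alone creates between two honest numbers for the same prompt.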

August 24, 2025 · 5 min · Zelina

Fast & Curious: How ‘Speed-First’ LLM Architectures Change the Build vs. Buy Math

Executive takeaway: Efficient LLM architectures aren’t just academic: they reset the economics of AI products by cutting context costs, shrinking GPUs per QPS, and opening new form factors—from phone-side agents to ultra-cheap serverless endpoints. The winning strategy is hybrid by default, KV-light, and latency-budgeted.

Why this matters now: If you ship with AI, your margins live and die by three levers: sequence length, active parameters per token, and memory traffic. Classical Transformers lose on all three. The latest wave of “speed-first” designs offers a menu of swaps that trade negligible accuracy for step-change gains in throughput, tail latency, and $ per million tokens. This survey gives us a clean taxonomy and—more importantly—the design intent behind each family: compress the compute (linear & sparse sequence modeling), route the compute (MoE), restructure the compute (efficient full attention), and rethink the decoder (diffusion LLMs). ...
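The “KV-light” lever is easy to quantify. Here is a rough estimate of KV-cache memory at long context for a hypothetical dense model, and how a grouped-query-attention swap shrinks it; the formula is the standard KV-cache accounting, but the model shape is invented for illustration and is not taken from the survey.

```python
# Rough cost model for the memory-traffic lever named above. The KV-cache
# formula is standard; the example model shape is an assumption, not a figure
# from the survey.
def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                seq_len: int, batch: int, bytes_per_val: int = 2) -> float:
    """KV cache = 2 (K and V) * layers * kv_heads * head_dim * seq_len * batch."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_val / 1e9

# Hypothetical dense 70B-class shape with full multi-head KV at 128k context:
print(f"{kv_cache_gb(layers=80, kv_heads=64, head_dim=128, seq_len=131072, batch=1):.0f} GB")

# Same shape with grouped-query attention (8 KV heads), i.e. a KV-light swap:
print(f"{kv_cache_gb(layers=80, kv_heads=8, head_dim=128, seq_len=131072, batch=1):.0f} GB")
```

With this made-up shape, full multi-head KV needs roughly 344 GB per 128k-token sequence while 8 KV heads cut it to about 43 GB, which is why KV-light designs change the GPUs-per-QPS and $ per million tokens math.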

August 16, 2025 · 5 min · Zelina