
From Tokens to Teaspoons: What a Prompt Really Costs

Google’s new in‑production measurement rewrites how we think about the environmental footprint of AI serving—and how to buy it responsibly.

Executive takeaways

- A typical prompt is cheaper than you think—if measured correctly. The median Gemini Apps text prompt (May 2025) used ~0.24 Wh of energy, ~0.03 gCO2e, and ~0.26 mL of water. That’s about the energy of watching ~9 seconds of TV and roughly five drops of water.
- Boundaries matter more than math. When you count only accelerator draw, you get ~0.10 Wh. Add host CPU/DRAM, idle reserve capacity, and data-center overhead (PUE), and it rises to ~0.24 Wh. Same workload, different boundaries.
- Efficiency compounds across the stack. In one year, Google reports ~33× lower energy/prompt and ~44× lower emissions/prompt, driven by model/inference software, fleet utilization, cleaner power, and hardware generations.
- Action for buyers: ask vendors to disclose measurement boundary, batching policy, TTM PUE/WUE, and market-based emissions factors. Without these, numbers aren’t comparable.

Why the world argued about “energy per prompt”

Most public figures were estimates based on assumed GPUs, token lengths, and workloads. Real fleets don’t behave like lab benches. The biggest source of disagreement wasn’t arithmetic; it was the measurement boundary: ...
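The boundary effect can be sketched in a few lines. Only the ~0.10 Wh accelerator-only figure and the ~0.24 Wh full-boundary figure come from the post; the host, idle-reserve, and PUE values below are illustrative assumptions chosen to be consistent with those endpoints, not Google's reported breakdown.

```python
# Illustrative boundary accounting for energy per prompt.
# Endpoints (~0.10 Wh, ~0.24 Wh) are from the post; the split is assumed.
accelerator_wh = 0.10   # active accelerator (TPU/GPU) draw per prompt
host_wh = 0.04          # host CPU + DRAM share (assumed)
idle_reserve_wh = 0.07  # provisioned-but-idle capacity share (assumed)
pue = 1.14              # data-center overhead multiplier (assumed)

# Narrow boundary: accelerator draw only.
narrow_wh = accelerator_wh
# Full serving boundary: all IT components, scaled by facility overhead (PUE).
full_wh = (accelerator_wh + host_wh + idle_reserve_wh) * pue

print(f"accelerator-only boundary: {narrow_wh:.2f} Wh")
print(f"full serving boundary:     {full_wh:.2f} Wh")
```

Same workload, different boundary, a 2.4× difference—which is why disclosure of the boundary matters more than the headline number.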

August 24, 2025 · 5 min · Zelina

RAG in the Wild: When More Knowledge Hurts

Retrieval-Augmented Generation (RAG) is often hailed as a cure-all for domain adaptation and factual accuracy in large language models (LLMs). By injecting external context at inference time, RAG systems promise to boost performance on knowledge-intensive tasks. But a new paper, RAG in the Wild (Xu et al., 2025), reveals that this promise is brittle when we leave the sanitized lab environment and enter the real world of messy, multi-source knowledge. ...

July 29, 2025 · 4 min · Zelina

From Cora to Cosmos: How PyG 2.0 Scales GNNs for the Real World

Graph Neural Networks (GNNs) have come a long way since they were first benchmarked on Cora and PubMed node classification. But what happens when you want to model an entire traffic network, a biomedical knowledge graph, or a social graph with billions of nodes? That’s where PyG 2.0 steps in. The Industrialization of GNNs PyTorch Geometric (PyG) has been a dominant tool in the academic development of GNNs. With PyG 2.0, it graduates into the world of industrial-strength machine learning. This isn’t just a library update—it’s a fundamental re-architecture with three goals: ...

July 24, 2025 · 3 min · Zelina

Break-Even the Machine: Strategic Thinking in the Age of High-Cost AI

Introduction Generative AI continues to impress with its breadth of capabilities—from drafting reports to designing presentations. Yet despite these advances, it is crucial to understand the evolving cost structure, risk exposure, and strategic options businesses face before committing to full-scale AI adoption. This article offers a structured approach for business leaders and AI startups to evaluate where and when generative AI deployment makes sense. We explore cost-performance tradeoffs, forward-looking cost projections, tangible ROI examples, and differentiation strategies in a rapidly changing ecosystem. ...

March 27, 2025 · 4 min · Cognaptus Insights