
When ESG Meets LLM: Decoding Corporate Green Talk on Social Media

Opening: Why this matters now

Corporate sustainability is having a content crisis. Brands flood X (formerly Twitter) with green-themed posts, pledging allegiance to the UN’s Sustainable Development Goals (SDGs) while their real-world actions remain opaque. The question is no longer who is talking about sustainability; it’s what they are actually saying, and whether it means anything at all. A new study from the University of Amsterdam offers a data-driven lens on this problem. By combining large language models (LLMs) and vision-language models (VLMs), the researchers built a multimodal pipeline that decodes the texture of corporate sustainability messaging across millions of social media posts. Their goal: to map not what companies claim, but how they construct the narrative of being sustainable. ...

November 6, 2025 · 4 min · Zelina

Seeing Green: When AI Learns to Detect Corporate Illusions

Oil and gas companies have long mastered the art of framing—selectively showing the parts of reality they want us to see. A commercial fades in: wind turbines turning under a soft sunrise, a child running across a field, the logo of an oil major shimmering on the horizon. No lies are spoken, but meaning is shaped. The message? We care. The reality? Often less so. ...

October 31, 2025 · 4 min · Zelina

From Tokens to Teaspoons: What a Prompt Really Costs

Google’s new in-production measurement rewrites how we think about the environmental footprint of AI serving, and how to buy it responsibly.

Executive takeaways

- A typical prompt is cheaper than you think, if measured correctly. The median Gemini Apps text prompt (May 2025) used ~0.24 Wh of energy, ~0.03 gCO2e, and ~0.26 mL of water. That’s about the energy of watching ~9 seconds of TV and roughly five drops of water.
- Boundaries matter more than math. When you count only accelerator draw, you get ~0.10 Wh. Add host CPU/DRAM, idle reserve capacity, and data-center overhead (PUE), and it rises to ~0.24 Wh. Same workload, different boundaries.
- Efficiency compounds across the stack. In one year, Google reports ~33× lower energy/prompt and ~44× lower emissions/prompt, driven by model/inference software, fleet utilization, cleaner power, and hardware generations.
- Action for buyers: ask vendors to disclose measurement boundary, batching policy, TTM PUE/WUE, and market-based emissions factors. Without these, numbers aren’t comparable.

Why the world argued about “energy per prompt”

Most public figures were estimates based on assumed GPUs, token lengths, and workloads. Real fleets don’t behave like lab benches. The biggest source of disagreement wasn’t arithmetic; it was the measurement boundary: ...
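The boundary arithmetic is simple enough to sketch. Below is a minimal, illustrative Python calculation, not Google’s methodology: the host, idle, and PUE values are assumptions chosen only to show how the same workload can report ~0.10 Wh under a narrow boundary and ~0.24 Wh under a full one.

```python
# Minimal sketch of how the measurement boundary changes "energy per prompt".
# All component values are illustrative assumptions, not the study's actual breakdown.

def energy_per_prompt_wh(accelerator_wh: float,
                         host_wh: float = 0.0,   # host CPU + DRAM share (assumed)
                         idle_wh: float = 0.0,   # amortized idle/reserve capacity (assumed)
                         pue: float = 1.0) -> float:
    """Energy attributed to one prompt under a chosen measurement boundary."""
    return (accelerator_wh + host_wh + idle_wh) * pue

# Narrow boundary: accelerator draw only.
narrow = energy_per_prompt_wh(accelerator_wh=0.10)

# Full boundary: add host, idle reserve, and data-center overhead (assumed PUE of 1.09).
full = energy_per_prompt_wh(accelerator_wh=0.10, host_wh=0.06, idle_wh=0.06, pue=1.09)

print(f"accelerator-only boundary: {narrow:.2f} Wh")  # 0.10 Wh
print(f"full serving boundary:     {full:.2f} Wh")    # ~0.24 Wh
```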

August 24, 2025 · 5 min · Zelina