
Enemy at the Gates, Friends at the Table: Why Competition Makes LLM Agents More Cooperative

TL;DR: When language‑model agents compete as teams and meet the same opponents repeatedly, they cooperate more, even on the very first encounter. This “super‑additive” effect reliably appears for Qwen3 and Phi‑4, and it changes how we should structure agent ecosystems at work.

Why this matters (for builders and buyers): Most enterprise agent stacks still optimize solo intelligence (one bot per task). But real workflows are competitive–cooperative: sales vs. sales, negotiators vs. suppliers, ops vs. delays. This paper shows that if we architect the social rules (teams + rematches) rather than just tune models, we can raise cooperative behavior and stability without extra fine‑tuning or bigger models. ...
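To see why rematches change the incentive, here is a toy sketch in Python. It is my illustration of the repeated-game logic, not the paper's actual team-tournament setup; the strategies and payoff numbers are standard prisoner's-dilemma defaults, not values from the study.

```python
# Toy sketch (illustrative, not the paper's setup): an iterated prisoner's
# dilemma showing why repeated encounters reward conditional cooperation.
PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(history):
    return "D"

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's previous move.
    return "C" if not history else history[-1][1]

def play(agent_a, agent_b, rounds):
    """Total payoffs for a match of `rounds` encounters between two agents."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = agent_a(hist_a), agent_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append((move_a, move_b))  # each agent records (own move, opponent move)
        hist_b.append((move_b, move_a))
    return score_a, score_b

# In one-shot anonymous play the defector wins (5 vs 0); over a 10-round
# rematch, mutual cooperation (30 each) beats the defector's best haul (14).
for rounds in (1, 10):
    print(f"{rounds:>2} round(s): "
          f"TitForTat vs AlwaysDefect -> {play(tit_for_tat, always_defect, rounds)}, "
          f"TitForTat vs TitForTat -> {play(tit_for_tat, tit_for_tat, rounds)}")
```

The point of the sketch: once opponents are recognizable and will be met again, cooperation stops being charity and starts being the payoff-maximizing policy, which is the behavior the paper observes emerging in LLM agents.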

August 24, 2025 · 4 min · Zelina

Game of Prompts: How Game Theory and Agentic LLMs Are Rewriting Cybersecurity

In today’s threat landscape, cybersecurity is no longer a battle of scripts and firewalls. It’s a war of minds. And with the rise of intelligent agents powered by Large Language Models (LLMs), we are now entering a new era where cyber defense becomes not just technical but deeply strategic. The paper “Game Theory Meets LLM and Agentic AI” by Quanyan Zhu provides one of the most profound frameworks yet for understanding this shift. ...

July 16, 2025 · 4 min · Zelina

Outrun the Herd, Not the Lion: A Smarter AI Strategy for Business Games

In the wild, survival doesn't require outrunning the lion; you only need to outrun the slowest gazelle. Surprisingly, the same logic applies to business strategy. When we introduce AI into business decision-making, we are not solving isolated optimization problems; we are playing a complex game against rivals and market players who also make moves. One key trap in this game is assuming that opponents are perfect. That assumption sounds safe, but it can be paralyzing. ...
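A minimal sketch of the trap, with made-up payoffs (my numbers, not the post's): a worst-case (maximin) planner that guards against a flawless rival passes up the move that pays best against the opponents it actually meets.

```python
# Illustrative numbers only: our payoff by (our action, opponent type).
payoffs = {
    ("aggressive", "smart"): -2, ("aggressive", "naive"): 8,
    ("safe",       "smart"):  1, ("safe",       "naive"): 2,
}
actions, types = ("aggressive", "safe"), ("smart", "naive")
p_smart = 0.3  # assumed empirical share of near-perfect opponents

# Maximin: plan for the worst opponent in every case.
maximin = max(actions, key=lambda a: min(payoffs[(a, t)] for t in types))

# Best response to the observed mix of opponents.
expected = {a: p_smart * payoffs[(a, "smart")] + (1 - p_smart) * payoffs[(a, "naive")]
            for a in actions}
best_response = max(expected, key=expected.get)

print("maximin picks:", maximin)                  # 'safe', guaranteed payoff 1
print("best response picks:", best_response)      # 'aggressive', expected payoff 5.0
```

Against an imagined perfect adversary, "safe" looks right; against the field you actually face, it leaves most of the value on the table, which is the paralysis the post warns about.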

April 13, 2025 · 6 min