
The Rational Illusion: How LLMs Outplayed Humans at Cooperation

Opening — Why this matters now. As AI systems begin to act on behalf of humans—negotiating, advising, even judging—the question is no longer whether they can make rational decisions, but whose rationality they follow. A new study from the Barcelona Supercomputing Center offers a fascinating glimpse into this frontier: large language models (LLMs) can now replicate and predict human cooperation across classical game theory experiments. In other words, machines are beginning to play social games the way we do—irrational quirks and all. ...

November 7, 2025 · 4 min · Zelina

Seeing Is Deceiving: Diagnosing and Fixing Hallucinations in Multimodal AI

“I See What I Want to See.” Modern multimodal large language models (MLLMs)—such as GPT-4V, Gemini, and LLaVA—promise to “understand” images. But what happens when their eyes lie? In many real-world cases, MLLMs generate fluent, plausible-sounding responses that are visually inaccurate or outright hallucinated. That’s a problem not just for safety, but for trust. A new paper, “Understanding, Localizing, and Mitigating Hallucinations in Multimodal Large Language Models,” introduces a systematic approach to this growing issue. It moves beyond merely counting hallucinations and instead offers tools to diagnose where they come from—and, more importantly, how to fix them. ...

August 5, 2025 · 3 min · Zelina