When AI Argues Back: The Promise and Peril of Evidence-Based Multi-Agent Debate

Opening: Why this matters now

The world doesn’t suffer from a lack of information; it suffers from a lack of agreement about what’s true. From pandemic rumors to political spin, misinformation now spreads faster than correction, eroding trust in institutions and even in evidence itself. As platforms struggle to moderate and fact-check at scale, researchers have begun asking a deeper question: Can AI not only detect falsehoods but also argue persuasively for the truth? ...

November 11, 2025 · 4 min · Zelina
Humans in the Loop, Not Just the Dataset

When Meta and other tech giants scale back content moderation, the gap isn’t just technical; it’s societal. Civil society organizations (CSOs), not corporations, are increasingly on the frontlines of monitoring online extremism, yet they’re often armed with clunky tools, academic prototypes, or opaque black-box models. A new initiative, highlighted in Civil Society in the Loop, challenges this status quo by co-designing a Telegram monitoring tool that embeds human feedback directly into its LLM-assisted classification system. The twist? It invites civil society into the machine learning loop, not just the results screen. ...

July 10, 2025 · 3 min · Zelina