When Democracy Meets the Algorithm: Auditing Representation in the Age of LLMs

Opening — Why this matters now

The rise of AI in civic life has been faster than most democracies can legislate. Governments and NGOs are experimenting with large language models (LLMs) to summarize public opinions, generate consensus statements, and even draft expert questions in citizen assemblies. The promise? Efficiency and inclusiveness. The risk? Representation by proxy—where the algorithm decides whose questions matter. The new paper Question the Questions: Auditing Representation in Online Deliberative Processes (De et al., 2025) offers a rigorous framework for examining that risk. It turns the abstract ideals of fairness and inclusivity into something measurable, using the mathematics of justified representation (JR) from social choice theory. In doing so, it shows how to audit whether AI-generated “summary questions” in online deliberations truly reflect the people’s diverse concerns—or just the most statistically coherent subset. ...

November 7, 2025 · 4 min · Zelina

Automate All the Things? Mind the Blind Spots

Automation is a superpower—but it’s also a blindfold. New AI “scientist” stacks promise to go from prompt → idea → code → experiments → manuscript with minimal human touch. Today’s paper shows why that convenience can quietly erode scientific integrity—and, by extension, the credibility of any product decisions built on top of it. The punchline: the more you automate, the less you see—unless you design for visibility from day one. ...

September 14, 2025 · 4 min · Zelina

Don't Trust. Verify: Fighting Financial Hallucinations with FRED

When ChatGPT makes up a statistic or misstates a date, it’s annoying. But when a financial assistant claims the wrong interest expense or misattributes a revenue source, it could move markets or mislead clients. This is the stark reality FRED confronts head-on. FRED—short for Financial Retrieval-Enhanced Detection and Editing—is a framework fine-tuned to spot and fix factual errors in the outputs of financial LLMs. Developed by researchers at Pegasi AI, it isn’t just another hallucination-detection scheme. It’s an auditor with a domain-specific brain. ...

July 29, 2025 · 3 min · Zelina