
Bias on Demand: When Synthetic Data Exposes the Moral Logic of AI Fairness

In the field of machine learning, fairness is often treated as a technical constraint: a line of code to be added, a metric to be optimized. But behind every fairness metric lies a moral stance: what should be equalized, for whom, and at what cost? The paper “Bias on Demand: A Modelling Framework that Generates Synthetic Data with Bias” (Baumann et al., FAccT 2023) breaks this technical illusion by offering a framework that can manufacture bias in data deliberately, transparently, and with philosophical intent. ...

November 2, 2025 · 4 min · Zelina

Echo Chamber in a Prompt: How Survey Bias Creeps into LLMs

Large Language Models (LLMs) are increasingly deployed as synthetic survey respondents in social science and policy research. But a new paper by Rupprecht, Ahnert, and Strohmaier raises a sobering question: are these AI “participants” reliable, or are we just recreating human bias in silicon form? By subjecting nine LLMs, including Gemini, Llama-3 variants, Phi-3.5, and Qwen, to over 167,000 simulated interviews based on the World Values Survey, the authors expose a striking vulnerability: even state-of-the-art LLMs consistently fall for classic survey biases, especially recency bias. ...

July 11, 2025 · 3 min · Zelina