When Fairness Fails in Groups: From Lone Counterexamples to Discrimination Clusters
Opening — Why this matters now

Most algorithmic fairness debates still behave as if discrimination is a rounding error: rare, isolated, and best handled by catching a few bad counterexamples. Regulators ask whether a discriminatory case exists. Engineers ask whether any unfair input pair can be found. Auditors tick the box once a model is declared “2-fair.” ...