Aligned or Just Agreeable? Why Accuracy Is a Terrible Proxy for AI–Human Alignment
Opening — Why this matters now

As large language models quietly migrate from text generators to decision makers, the industry has developed an unhealthy obsession with the wrong question: did the model choose the same option as a human? Accuracy, F1, and distributional overlap have become the default proxies for alignment. They are also deeply misleading. ...
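A toy illustration of the claim, using entirely hypothetical decision data: aggregate agreement between a model and a human can look respectable while agreement collapses on exactly the subset of decisions that matters. The split into "easy" and "high-stakes" items here is an assumption for the sketch, not a measurement from any real system.

```python
# Hypothetical human vs. model decisions (binary choices).
# First 16 items are routine; last 4 are high-stakes.
human = [1, 0] * 8 + [1, 0, 1, 1]
model = [1, 0] * 8 + [0, 1, 0, 0]
high_stakes = [False] * 16 + [True] * 4

# Headline accuracy: fraction of items where model matches human.
overall = sum(h == m for h, m in zip(human, model)) / len(human)

# Same metric restricted to the high-stakes subset.
hs = [(h, m) for h, m, s in zip(human, model, high_stakes) if s]
hs_agreement = sum(h == m for h, m in hs) / len(hs)

print(f"overall agreement:     {overall:.0%}")       # 80%
print(f"high-stakes agreement: {hs_agreement:.0%}")  # 0%
```

The headline number says the model "agrees 80% of the time"; the sliced number says it disagrees on every decision that carries real consequences. Any scalar proxy that averages over both regimes hides this.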