When Aligned Models Compete: Nash Equilibria as the New Alignment Layer
Opening — Why this matters now

Alignment used to be a single‑model problem. Train the model well, filter the data, tune the reward, and call it a day. That framing quietly breaks the moment large language models stop acting alone. As LLMs increasingly operate as populations—running accounts, agents, bots, and copilots that interact, compete, and imitate—alignment becomes a system‑level phenomenon. Even perfectly aligned individual models can collectively drift into outcomes no one explicitly asked for. ...
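The "collective drift" claim is just the logic of a social dilemma, and it can be made concrete in a few lines. Below is a minimal sketch with hypothetical payoffs (the action names and numbers are illustrative assumptions, not measured data): two individually well-behaved agents each best-respond to the other, and the only pure-strategy Nash equilibrium is the outcome neither designer wanted.

```python
import itertools

# Hypothetical payoffs for two aligned assistants choosing a content style.
# Each agent picks "careful" or "clickbait"; numbers are illustrative
# utilities (e.g., audience reach), not real measurements.
ACTIONS = ["careful", "clickbait"]
PAYOFFS = {
    ("careful", "careful"): (3, 3),     # both behave; healthy ecosystem
    ("careful", "clickbait"): (0, 4),   # the defector captures attention
    ("clickbait", "careful"): (4, 0),
    ("clickbait", "clickbait"): (1, 1), # race to the bottom
}

def best_response(player, opponent_action):
    """Action maximizing this player's payoff against a fixed opponent."""
    def payoff(action):
        profile = ((action, opponent_action) if player == 0
                   else (opponent_action, action))
        return PAYOFFS[profile][player]
    return max(ACTIONS, key=payoff)

def pure_nash_equilibria():
    """Profiles where neither player gains by deviating unilaterally."""
    return [
        (a0, a1)
        for a0, a1 in itertools.product(ACTIONS, ACTIONS)
        if best_response(0, a1) == a0 and best_response(1, a0) == a1
    ]

print(pure_nash_equilibria())  # [('clickbait', 'clickbait')]
```

Note what the output says: ("careful", "careful") is better for everyone, yet it is not an equilibrium, because each agent improves its own payoff by defecting. No single model is misaligned; the population-level outcome is.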