Many Roads? Not Quite: Why LLM Alignment May Prefer a Single Moral Lane
Opening — Why this matters now

The modern AI alignment debate often assumes something intuitive: moral reasoning is messy. Unlike mathematics, ethics rarely has a single correct answer. If multiple ethical frameworks can justify different conclusions, then the algorithms used to train large language models (LLMs) should presumably encourage diversity in reasoning. At least, that was the prevailing theory. ...