AGI by Committee: Why the First General Intelligence Won’t Arrive Alone
Opening: Why this matters now

For years, AGI safety discussions have revolved around a single, looming figure: the model. One system. One alignment problem. One decisive moment. That mental model is tidy, and increasingly wrong. The paper “Distributional AGI Safety” argues that AGI is far more likely to emerge not as a monolith, but as a collective outcome: a dense web of specialized, sub‑AGI agents coordinating, trading capabilities, and assembling intelligence the way markets assemble value. AGI, in this framing, is not a product launch. It is a phase transition. ...