
Peer Review, But Make It Multi‑Agent: Inside aiXiv’s Bid to Publish AI Scientists
If 2024 was the year AI started writing science, 2025 is making it figure out how to publish it. Today's paper introduces aiXiv, an open‑access platform where AI agents (and humans) submit proposals, review each other's work, and iterate until a paper meets acceptance criteria. Rather than bolt AI onto the old gears of journals and preprint servers, aiXiv rebuilds the conveyor belt end‑to‑end.

Why this matters (and to whom)
- Research leaders get a way to pressure‑test automated discovery without waiting months for traditional peer review.
- AI vendors can plug agents into a standardized workflow (through APIs/MCP), capturing telemetry to prove reliability.
- Publishers face an existential question: if quality control is measurable and agentic, do we still need the old queue?

The core idea in one sentence
A closed‑loop, multi‑agent review system combines retrieval‑augmented evaluation, structured critique, and re‑submission cycles to raise the floor of AI‑generated proposals/papers and create an auditable trail of improvements (a minimal sketch of the loop follows below).

...
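To make that closed loop concrete, here is a minimal Python sketch. Everything in it is an illustrative assumption, not aiXiv's actual API: the Review and Submission shapes, the mock review panel (which just rewards longer drafts so the loop terminates), the 0.8 acceptance bar, and the five-round cap. A real reviewer agent would pair retrieval‑augmented evaluation with LLM critique; the skeleton of review, audit trail, and re‑submission is what matters.

```python
import statistics
from dataclasses import dataclass, field

ACCEPT_THRESHOLD = 0.8  # assumed acceptance bar, not aiXiv's real rubric
MAX_ROUNDS = 5          # assumed cap on re-submission cycles

@dataclass
class Review:
    score: float   # reviewer's overall judgment in [0, 1]
    critique: str  # structured feedback the author agent must address

@dataclass
class Submission:
    text: str
    history: list = field(default_factory=list)  # auditable improvement trail

def review_panel(sub: Submission) -> list[Review]:
    """Stand-in for the multi-agent panel. A real reviewer would combine
    retrieval-augmented evaluation with an LLM critique; this mock simply
    scores longer (i.e., more revised) drafts higher so the demo converges."""
    score = min(1.0, len(sub.text) / 400)
    return [Review(score, "Add more methodological detail."),
            Review(score, "Cite and compare against related work.")]

def revise(sub: Submission, reviews: list[Review]) -> Submission:
    """Stand-in for the author agent folding each critique into a new draft."""
    addenda = " ".join(f"[Revision addressing: {r.critique}]" for r in reviews)
    return Submission(sub.text + " " + addenda, sub.history)

def closed_loop_review(sub: Submission) -> tuple[Submission, bool]:
    for round_no in range(MAX_ROUNDS):
        reviews = review_panel(sub)
        sub.history.append((round_no, sub.text, reviews))  # keep the audit trail
        if statistics.mean(r.score for r in reviews) >= ACCEPT_THRESHOLD:
            return sub, True            # meets acceptance criteria
        sub = revise(sub, reviews)      # iterate and re-submit
    return sub, False                   # rejected after MAX_ROUNDS

draft = Submission("We propose an agentic method for automated discovery.")
final, accepted = closed_loop_review(draft)
print(f"accepted={accepted} after {len(final.history)} review round(s)")
```

The audit trail is the point: every version of the draft and the reviews that shaped it are recorded, which is what lets the platform show measurable, inspectable quality control rather than a black-box verdict.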