SAGA, Not Sci‑Fi: When LLMs Start Doing Science

Opening — Why this matters now

For years, we have asked large language models to explain science. The paper behind SAGA asks a more uncomfortable question: what happens when we ask them to do science instead? Scientific discovery has always been bottlenecked not by ideas, but by coordination — between hypothesis generation, experiment design, evaluation, and iteration. SAGA reframes this entire loop as an agentic system problem. Not a chatbot. Not a single model. A laboratory of cooperating AI agents. ...

December 29, 2025 · 3 min · Zelina