Cosmos Policy: When Video Models Stop Watching and Start Acting
Opening — Why this matters now

Robotics has quietly entered an awkward phase. Models can see remarkably well and talk impressively about tasks—but when it comes to executing long-horizon, high-precision actions in the physical world, performance still collapses in the details. Grasps slip. Motions jitter. Multimodal uncertainty wins.

At the same time, video generation models have undergone a renaissance. Large diffusion-based video models now encode temporal causality, implicit physics, and motion continuity at a scale robotics has never had access to. The obvious question follows: ...