
Safety Without Exploration: Teaching Robots Where Not to Die

Opening — Why this matters now: Modern autonomy has a credibility problem. We train systems in silico, deploy them in the real world, and hope the edge cases are forgiving. They usually aren’t. For robots, vehicles, and embodied AI, one safety violation can be catastrophic — and yet most learning-based methods still treat safety as an expectation, a probability, or worse, a regularization term. ...

December 12, 2025 · 4 min · Zelina

Learning by X-ray: When Surgical Robots Teach Themselves to See in Shadows

Opening — Why this matters now: Surgical robotics has long promised precision beyond human hands. Yet, the real constraint has never been mechanics — it’s perception. In high-stakes fields like spinal surgery, machines can move with submillimeter accuracy, but they can’t yet see through bone. That’s what makes the Johns Hopkins team’s new study, Investigating Robot Control Policy Learning for Autonomous X-ray-guided Spine Procedures, quietly radical. It explores whether imitation learning — the same family of algorithms used in self-driving cars and dexterous robotic arms — can enable a robot to navigate the human spine using only X-ray vision. ...

November 9, 2025 · 4 min · Zelina

When Drones Think Too Much: Defining Cognition Envelopes for Bounded AI Reasoning

Why this matters now: As AI systems move from chatbots to control towers, the stakes of their hallucinations have escalated. Large Language Models (LLMs) and Vision-Language Models (VLMs) now make—or at least recommend—decisions in physical space: navigating drones, scheduling robots, even allocating emergency response assets. But when such models “reason” incorrectly, the consequences extend beyond embarrassment—they can endanger lives. Notre Dame’s latest research introduces the concept of a Cognition Envelope, a new class of reasoning guardrail that constrains how foundation models reach and justify their decisions. Unlike traditional safety envelopes that keep drones within physical limits (altitude, velocity, geofence) or meta-cognition that lets an LLM self-critique, cognition envelopes work from outside the reasoning process. They independently evaluate whether a model’s plan makes sense, given real-world constraints and evidence. ...

November 5, 2025 · 4 min · Zelina

Deep Thinking, Dynamic Acting: How DeepAgent Redefines General Reasoning

In the fast-evolving landscape of agentic AI, one critical limitation persists: most frameworks can think or act, but rarely both in a fluid, self-directed manner. They follow rigid ReAct-like loops—plan, call, observe—resembling a robot that obeys instructions without ever truly reflecting on its strategy. The recent paper “DeepAgent: A General Reasoning Agent with Scalable Toolsets” from Renmin University and Xiaohongshu proposes an ambitious leap beyond this boundary. It envisions an agent that thinks deeply, acts freely, and remembers wisely. ...

October 31, 2025 · 4 min · Zelina

Forkcast: How Pro2Guard Predicts and Prevents LLM Agent Failures

If your AI agent is putting a metal fork in the microwave, would you rather stop it after the sparks fly—or before? That’s the question Pro2Guard was designed to answer. In a world where Large Language Model (LLM) agents are increasingly deployed in safety-critical domains—from household robots to autonomous vehicles—most existing safety frameworks still behave like inattentive chaperones: reacting only when danger is about to occur or, worse, after it already has. This reactive posture, embodied in rule-based systems like AgentSpec, is too little, too late in many real-world scenarios. ...

August 4, 2025 · 4 min · Zelina

Good AI Goes Rogue: Why Intelligent Disobedience May Be the Key to Trustworthy Teammates

We expect artificial intelligence to follow orders. But what if following orders isn’t always the right thing to do? In a world increasingly filled with AI teammates—chatbots, robots, digital assistants—the most helpful agents may not be the most obedient. A new paper by Reuth Mirsky argues for a shift in how we design collaborative AI: rather than blind obedience, we should build in the capacity for intelligent disobedience. ...

June 30, 2025 · 3 min · Zelina