Ctrl+Z Is Not a Strategy: When LLM Self-Correction Actually Works

Agentic AI systems are currently being sold with a suspiciously comforting ritual: generate an answer, ask the same model to reflect, then ask it to improve the answer. Repeat until the dashboard looks busy. In demos, this feels intelligent. In production, it may simply be a very expensive way to turn correct answers into wrong ones. ...

April 30, 2026 · 12 min · Zelina

Teaching Safety to Machines: How Inverse Constraint Learning Reimagines Control Barrier Functions

Autonomous systems—from self-driving cars to aerial drones—are bound by one inescapable demand: safety. But encoding safety directly into algorithms is harder than it sounds. We can write explicit constraints (“don’t crash,” “stay upright”), yet the boundary between safe and unsafe states often defies simple equations. The recent paper Learning Neural Control Barrier Functions from Expert Demonstrations using Inverse Constraint Learning (Yang & Sibai, 2025) offers a different path: machines can learn what safety looks like—not from rigid formulas, but from watching experts. ...

October 31, 2025 · 4 min · Zelina