From Retry to Recovery: Teaching AI Agents to Learn from Their Own Mistakes

Opening — Why this matters now
Everyone wants autonomous agents. Few seem willing to admit that most of them are still glorified retry machines. In production systems—from coding copilots to web automation agents—the dominant strategy is embarrassingly simple: try, fail, try again, and hope that one trajectory sticks. This works, but only if you can afford the latency, compute cost, and engineering complexity of massive sampling. ...

March 18, 2026 · 5 min · Zelina

Affective Inertia: Teaching LLM Agents to Remember Who They Are

Opening — Why this matters now
LLM agents are getting longer memories, better tools, and more elaborate planning stacks—yet they still suffer from a strangely human flaw: emotional whiplash. An agent that sounds empathetic at turn 5 can become oddly cold at turn 7, then conciliatory again by turn 9. For applications that rely on trust, continuity, or persuasion—mental health tools, tutors, social robots—this instability is not a cosmetic issue. It’s a structural one. ...

January 23, 2026 · 3 min · Zelina