The Memory Illusion: Why AI Still Forgets Who It Is

Every AI company wants its assistant to feel personal. Yet every conversation starts from zero. Your favorite chatbot may recall facts, summarize documents, even mimic a tone — but beneath the fluent words, it suffers from a peculiar amnesia. It remembers nothing unless reminded, apologizes often, and contradicts itself with unsettling confidence. The question emerging from Stefano Natangelo’s “Narrative Continuity Test (NCT)” is both philosophical and practical: Can an AI remain the same someone across time? ...

November 3, 2025 · 4 min · Zelina

Layers of Thought: How Hierarchical Memory Supercharges LLM Agent Reasoning

Most LLM agents today think in flat space. When you ask a long-term assistant a question, it either scrolls endlessly through past turns or scours an undifferentiated soup of semantic vectors to recall something relevant. This works—for now. But as tasks get longer, more nuanced, and more personal, this memory model crumbles under its own weight. A new paper proposes an elegant solution: H-MEM, or Hierarchical Memory. Instead of treating memory as one big pile of stuff, H-MEM organizes past knowledge into four semantically structured layers: Domain, Category, Memory Trace, and Episode. It’s the difference between a junk drawer and a filing cabinet (a rough sketch follows this entry). ...

August 1, 2025 · 3 min · Zelina
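
To make the filing-cabinet analogy concrete, here is a minimal Python sketch of a four-layer store in the spirit of H-MEM. Only the layer names (Domain, Category, Memory Trace, Episode) come from the excerpt above; every class, method, and example string is an illustrative assumption, not the paper’s actual design or API.

```python
from dataclasses import dataclass, field

# Illustrative four-layer hierarchy: Domain -> Category -> Memory Trace -> Episode.
# The layer names follow the excerpt; everything else is a hypothetical sketch.

@dataclass
class Episode:
    text: str  # a raw interaction snippet


@dataclass
class MemoryTrace:
    topic: str
    episodes: list[Episode] = field(default_factory=list)


@dataclass
class Category:
    name: str
    traces: dict[str, MemoryTrace] = field(default_factory=dict)


@dataclass
class Domain:
    name: str
    categories: dict[str, Category] = field(default_factory=dict)


class HierarchicalMemory:
    """Route writes and reads top-down instead of scanning one flat store."""

    def __init__(self) -> None:
        self.domains: dict[str, Domain] = {}

    def write(self, domain: str, category: str, topic: str, text: str) -> None:
        # Create each level on demand, then append the episode at the leaf.
        d = self.domains.setdefault(domain, Domain(domain))
        c = d.categories.setdefault(category, Category(category))
        t = c.traces.setdefault(topic, MemoryTrace(topic))
        t.episodes.append(Episode(text))

    def read(self, domain: str, category: str, topic: str) -> list[str]:
        # Narrow the search layer by layer; a real system would rank
        # candidates with embeddings at each level rather than exact keys.
        try:
            trace = self.domains[domain].categories[category].traces[topic]
        except KeyError:
            return []
        return [e.text for e in trace.episodes]


memory = HierarchicalMemory()
memory.write("health", "diet", "caffeine", "User switched to decaf in June.")
print(memory.read("health", "diet", "caffeine"))
```

The payoff of the hierarchy is that retrieval prunes the candidate set at every level instead of comparing a query against the entire undifferentiated pool, which is exactly what a flat vector store forces an agent to do.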