
FadeMem: When AI Learns to Forget on Purpose

Opening — Why this matters now: The race to build smarter AI agents has mostly followed one instinct: remember more. Bigger context windows. Larger vector stores. Ever-growing retrieval pipelines. Yet as agents move from demos to long-running systems—handling days or weeks of interaction—this instinct is starting to crack. More memory does not automatically mean better reasoning. In practice, it often means clutter, contradictions, and degraded performance. Humans solved this problem long ago, not by remembering everything, but by forgetting strategically. ...

February 1, 2026 · 4 min · Zelina

MemCtrl: Teaching Small Models What *Not* to Remember

Opening — Why this matters now: Embodied AI is hitting a very human bottleneck: memory. Not storage capacity, not retrieval speed—but judgment. Modern multimodal large language models (MLLMs) can see, reason, and act, yet when deployed as embodied agents they tend to remember too much, too indiscriminately. Every frame, every reflection, every redundant angle piles into context until the agent drowns in its own experience. ...

January 31, 2026 · 4 min · Zelina

When Interfaces Guess Back: Implicit Intent Is the New GUI Bottleneck

Opening — Why this matters now: GUI agents are getting faster, more multimodal, and increasingly competent at clicking the right buttons. Yet in real life, users don’t talk to software like prompt engineers. They omit details, rely on habit, and expect the system to remember. The uncomfortable truth is this: most modern GUI agents are optimized for obedience, not understanding. ...

January 15, 2026 · 4 min · Zelina

EverMemOS: When Memory Stops Being a Junk Drawer

Opening — Why this matters now: Long-context models were supposed to solve memory. They didn’t. Despite six-figure token windows, modern LLM agents still forget, contradict themselves, and—worse—remember the wrong things at the wrong time. The failure mode is no longer missing information. It is unstructured accumulation. We’ve built agents that can recall fragments indefinitely but cannot reason over them coherently. ...

January 6, 2026 · 3 min · Zelina

Echoes, Not Amnesia: Teaching GUI Agents to Remember What Worked

Opening — Why this matters now: GUI agents are finally competent enough to click buttons without embarrassing themselves. And yet, they suffer from a strangely human flaw: they forget everything they just learned. Each task is treated as a clean slate. Every mistake is patiently re‑made. Every success is quietly discarded. In a world obsessed with scaling models, this paper asks a simpler, sharper question: what if agents could remember? ...

December 23, 2025 · 3 min · Zelina

Memory Over Models: Letting Agents Grow Up Without Retraining

Opening — Why this matters now: We are reaching the awkward teenage years of AI agents. LLMs can already do things: book hotels, navigate apps, coordinate workflows. But once deployed, most agents are frozen in time. Improving them usually means retraining or fine-tuning models—slow, expensive, and deeply incompatible with mobile and edge environments. The paper “Beyond Training: Enabling Self-Evolution of Agents with MOBIMEM” takes a blunt stance: continual agent improvement should not depend on continual model training. Instead, evolution should happen where operating systems have always handled adaptation best—memory. ...

December 20, 2025 · 4 min · Zelina

The Memory Illusion: Why AI Still Forgets Who It Is

Opening — Why this matters now: Every AI company wants its assistant to feel personal. Yet every conversation starts from zero. Your favorite chatbot may recall facts, summarize documents, even mimic a tone — but beneath the fluent words, it suffers from a peculiar amnesia. It remembers nothing unless reminded, apologizes often, and contradicts itself with unsettling confidence. The question emerging from Stefano Natangelo’s “Narrative Continuity Test (NCT)” is both philosophical and practical: Can an AI remain the same someone across time? ...

November 3, 2025 · 4 min · Zelina

Layers of Thought: How Hierarchical Memory Supercharges LLM Agent Reasoning

Most LLM agents today think in flat space. When you ask a long-term assistant a question, it either scrolls endlessly through past turns or scours an undifferentiated soup of semantic vectors to recall something relevant. This works—for now. But as tasks get longer, more nuanced, and more personal, this memory model crumbles under its own weight. A new paper proposes an elegant solution: H-MEM, or Hierarchical Memory. Instead of treating memory as one big pile of stuff, H-MEM organizes past knowledge into four semantically structured layers: Domain, Category, Memory Trace, and Episode. It’s the difference between a junk drawer and a filing cabinet. ...
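The four-layer structure maps naturally onto nested lookups. Below is a minimal, hypothetical sketch in Python of that filing-cabinet idea; all class names, field names, and the retrieve helper are illustrative assumptions, not the paper's actual H-MEM interface.

```python
from dataclasses import dataclass, field

# Layer names follow the four layers named in the post:
# Domain -> Category -> Memory Trace -> Episode.
@dataclass
class Episode:
    text: str  # one concrete past interaction

@dataclass
class MemoryTrace:
    topic: str                                             # e.g. "food"
    episodes: list[Episode] = field(default_factory=list)

@dataclass
class Category:
    name: str                                              # e.g. "preferences"
    traces: dict[str, MemoryTrace] = field(default_factory=dict)

@dataclass
class Domain:
    name: str                                              # e.g. "personal assistant"
    categories: dict[str, Category] = field(default_factory=dict)

def retrieve(domain: Domain, category: str, topic: str) -> list[str]:
    """Narrow layer by layer instead of scanning every stored episode."""
    cat = domain.categories.get(category)
    trace = cat.traces.get(topic) if cat else None
    return [e.text for e in trace.episodes] if trace else []

# Usage: file one episode, then recall it by walking the hierarchy.
dom = Domain("personal assistant")
cat = dom.categories.setdefault("preferences", Category("preferences"))
trace = cat.traces.setdefault("food", MemoryTrace("food"))
trace.episodes.append(Episode("User mentioned they are vegetarian."))
print(retrieve(dom, "preferences", "food"))  # ['User mentioned they are vegetarian.']
```

The point of the sketch is only the access pattern: writes and reads both descend Domain → Category → Memory Trace → Episode, so retrieval touches a small, semantically scoped slice of memory rather than the whole undifferentiated pile.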

August 1, 2025 · 3 min · Zelina