
Infinite Tasks, Finite Minds: Why Agents Keep Forgetting—and How InfiAgent Cheats Time

Opening — Why this matters now: Everyone wants an autonomous agent that can just keep going. Write a literature review. Audit 80 papers. Run an open-ended research project for days. In theory, large language models (LLMs) are perfect for this. In practice, they quietly collapse under their own memory. The problem isn’t model intelligence. It’s state. ...

January 7, 2026 · 4 min · Zelina

EverMemOS: When Memory Stops Being a Junk Drawer

Opening — Why this matters now: Long-context models were supposed to solve memory. They didn’t. Despite six-figure token windows, modern LLM agents still forget, contradict themselves, and—worse—remember the wrong things at the wrong time. The failure mode is no longer missing information. It is unstructured accumulation. We’ve built agents that can recall fragments indefinitely but cannot reason over them coherently. ...

January 6, 2026 · 3 min · Zelina

Forget Me Not: How IterResearch Rebuilt Long-Horizon Thinking for AI Agents

Opening — Why this matters now: The AI world has become obsessed with “long-horizon” reasoning—the ability of agents to sustain coherent thought over hundreds or even thousands of interactions. Yet most large language model (LLM) agents, despite their size, collapse under their own memory. The context window fills, noise piles up, and coherence suffocates. Alibaba’s IterResearch tackles this problem not by extending memory but by redesigning it. ...

November 11, 2025 · 4 min · Zelina