Mask, Don’t Muse: When Simple Memory Beats Fancy Summaries

The short of it

A new study on SWE-agent running over SWE-bench Verified finds that masking old observations (keeping recent turns verbatim and replacing older tool outputs with a placeholder) often matches or slightly beats prompt-based LLM summarization, at roughly half the cost. The paper also surfaces a subtle failure mode: summaries can elongate trajectories, encouraging agents to "keep going" when they should stop, which erodes efficiency and, at times, performance.

Why this matters for builders

Most production SE agents (debuggers, PR autoresponders, test fixers) rack up spend on two things: tokens and time. Tool logs dominate both; in practice, observation tokens make up the bulk of an agent's context, so trimming them intelligently is the highest-leverage knob. The results suggest you may not need fancy, model-authored summaries: a rolling "mask" window can sit on the efficient frontier, with an equal or better solve rate at far lower cost, across Qwen3-Coder 480B, Qwen3-32B (thinking/non-thinking), and Gemini 2.5 Flash (thinking/non-thinking). ...
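
To make the masking idea concrete, here is a minimal sketch of a rolling "mask" window over an agent's message history. It assumes an OpenAI-style message list where tool outputs carry `role: "tool"`; the window size, placeholder text, and function name are illustrative choices, not the paper's exact settings.

```python
# Minimal sketch of observation masking (assumed message schema, not SWE-agent's API).
PLACEHOLDER = "[old observation masked]"

def mask_old_observations(messages, keep_last=5):
    """Return a copy of `messages` where all but the most recent
    `keep_last` tool observations are replaced by a short placeholder."""
    # Indices of tool-output messages, oldest first.
    obs_indices = [i for i, m in enumerate(messages) if m["role"] == "tool"]
    # Everything except the last `keep_last` observations gets masked.
    to_mask = set(obs_indices[:-keep_last]) if keep_last else set(obs_indices)

    masked = []
    for i, m in enumerate(messages):
        if i in to_mask:
            masked.append({**m, "content": PLACEHOLDER})
        else:
            masked.append(m)
    return masked
```

The appeal is that this is a fixed, model-free transformation: no extra LLM call is spent writing a summary, and recent observations stay verbatim where they matter most.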

September 1, 2025 · 4 min · Zelina