
Echo Chambers or Stubborn Minds? Simulating Social Influence with LLM Agents

Large language models aren’t just prompt-completion machines anymore. In controlled simulations, they can behave like people in a group discussion: yielding to peer pressure, sticking to their beliefs, or becoming more extreme over time. But not all LLMs are socially equal. A recent paper titled “Towards Simulating Social Influence Dynamics with LLM-based Multi-agents” explores how different LLMs behave in a forum-style discussion, capturing three phenomena familiar to any political science researcher or Reddit moderator: conformity, group polarization, and fragmentation. The twist? These aren’t real people. They’re fully scripted LLM agents with fixed personas, engaged in asynchronous multi-round debates. ...
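To make the setup concrete, here is a minimal sketch of the kind of forum-style simulation the excerpt describes: fixed-persona agents posting into a shared thread over several asynchronous rounds. The `chat` helper, persona wording, round count, and stance prompt are illustrative assumptions, not the paper's exact protocol.

```python
# Hypothetical sketch of a fixed-persona, multi-round forum simulation.
# chat() is a placeholder for whatever LLM client is actually used.
import random

PERSONAS = {
    "A": "You strongly support policy X and argue from economic grounds.",
    "B": "You mildly oppose policy X but are open to persuasion.",
    "C": "You are undecided about policy X and value group consensus.",
}

def chat(model: str, prompt: str) -> str:
    """Placeholder for an LLM call (swap in a real client here)."""
    raise NotImplementedError

def simulate(model: str = "some-llm", rounds: int = 3) -> list[str]:
    thread: list[str] = []              # shared forum thread all agents can read
    order = list(PERSONAS)
    for _ in range(rounds):
        random.shuffle(order)           # asynchronous turn-taking across rounds
        for name in order:
            prompt = (
                f"{PERSONAS[name]}\n\n"
                "Forum thread so far:\n" + "\n".join(thread) +
                "\n\nWrite your next post and restate your current stance "
                "(support / oppose / undecided)."
            )
            thread.append(f"{name}: {chat(model, prompt)}")
    return thread  # inspect stances over rounds for conformity, polarization, fragmentation
```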

July 31, 2025 · 3 min · Zelina

Memory Over Matter: How MemAgent Redefines Long-Context Reasoning with Reinforcement Learning

Handling long documents has always been a source of frustration for large language models (LLMs). From brittle extrapolation hacks to obscure compression tricks, the field has often settled for awkward compromises. But the paper MemAgent: Reshaping Long-Context LLM with Multi-Conv RL-based Memory Agent boldly reframes the problem: what if LLMs could read like humans—absorbing information chunk by chunk, jotting down useful notes, and focusing on what really matters? At the heart of MemAgent is a surprisingly elegant idea: treat memory not as an architectural afterthought but as an agent policy to be trained. Instead of trying to scale attention across millions of tokens, MemAgent introduces a reinforcement-learning-shaped overwritable memory that allows an LLM to iteratively read arbitrarily long documents in segments. It learns—through reward signals—what to keep and what to discard. ...
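For intuition, here is a minimal sketch of chunked reading with an overwritable memory, assuming a generic `chat` helper and a simple character-based chunker; the chunk size, prompts, and model name are illustrative, and the RL training step that shapes the memory policy in the paper is omitted.

```python
# Hypothetical sketch: read a long document segment by segment, rewriting a
# bounded note buffer each step, then answer from the notes alone.
def chat(model: str, prompt: str) -> str:
    """Placeholder for an LLM call (swap in a real client here)."""
    raise NotImplementedError

def chunks(text: str, size: int = 4000):
    for i in range(0, len(text), size):
        yield text[i:i + size]

def answer_long_document(document: str, question: str, model: str = "some-llm") -> str:
    memory = ""  # fixed-size notes; overwritten (not appended) so context stays bounded
    for segment in chunks(document):
        memory = chat(model, (
            f"Question: {question}\n"
            f"Current notes (may be rewritten entirely):\n{memory}\n\n"
            f"New segment:\n{segment}\n\n"
            "Rewrite the notes, keeping only what helps answer the question."
        ))
    return chat(model, f"Notes:\n{memory}\n\nAnswer the question: {question}")
```

In MemAgent itself, the rewrite step is not just prompted but trained: the reward on the final answer propagates back to teach the model which notes to keep and which to discard.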

July 4, 2025 · 4 min · Zelina