
Echo Chambers or Stubborn Minds? Simulating Social Influence with LLM Agents

Large language models aren’t just prompt-completion machines anymore. In controlled simulations, they can behave like people in a group discussion: yielding to peer pressure, sticking to their beliefs, or becoming more extreme over time. But not all LLMs are socially equal. A recent paper titled “Towards Simulating Social Influence Dynamics with LLM-based Multi-agents” explores how different LLMs behave in a forum-style discussion, capturing three phenomena familiar to any political science researcher or Reddit moderator: conformity, group polarization, and fragmentation. The twist? These aren’t real people. They’re fully scripted LLM agents with fixed personas, engaged in asynchronous multi-round debates. ...

July 31, 2025 · 3 min · Zelina

The Sims Get Smart? Why LLM-Driven Social Simulations Need a Reality Check

Social simulations are entering their uncanny valley. Fueled by generative agents powered by Large Language Models (LLMs), recent frameworks like Smallville, AgentSociety, and SocioVerse simulate thousands of lifelike agents forming friendships, spreading rumors, and planning parties. But do these simulations reflect real social processes, or merely replay the statistical shadows of the internet?

When Simulacra Speak Fluently

LLMs have demonstrated striking abilities to mimic human behaviors. GPT-4 has passed Theory-of-Mind (ToM) tests at levels comparable to 6- to 7-year-old children. In narrative contexts, it can detect sarcasm, understand indirect requests, and generate empathetic replies. But none of this arises from embodied cognition or real-world goals; it is just next-token prediction trained on massive corpora. ...

July 28, 2025 · 4 min · Zelina