When AI Stops Pretending: The Rise of Role-Playing Agents
Opening — Why this matters now

Large language models have learned how to talk. That part is mostly solved. The harder problem, quietly surfacing beneath the hype, is whether they can stay in character. The explosion of role‑playing agents (RPLAs) is not driven by novelty alone. It reflects a structural shift in how humans want to interact with AI: not as tools, but as persistent entities with memory, motivation, and recognizable behavior. When an AI tutor forgets who it is, or a game NPC contradicts its own values mid‑conversation, immersion collapses instantly. The paper reviewed here treats that collapse as a technical failure, not a UX quirk, and that framing is overdue. ...