
Mind Games: How LLMs Subtly Rewire Human Judgment
“The most dangerous biases are not the ones we start with, but the ones we adopt unknowingly.”

Large language models (LLMs) like GPT and LLaMA increasingly function as our co-pilots: summarizing reviews, answering questions, and fact-checking news. But a new study from UC San Diego warns that these models may not just be helping us think; they may also be nudging how we think. The paper, titled “How Much Content Do LLMs Generate That Induces Cognitive Bias in Users?”, examines the subtle but significant ways in which LLM-generated output reframes, reorders, or even fabricates information, leading users to adopt distorted views without realizing it. This isn’t just about factual correctness. It’s about cognitive distortion: the framing, filtering, and fictionalizing that skew human judgment. ...