Opening — Why this matters now

AI assistants are no longer quiet utilities humming in the background. They talk back. They empathize. They ask follow-up questions. In short, they behave suspiciously like social actors.

This design direction has triggered a familiar anxiety in AI governance: the claim that human-like AI leads to misplaced trust. Regulators worry. Ethicists warn. Designers hedge. Yet most of these arguments rest on theory, small samples, or Western-centric assumptions.

This paper does something refreshingly unfashionable: it tests the claim empirically, at scale, across cultures, using real conversations—not vignettes, not hypotheticals.

And the result is inconvenient for everyone.

Background — Anthropomorphism, but whose version?

Anthropomorphism—the tendency to attribute human traits to non-human entities—has long been studied in psychology and HCI. Existing frameworks emphasize abstract qualities: consciousness, intentionality, morality, even “having a soul.”

But those frameworks were largely built for robots, toys, or early machines. Modern LLM-based systems operate in a different regime: fluent language, contextual memory, rapid turn-taking. The cues have changed.

More critically, prior research overwhelmingly relies on WEIRD (Western, Educated, Industrialized, Rich, and Democratic) populations. The implicit assumption is universalism: if human-like AI is risky in one place, it must be risky everywhere.

This study challenges that assumption directly.

Analysis — What the paper actually does

The authors conducted two large-scale experiments involving 3,500 participants across 10 countries, all interacting with the same state-of-the-art chatbot in their native languages.

Study 1: What makes AI feel human?

Participants engaged in open-ended, mundane conversations with the AI—no priming, no deception, no high-stakes tasks. Afterwards, they reported both structured ratings and free-text reflections.

Two key findings emerged:

  1. Anthropomorphism is already high. Roughly two-thirds of users perceived the AI as somewhat or very human-like.
  2. Users do not think like theorists. When explaining why the AI felt human, participants overwhelmingly cited concrete interactional cues, not abstract properties.

The dominant drivers were:

  • Conversation flow
  • Perspective-taking
  • Response timing
  • Authenticity of tone

Concepts like consciousness, morality, or sentience were almost never mentioned spontaneously.

In short: users anthropomorphize how the AI behaves, not what it is.

Study 2: Can human-likeness be engineered—and does it matter?

The second study manipulated human-likeness along two dimensions:

Dimension                        What changed
Design Characteristics (DC)      Emojis, informal tone, response variability, names
Conversational Sociability (CS)  Warmth, empathy, relationship-building

This created four AI variants, from explicitly machine-like to highly human-like.
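As a rough sketch of that design (not the authors' implementation), the four variants can be read as a 2x2 grid of toggles over the two dimensions; all identifiers and feature flags below are hypothetical:

```python
from itertools import product

# Hypothetical feature flags for the two manipulated dimensions.
DC_FEATURES = ["emojis", "informal_tone", "response_variability", "persona_name"]  # Design Characteristics
CS_FEATURES = ["warmth", "empathy", "relationship_building"]                       # Conversational Sociability

def build_variant(dc_on: bool, cs_on: bool) -> dict:
    """Assemble one chatbot configuration from the two manipulated dimensions."""
    config = {feature: dc_on for feature in DC_FEATURES}
    config.update({feature: cs_on for feature in CS_FEATURES})
    config["label"] = f"DC={'high' if dc_on else 'low'}, CS={'high' if cs_on else 'low'}"
    return config

# Four variants, from explicitly machine-like (both off) to highly human-like (both on).
variants = [build_variant(dc, cs) for dc, cs in product([False, True], repeat=2)]
for variant in variants:
    print(variant["label"])
```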

The manipulation worked. Across countries, higher DC and CS reliably increased anthropomorphism.

What didn’t follow was the usual story.

Findings — The trust myth breaks on contact with reality

Aggregate results

Outcome                  Effect of human-like AI
Anthropomorphism         Strongly increased
Engagement               Increased (behaviorally)
Trust (self-reported)    No universal increase
Trust (behavioral)       No effect

In an incentivized trust game, participants did not entrust more resources to the more human-like AI.
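For readers unfamiliar with the measure, the canonical investment/trust game works roughly as sketched below; the paper's exact endowment and multiplier are not given here, so the defaults are conventional placeholders rather than the study's parameters:

```python
def trust_game_payoffs(sent: float, returned_fraction: float,
                       endowment: float = 10.0, multiplier: float = 3.0) -> tuple[float, float]:
    """Canonical trust (investment) game: the participant sends part of an endowment,
    the amount is multiplied, and the trustee (here, the AI) returns some fraction
    of the multiplied pot. Behavioral trust = how much the participant chooses to send."""
    assert 0.0 <= sent <= endowment and 0.0 <= returned_fraction <= 1.0
    pot = sent * multiplier
    returned = pot * returned_fraction
    participant_payoff = endowment - sent + returned
    trustee_payoff = pot - returned
    return participant_payoff, trustee_payoff

# Full trust vs. no trust, assuming the trustee returns half the pot.
print(trust_game_payoffs(sent=10.0, returned_fraction=0.5))  # (15.0, 15.0)
print(trust_game_payoffs(sent=0.0, returned_fraction=0.5))   # (10.0, 0.0)
```

The behavioral measure is simply the amount sent, and that amount did not rise for the more human-like variants.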

This alone undermines a core assumption in AI ethics discourse: human-likeness, it turns out, does not mechanically produce trust.

Cultural fracture lines

Where the paper becomes genuinely interesting is in the subgroup analysis.

  • Brazil: Human-like AI increased engagement, trust, and emotional affiliation.
  • Japan: Similar design cues reduced trust and willingness to engage.

Same model. Same manipulations. Opposite outcomes.

Anthropomorphism is universal. Its consequences are not.
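One way to see how a pooled null can coexist with opposite country-level effects is a country-by-condition interaction model. The snippet below uses simulated data purely to illustrate that statistical logic; it is not the authors' analysis pipeline, and the effect sizes are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated illustration: the same human-likeness manipulation raises trust in
# one country and lowers it in the other, so a pooled analysis washes it out.
rng = np.random.default_rng(0)
n = 1000
country = rng.choice(["Brazil", "Japan"], size=n)
humanlike = rng.integers(0, 2, size=n)            # 0 = machine-like, 1 = human-like
true_effect = np.where(country == "Brazil", 0.5, -0.5) * humanlike
trust = 3.0 + true_effect + rng.normal(0, 1, size=n)
df = pd.DataFrame({"country": country, "humanlike": humanlike, "trust": trust})

pooled = smf.ols("trust ~ humanlike", data=df).fit()
by_country = smf.ols("trust ~ humanlike * C(country)", data=df).fit()

print(pooled.params["humanlike"])                          # near zero: the "no effect" headline
print(by_country.params["humanlike:C(country)[T.Japan]"])  # large and negative: the cultural split
```

With opposite-signed effects in the two subsamples, the pooled coefficient sits near zero while the interaction term is large, which is exactly the pattern the subgroup analysis exposes.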

Implications — Governance without universalism

The dominant regulatory instinct has been blunt-force prevention: limit anthropomorphic design because it is assumed to be inherently dangerous.

This paper suggests a different diagnosis.

Risk is not intrinsic to human-like AI. It is conditional.

It emerges from the interaction between:

  • Design cues
  • Cultural expectations
  • Perceived competence and alignment

Human-likeness alone does not override users’ judgment about whether an AI is actually reliable.

For policymakers, this implies:

  • One-size-fits-all restrictions are analytically lazy
  • Cultural context matters as much as technical design
  • Governance should adapt to who the users are, not just what the AI looks like

For builders, the message is sharper:

Optimizing for “human-like” UX is not a universal growth hack. In some markets, it backfires.

Conclusion — Anthropomorphism is not the villain

This study doesn’t argue that human-like AI is harmless. It argues something more unsettling: our theories about harm are incomplete.

Users anthropomorphize AI because it behaves fluently, not because they believe it is sentient. Human-like design reliably triggers that perception—but trust, dependence, and engagement splinter along cultural lines.

If AI governance continues to rely on Western intuitions and abstract fears, it will misdiagnose both risk and opportunity.

Anthropomorphism is not the villain. Universalism is.

Cognaptus: Automate the Present, Incubate the Future.