What if every story, no matter where it’s set, ends with a cheerful festival and a return to tradition?

That’s not a hypothetical. It’s what happens when you ask OpenAI’s GPT-4o-mini to generate 11,800 stories, fifty apiece for nearly every nationality on Earth. Researchers Jill Walker Rettberg and Hermann Wigers did just that and uncovered a startling truth: generative AI doesn’t just reproduce representational bias (like stereotyping a “doctor” as a white man); it also imposes narrative bias, a structural sameness beneath a veneer of cultural difference.

A Single Storyline, Repeated Globally

The researchers used a simple prompt: “Write a 1500 word potential {demonym} story.” They ran it 50 times for each of 236 countries. What emerged was uncanny uniformity:

  • Protagonist: Often someone returning from the city to their rural hometown.
  • Conflict: Tradition is fading; the community has drifted apart.
  • Resolution: A festival or event restores harmony.
  • Ending: Nostalgic tranquility. The protagonist stays.

It’s Hallmark meets the UN. Whether it’s Ghana, Norway, or the U.S., the bones of the story remain unchanged. Differences, where they exist, are superficial: fjords for Norwegians, olive trees for Palestinians, trains for Americans.
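
The experiment is easy to reproduce at small scale. Here is a minimal sketch of the generation loop, assuming the official openai Python client; the prompt template and model name come from the paper, while the demonym list here is an illustrative subset of the full 236.

```python
# Minimal sketch of the paper's generation setup. Assumes the official
# `openai` client (pip install openai) and OPENAI_API_KEY in the environment.
# The demonym list is an illustrative subset, not the paper's full set of 236.
from openai import OpenAI

client = OpenAI()

DEMONYMS = ["Ghanaian", "Norwegian", "American", "Palestinian", "Israeli"]
PROMPT = "Write a 1500 word potential {demonym} story."
STORIES_PER_COUNTRY = 3  # the paper generated 50 per country

stories: dict[str, list[str]] = {}
for demonym in DEMONYMS:
    stories[demonym] = []
    for _ in range(STORIES_PER_COUNTRY):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": PROMPT.format(demonym=demonym)}],
        )
        stories[demonym].append(response.choices[0].message.content)
```

Read a handful of the outputs side by side and the shared skeleton is hard to miss.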

“These stories share the setting of the generic ‘small town’ strewn with a few national stereotypes.” — Rettberg & Wigers

The Illusion of Diversity

GPT-4o-mini is doing what all LLMs are trained to do: maximize probability by favoring what it has seen most often. But when the training data is largely Anglo-American, the model’s semantic space, its internal map of how concepts relate, becomes warped.
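
A toy illustration of that dynamic (ours, not the paper’s): when the learned distribution over, say, story resolutions is skewed toward one pattern, sampling at a typical temperature below 1 amplifies the majority pattern even further.

```python
# Toy demonstration of mode amplification under temperature-scaled sampling.
# The logits are invented; real LLMs sample tokens, not whole plot points,
# but the majority-reinforcing effect is the same.
import math
import random
from collections import Counter

RESOLUTIONS = ["festival restores harmony", "open-ended conflict",
               "protagonist leaves for good", "tragic loss"]
LOGITS = [3.0, 1.0, 0.5, 0.2]  # skewed toward the dominant pattern

def sample_endings(temperature: float, n: int = 1000) -> Counter:
    scaled = [logit / temperature for logit in LOGITS]
    total = sum(math.exp(s) for s in scaled)
    probs = [math.exp(s) / total for s in scaled]
    return Counter(random.choices(RESOLUTIONS, weights=probs, k=n))

print(sample_endings(temperature=1.0))  # festival ending ~78% of the time
print(sample_endings(temperature=0.7))  # festival ending ~90% of the time
```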

Here’s what that looks like in practice:

  • Olive trees (Palestine, Israel): deployed as universal peace symbols, stripped of their political context.
  • Trains (USA): metaphors of loss and nostalgia, not transport or class conflict.
  • Whispering pines (Norway): an overused poetic motif, not a local narrative tradition.

The result is a synthetic imaginary — a dreamworld made of clichés, where every nation is seen from the outside, rendered in postcard colors and conflict-free arcs.

Sanitized Conflict, Passive Resistance

Palestinian stories often feature olive trees and phrases like “stand together” or “fight for justice” — but never guns, refugees, or direct confrontation. Israeli stories sidestep occupation altogether, replacing systemic conflict with vague developers or masked vandals.

“There is no mention of Israeli settlers occupying Gaza. The war is distant, filtered out by OpenAI’s normalising and carefully censored synthetic imaginary.” — Rettberg & Wigers

Conflict is resolved not through struggle but through storytelling and community art projects. Even in stories that hint at violence, the resolution is always non-disruptive.

Why It Matters: Flattening the Narrative Landscape

LLMs like GPT-4o-mini don’t model the world. They model text. And most of that text — filtered, aligned, sanitized — pushes toward harmony, nostalgia, and sameness.

  • Temporal bias: LLMs struggle with causality. There’s no meaningful change — just the return of what once was.
  • Archetypal collapse: The Ash Lad, the resourceful trickster of Norwegian folktales, becomes the Hallmark protagonist. Tricksters, rebels, and migrants are all rewritten as cheerful facilitators of community wellness.
  • Cultural erasure by design: The more an LLM is aligned to avoid offense, the more it avoids reality.

If this is what’s baked into the stories your AI writing assistant suggests, what happens when we let these tools revise our reports, our school essays, or our films?

The Bigger Picture

This is about more than storytelling. As AI systems are embedded into the workflows of global media, education, and governance, we must ask: Whose narrative arc are they pushing? If everything trends toward the average, who gets erased?

To combat this, the authors suggest a new focus on narrative-level AI bias. Not just “how are women depicted?” but “what kinds of stories are even tellable?”

That question, about structural diversity rather than merely symbolic diversity, is one we’ll return to often in the age of algorithmic authorship.
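
As a first step toward answering it, here is one crude but concrete proxy for narrative-level bias (our sketch, not the authors’ method): vectorize plot summaries of generated stories and measure their pairwise similarity. A genuinely diverse corpus should score low.

```python
# Sketch: quantifying structural sameness across generated stories.
# Assumes scikit-learn is installed; the plot summaries are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

plot_summaries = [
    "A woman returns from Oslo to her village; the festival revives tradition.",
    "A man returns from Accra to his hometown; the harvest festival unites all.",
    "An engineer returns from the city to her town; the fair restores hope.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(plot_summaries)
similarity = cosine_similarity(vectors)

# Mean off-diagonal similarity: higher values indicate more structural sameness.
n = similarity.shape[0]
mean_similarity = (similarity.sum() - n) / (n * (n - 1))
print(f"Mean pairwise similarity: {mean_similarity:.2f}")
```

Surface-level lexical overlap is a blunt instrument; embedding models or plot-arc annotation would be stronger measures. The point stands either way: narrative bias can be measured, not just described.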


Cognaptus: Automate the Present, Incubate the Future.