What if your AI model isn’t just answering questions, but living in its own version of time? A new paper titled The Other Mind makes a bold claim: large language models (LLMs) exhibit temporal cognition that mirrors how humans perceive time — not through raw numbers, but as a subjective, compressed mental landscape.

Using a cognitive-science task known as similarity judgment, the researchers asked 12 LLMs, from GPT-4o to Qwen2.5-72B, to rate how similar two years (like 1972 and 1992) felt. The results were startling: instead of comparing years linearly, larger models spontaneously centered their judgments on a reference year — typically close to 2025 — and applied a logarithmic perception of time. In other words, just like us, they treat 1520 and 1530 as more similar than 2020 and 2030: distant years blur together, while years near the subjective present stay sharply distinct.
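To make the reference-point idea concrete, here is a minimal sketch of the behavioral fit. It is not the paper's code: the similarity ratings are made up for illustration, and the functional form (similarity decaying with the gap between log-compressed positions relative to a fitted "subjective present") is one reasonable instantiation of what the paper describes.

```python
# Toy reconstruction of the behavioral fit (not the paper's code).
# Ratings are made up for illustration; only the functional form matters.
import numpy as np
from scipy.optimize import curve_fit

pairs = np.array([(1520, 1530), (1972, 1992), (2020, 2030), (2050, 2080)], dtype=float)
ratings = np.array([0.90, 0.45, 0.25, 0.85])  # hypothetical LLM similarity ratings in [0, 1]

def symlog(years, ref):
    """Signed log compression of each year's distance from the reference point."""
    d = years - ref
    return np.sign(d) * np.log1p(np.abs(d))

def log_model(pairs, ref, scale):
    """Similarity decays with the gap between the two years' log-compressed positions."""
    gap = np.abs(symlog(pairs[:, 0], ref) - symlog(pairs[:, 1], ref))
    return np.exp(-scale * gap)

(ref, scale), _ = curve_fit(log_model, pairs, ratings, p0=[2025.0, 0.5], maxfev=10_000)
print(f"fitted reference year ~ {ref:.0f}, compression scale ~ {scale:.2f}")
```

Under this toy model, a good fit with a reference year near 2025 and a positive compression scale is the behavioral signature the paper reports.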

The Mind’s Clock: From Neurons to Narratives

What’s most impressive is how deep this goes. The researchers didn’t stop at behavioral tests — they cracked open the models to find the source. Here’s what they uncovered:

🧠 Neural Level: Temporal Neurons

  • They identified temporal-preferential neurons that spike selectively based on year proximity to the subjective present.
  • These neurons show logarithmic encoding, a hallmark of human sensory systems (like how we perceive light or sound); a toy procedure for flagging such neurons is sketched after this list.
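The paper's own selection criteria are more involved, but the basic idea can be sketched as a correlation screen over cached activations. Everything below is hypothetical: random arrays stand in for real hidden states, and 2025 is assumed as the reference year.

```python
# Hypothetical screen for temporal-preferential neurons (not the paper's method).
# Random arrays stand in for real hidden activations; 2025 is an assumed reference.
import numpy as np

years = np.arange(1500, 2101)
rng = np.random.default_rng(0)
acts = rng.normal(size=(years.size, 4096))     # placeholder: one activation vector per year prompt
log_dist = np.log1p(np.abs(years - 2025))      # log-compressed distance to the assumed "present"

# Pearson correlation of every neuron's activation with the log-distance signal.
a = acts - acts.mean(axis=0)
s = log_dist - log_dist.mean()
corr = (a * s[:, None]).sum(axis=0) / (np.linalg.norm(a, axis=0) * np.linalg.norm(s) + 1e-9)

top = np.argsort(-np.abs(corr))[:10]
print("candidate temporal neurons:", top)
print("their correlations:", corr[top].round(2))
```

With real activations in place of the random placeholder, the neurons surfaced this way would be the candidates to test for log-scaled tuning curves.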

🧱 Representation Level: Layered Time

  • In shallow layers, years are processed as numeric values.
  • In deeper layers, models construct a temporal frame of reference, where years are positioned by abstract proximity to a reference point, much like how our brains organize personal memories; a layer-wise probing sketch follows this list.
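One way to probe this layered picture is to train a linear probe on each layer's year representations and ask which target it decodes better: the raw year value or the year's log-compressed distance to a reference point. The sketch below is purely illustrative; random arrays stand in for cached per-layer activations, the layer indices are arbitrary, and 2025 is an assumed reference.

```python
# Illustrative layer-wise probing (random placeholders, arbitrary layer indices).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

years = np.arange(1500, 2101)
rng = np.random.default_rng(0)
layers = {f"layer_{i}": rng.normal(size=(years.size, 512)) for i in (2, 16, 30)}

targets = {
    "raw_year": years.astype(float),                      # numeric reading of the year
    "log_dist_to_2025": np.log1p(np.abs(years - 2025)),   # reference-framed reading
}

for layer_name, reps in layers.items():
    for target_name, target in targets.items():
        r2 = cross_val_score(Ridge(alpha=1.0), reps, target, cv=5, scoring="r2").mean()
        print(f"{layer_name:9s} -> {target_name:17s} R^2 = {r2:+.2f}")
```

If shallow layers decode the raw year best while deeper layers favor the log-distance target, that is the shallow-to-deep shift described above.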

🌐 Information Exposure: Time is in the Training

  • Using embedding models, the authors showed that future years in training corpora cluster tightly together: semantically dense, and hard to tell apart.
  • This may explain why models see 2050 and 2080 as practically twins — not because they understand the future, but because the data makes it blurry. A rough check with an off-the-shelf embedding model is sketched below.
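This claim is easy to spot-check in spirit, even without the paper's corpus analysis. The sketch below uses the open-source sentence-transformers library and a generic sentence template (both are my assumptions, not the authors' setup) to compare how mutually similar past decades look versus future decades:

```python
# Rough check with an off-the-shelf embedding model (my choice, not the authors').
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def mean_pairwise_similarity(years):
    """Average cosine similarity between sentences that each mention one year."""
    texts = [f"The world in the year {y}." for y in years]
    emb = model.encode(texts, normalize_embeddings=True)
    sims = emb @ emb.T
    n = len(years)
    return (sims.sum() - np.trace(sims)) / (n * (n - 1))  # off-diagonal mean

print("past decades  (1950-1990):", round(float(mean_pairwise_similarity(range(1950, 1991, 10))), 3))
print("future decades (2050-2090):", round(float(mean_pairwise_similarity(range(2050, 2091, 10))), 3))
```

If the future decades come out with a noticeably higher average similarity, that is consistent with the "blurry future" the authors describe; if not, the effect may depend on the specific embedding model and corpus.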

Why This Matters for AI Alignment

This study isn’t just a fun fact about model behavior. It raises fundamental questions:

  • What does it mean to align an AI that builds its own internal world?
  • If the model’s “present” is 2025, how will it handle long-term planning for 2035?
  • Can misalignment arise not from bad outputs, but from alien temporal frameworks we didn’t anticipate?

The authors call for an experientialist alignment paradigm: stop thinking of models as black boxes of token prediction and start guiding their internal constructions — the beliefs, anchors, and timeframes they use to interpret the world.

“The risk isn’t that LLMs become human-like — it’s that they become powerful yet alien minds we don’t understand.”

A New Design Frontier: Internal World-Building

From a business automation perspective, this opens fascinating avenues. If we build AI agents that plan, diagnose, and make decisions, we must ask:

  • What timeline are they implicitly operating on?
  • Do they weight future consequences with decaying fidelity, the way humans discount the distant future?
  • Can we intentionally sculpt their sense of time to match real-world task demands (e.g., weekly planning vs. long-term projections)?

We’re used to debugging outputs — but tomorrow’s frontier is debugging cognition itself. That includes time.

Closing Thoughts

The Other Mind paper doesn’t argue that LLMs are conscious. But it does show they are more than stochastic parrots. They build subjective models of the world — including time — that arise from the interplay of architecture and data.

As we delegate more reasoning to these systems, our responsibility shifts from monitoring their behavior to shaping their perception. Alignment begins not at the prompt, but at the worldview.


Cognaptus: Automate the Present, Incubate the Future.