If we measure the impact of AI by how much easier it makes our lives, ChatGPT is a clear winner. But if we start asking what it’s doing to our minds, the answers get more uncomfortable.

A new study by Georgios P. Georgiou titled “ChatGPT produces more ‘lazy’ thinkers” provides empirical evidence that using ChatGPT for writing tasks significantly reduces students’ self-reported cognitive engagement. While this aligns with common intuition (many of us have sensed how AI flattens the peaks of our mental effort), the paper goes a step further: it puts numbers to the problem, and the numbers are hard to ignore.

The Experiment: Writing with vs. without ChatGPT

The study randomly assigned 40 university students to two groups:

  • Control group: wrote a 300-word argumentative essay without any assistance.
  • ChatGPT group: wrote the same essay with access to ChatGPT (GPT-3.5).

After completing the task, both groups filled out a cognitive engagement questionnaire (CES-AI), which measured:

  1. Deep understanding
  2. Mental effort
  3. Sustained attention
  4. Strategic thinking

Each item was scored on a 1–5 Likert scale. The results were striking:

  Group                   Avg. CES-AI Score
  Control (no ChatGPT)    4.19
  ChatGPT-assisted        2.95

A one-way ANOVA confirmed the difference was statistically significant (p < 0.001).
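To make the test concrete, here is a minimal sketch of the analysis pipeline in Python. The Likert responses are simulated to roughly match the reported group means, and averaging the four items into a composite score is our assumption; neither reflects the study’s actual data or code.

```python
# Minimal sketch: per-item Likert scores are averaged into a composite
# CES-AI score per student, then the two groups are compared with a
# one-way ANOVA. All numbers are SIMULATED to approximate the reported
# means; this is NOT the study's data, and the mean-of-items composite
# is our assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N_PER_GROUP, N_ITEMS = 20, 4  # 40 students split evenly; 4 CES-AI dimensions

def simulate_group(target_mean: float) -> np.ndarray:
    """Draw integer 1-5 Likert responses centered on target_mean,
    then average the four items into one composite per student."""
    items = rng.normal(loc=target_mean, scale=0.7, size=(N_PER_GROUP, N_ITEMS))
    items = np.clip(np.rint(items), 1, 5)  # snap onto the 1-5 scale
    return items.mean(axis=1)

control = simulate_group(4.19)  # no-ChatGPT group
chatgpt = simulate_group(2.95)  # ChatGPT-assisted group

# With only two groups, a one-way ANOVA is equivalent to an
# independent-samples t-test (F is the square of t).
f_stat, p_value = stats.f_oneway(control, chatgpt)
print(f"Control mean {control.mean():.2f} | ChatGPT mean {chatgpt.mean():.2f}")
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```

With a gap this large between group means, even a modest sample of 20 per group yields a p-value far below 0.001 in the simulation, which is why the reported effect is hard to dismiss as noise.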

Offloading Effort, Offloading Ownership?

These results build on a growing concern that LLMs, while making tasks more efficient, also reduce the cognitive ownership students take over their work. The phenomenon is described as cognitive offloading—a tendency to delegate not just information retrieval (like Googling) but now also synthesis, reasoning, and even decision-making to AI systems.

The problem isn’t just that students used ChatGPT—it’s how they used it. While some might expect AI to act as a cognitive scaffold, encouraging reflection or expanding argument space, it often ends up doing the thinking for us. Students in the ChatGPT group reported less exploration of alternative perspectives and less focus. In essence, the AI became not a thinking partner, but a thinking substitute.

What This Means Beyond the Classroom

The implications stretch far beyond education:

  • In enterprise contexts, we increasingly deploy LLM copilots in coding, writing, and research. If these tools similarly dull our cognitive sharpness, we may be automating not just workflows but mental disengagement.
  • For product designers, there’s a design opportunity: how can we create LLM interfaces that encourage metacognition instead of bypassing it? (A minimal sketch of one such interface follows this list.)
  • In policymaking, educational institutions and corporate trainers may need to rethink how AI is integrated—focusing not just on output quality but also on process quality.
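As one illustration of what a metacognition-first interface might look like, the sketch below withholds finished drafts and forces the model into a Socratic role. Everything here is hypothetical: the `socratic_wrap` helper and the system-prompt wording are invented for illustration, not drawn from the study or any existing product.

```python
# Hypothetical sketch of a "metacognition-first" LLM interface.
# Instead of returning a finished essay, the system prompt forces the
# model to question the user's thinking before any drafting happens.
# The helper name and prompt wording are invented for illustration.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a writing coach, not a ghostwriter. Never produce a full "
    "draft. Instead: (1) restate the user's thesis in one sentence, "
    "(2) ask three probing questions that challenge it, and (3) name "
    "one counterargument the user must address in their own words."
)

def socratic_wrap(user_request: str) -> list[dict]:
    """Build a message list that any OpenAI-style chat-completion API
    could consume; no network call is made in this sketch."""
    return [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": user_request},
    ]

if __name__ == "__main__":
    messages = socratic_wrap(
        "Write a 300-word argumentative essay on social media and democracy."
    )
    for m in messages:
        print(f"[{m['role']}] {m['content']}")
```

The specific wording matters less than the inversion of defaults: the interface makes questioning the cheap action and drafting the expensive one, so the student’s own reasoning stays in the loop.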

Rethinking What “Smart” Means

There’s a seductive logic in believing that more AI means more intelligence. But Georgiou’s study reminds us: smart tools can make us dumber if we stop thinking alongside them.

That doesn’t mean banning AI. It means redesigning the way we use it. Rather than treat ChatGPT as an oracle, we might instead teach students (and employees) to treat it as a provoker—a sparring partner that forces them to challenge, not just copy.

What this study brings to the table is not just a critique, but a research-backed call to re-engage. If we want human intelligence to evolve with AI, we need to start asking harder questions—about design, pedagogy, and the psychological price of convenience.


Cognaptus: Automate the Present, Incubate the Future.