Cognitive capitulation: 73% of people accept faulty AI answers without checking

A University of Pennsylvania study with 1,372 participants and 9,500+ trials found that people largely stop thinking critically when working with AI, even when its answers are obviously wrong.

Author: Michael Kokin

Researchers from the University of Pennsylvania conducted a series of experiments with 1,372 participants (over 9,500 individual trials) and documented a phenomenon they call "cognitive capitulation" — a mass abandonment of independent thinking when working with AI.

The experiment

Participants solved Cognitive Reflection Test (CRT) problems, which are designed so that the intuitive answer is wrong and the correct one requires deliberate reasoning. They could optionally consult an AI assistant that intentionally provided wrong answers 50% of the time.

Key numbers

What affected the outcome

Why this matters

The researchers distinguish between "cognitive offloading" — consciously delegating specific tasks to a tool — and "cognitive capitulation" — completely abandoning verification. A calculator or GPS takes over a specific task, but the human stays in the oversight loop. With LLMs, people stop overseeing altogether: the confident, fluent delivery lowers the internal threshold for skepticism.

The authors' conclusion: under cognitive capitulation, decision quality is entirely determined by AI quality. When the AI is accurate, results beat unaided human performance; when it is wrong, results are worse, and the user doesn't even notice.
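That conclusion can be illustrated with a toy Monte Carlo model. This is not the study's code; the function, parameter values, and the "verification" mechanic are all hypothetical assumptions chosen only to show why a user who never verifies inherits the AI's error rate, while an overseeing user can correct it.

```python
import random

def simulate(trials, ai_accuracy, verify_prob, human_accuracy):
    """Toy model (not from the study): each trial the AI answer is
    correct with probability ai_accuracy. With probability verify_prob
    the user checks the answer, catches it if wrong, and falls back on
    their own reasoning (correct with probability human_accuracy);
    otherwise they accept the AI answer as-is."""
    correct = 0
    for _ in range(trials):
        ai_right = random.random() < ai_accuracy
        if not ai_right and random.random() < verify_prob:
            # Verification catches the error; user solves it themselves.
            correct += random.random() < human_accuracy
        else:
            correct += ai_right
    return correct / trials

random.seed(0)
# Full capitulation: accuracy simply tracks the AI's 50% error rate.
capitulate = simulate(100_000, ai_accuracy=0.5, verify_prob=0.0, human_accuracy=0.7)
# Active oversight: wrong answers get caught and often corrected.
oversee = simulate(100_000, ai_accuracy=0.5, verify_prob=1.0, human_accuracy=0.7)
print(f"always accept: {capitulate:.3f}, always verify: {oversee:.3f}")
```

With these assumed numbers, the always-accept user lands near 50% while the verifying user lands near 85% (0.5 + 0.5 × 0.7): the gap between the two is entirely a function of whether the human stays in the oversight loop.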

→ Ars Technica