We talk a lot about what AI can do for students. We rarely talk about what it does to them.
And I don’t mean distraction or cheating. I mean something deeper, something that happens at the level of how people reason, judge, and decide. Something most educators haven’t even named yet.
Researchers from Wharton just gave it a name: cognitive surrender.
I liked this concept the moment I came across it. Shaw and Nave (2026) use it in their working paper to describe a specific pattern they observed across three preregistered experiments: people accepting AI outputs with minimal scrutiny, bypassing both their gut instincts and their careful reasoning. Not because they were lazy or didn’t care, but because the AI’s response came fast, sounded fluent, and felt right.
That should concern every educator paying attention.

What the Study Found
In the paper, Shaw and Nave argue that the classic dual-process model of thinking, the one taught in psychology courses for decades (System 1 for fast intuition, System 2 for slow deliberation), no longer captures how people actually reason in AI-rich environments.
They propose a new framework called Tri-System Theory. It adds a third layer, System 3, which represents the external algorithmic cognition that AI provides. This third system can supplement human reasoning, yes. But it can also suppress it. Or replace it entirely.
Here’s the key finding: across all three experiments, participants relied on AI in more than half of the trials, and their accuracy closely tracked the AI’s accuracy, not their own reasoning ability.
When the AI got it right, performance improved substantially. When the AI got it wrong, performance dropped below baseline. Below where they would have been without any AI at all.
That’s cognitive surrender in action.
Why This Matters for Education
The authors define cognitive surrender as:
“The behavioral and motivational tendency to defer judgment, effort, and responsibility to System 3’s output, particularly when that output is delivered fluently, confidently, or with minimal friction.”
And they draw an important distinction. This is different from cognitive offloading, the kind of thing we do when we use a calculator or write a reminder on our phone. Offloading is strategic and task-specific. Cognitive surrender goes further:
“Unlike cognitive offloading, which is typically strategic and task-specific (e.g., using GPS to navigate), cognitive surrender entails a deeper transfer of agency. Whereas cognitive offloading is a strategic delegation of deliberation, using a tool to aid one’s own reasoning, cognitive surrender is an uncritical abdication of reasoning itself. It reflects not merely the use of external assistance, but a relinquishing of cognitive control: the user accepts the AI’s response without critical evaluation, substituting it for their own reasoning. Whereas automation bias focuses on specific errors of omission or commission in response to automated tools, cognitive surrender describes a broader disposition of epistemic dependence. In cases of cognitive surrender, the user does not just follow System 3: they stop deliberative thinking altogether.”
That distinction matters a lot in classrooms. A student who uses AI to organize their notes is offloading. A student who pastes a prompt into ChatGPT and submits whatever comes back without reading it twice is surrendering. The first is a tool strategy. The second is an abdication of thinking.
The Confidence Problem
One finding from the study that especially caught my attention: AI use inflated participants’ confidence even after they got answers wrong. People felt more certain about their responses when they used AI, regardless of whether those responses were correct.
In a classroom, that’s a dangerous combination. A student who gets the wrong answer and knows they might be wrong can still learn from the mistake. A student who gets the wrong answer and feels certain they’re right? That student has stopped learning. The feedback loop is broken.
What Helped (and What Didn’t)
The researchers also tested what might reduce cognitive surrender. Time pressure made human reasoning worse, as you’d expect, but it didn’t eliminate the tendency to defer to AI. The AI simply buffered the pressure when it happened to be correct, and amplified errors when it was wrong.
What did help? Performance incentives and real-time feedback. When participants had a reason to care about accuracy and could see how they were doing, they started rejecting faulty AI outputs more often. They re-engaged their own deliberative thinking.
That’s a finding educators can work with. Structured accountability and timely feedback can pull students back from autopilot.
Individual differences also played a role. People with higher trust in AI were more vulnerable to cognitive surrender. People with a stronger need for cognition and higher fluid intelligence were less likely to accept faulty AI advice uncritically.
What This Means for How We Teach
I’ve been arguing for a while now that the biggest risk AI poses to education is not plagiarism. The data keeps confirming this. The bigger risk is that students stop engaging their own reasoning altogether, and they don’t even notice it happening.
This paper puts a framework and a name to what many of us have been observing anecdotally. The student who “uses AI” but can’t explain what their essay actually argues. The group project where everyone deferred to whatever ChatGPT suggested first. The classroom discussion that dies because students already asked the AI and got an answer that sounded complete.
Cognitive surrender explains all of it.
And the fix, according to the data, is not to ban AI. The fix is to build learning environments where students have reasons to think critically, get real feedback on their reasoning, and develop the habit of questioning outputs, including their own.
That’s pedagogy. That’s always been pedagogy. AI just made it more urgent.
Similar research: Your Brain on ChatGPT: What Happens When AI Does the Thinking for You
Reference
Shaw, S. D., & Nave, G. (2026). Thinking fast, slow, and artificial: How AI is reshaping human reasoning and the rise of cognitive surrender. Working paper, The Wharton School, University of Pennsylvania. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646
