The Great Cognitive Heist: What We're Really Losing
Part 3 of 5: The AI Learning Revolution Series
There's a scene in the movie "Inception" where the characters realize they've been living in a dream for so long that they can't remember what reality feels like. They've become so accustomed to the artificial world that the real one seems foreign and difficult to navigate.
I keep thinking about that scene when I read the research on AI's impact on human cognition. We're not just talking about people becoming a little lazy or dependent on technology. We're talking about something much more profound: the systematic theft of human cognitive capabilities, happening so gradually that most people don't even realize it's occurring.
The hollow feeling I described in the first post? The brain changes documented in the second? Those are just the symptoms. Today, I want to talk about what we're actually losing, and why it matters more than you might think.
The Memory Heist: When Your Brain Stops Recording
Let's start with something fundamental: memory formation. Not the kind of memory where you try to remember where you put your keys, but the deep, integrative memory that transforms experiences into knowledge and understanding.
The MIT study revealed that participants using ChatGPT showed weaker alpha and theta brain-wave activity, the very patterns associated with deep memory processing and information integration [1]. This isn't just about forgetting facts; it's about the fundamental process by which experiences become learning.
Think about Sarah from our first post, who couldn't remember what she had "written" just hours after submitting her AI-assisted essay. This isn't simply a case of poor attention or academic dishonesty. It's evidence of a breakdown in the cognitive processes that transform experience into knowledge.
When AI tools do the cognitive work for us, our brains don't form the neural pathways that would normally encode that information into long-term memory. It's like having someone else do your workout for you: you might get the immediate result (a completed task), but you don't get the long-term benefit (stronger cognitive muscles).
I experienced this personally when I realized I couldn't recall the key arguments from articles I had "read" with AI assistance. The AI had processed the information, summarized it beautifully, and even connected it to other concepts. But my brain hadn't done the work of wrestling with the ideas, so they never became part of my knowledge base.
The Critical Thinking Erosion
Perhaps even more concerning is what's happening to our critical thinking abilities. The Microsoft Research study found that knowledge workers increasingly used critical thinking only for quality assurance and verification rather than for deep analysis or original insight generation [4].
This represents a fundamental shift in the nature of cognitive work, from active creation and analysis to passive evaluation and correction. Instead of generating our own insights, we're becoming quality control inspectors for AI-generated thoughts.
The Nature study of preservice mathematics teachers revealed that AI dependency had significant negative effects on six critical 21st-century skills: problem-solving ability, critical thinking, creative thinking, collaboration skills, communication skills, and self-confidence [3]. These aren't just academic competencies; they're the foundational capabilities that enable people to function effectively in a complex, rapidly changing world.
Think about what this means for how we approach complex problems. Traditional critical thinking involves breaking down problems into component parts, evaluating evidence, making inferences, constructing arguments, and synthesizing information into new insights. When AI tools handle these processes for us, we lose practice with the very skills that enable independent reasoning.
The Creativity Crisis
One of the most heartbreaking losses is what's happening to human creativity and original thinking. Teachers evaluating AI-assisted student work consistently describe it as "soulless", a word that captures something essential about what we're losing.
The research documents a phenomenon called "mechanized convergence," where people using AI tools produce increasingly similar outputs compared to those working independently [4]. The diversity of thought and approach that drives innovation and problem-solving is being systematically eroded.
This isn't just an aesthetic concern. Creativity and original thinking are what enable us to solve novel problems, adapt to changing circumstances, and find new ways of understanding the world. When AI tools push us toward convergent thinking patterns, we lose the cognitive diversity that makes human societies resilient and adaptive.
I've seen this in my own field, where multiple people using AI tools to generate initial analyses often produce remarkably similar work. The AI tools, trained on similar datasets and optimized for similar objectives, naturally push users toward similar conclusions and approaches. The unique perspectives and creative insights that typically emerge from diverse human minds are being homogenized into AI-generated uniformity.
The Switching Cost: When Independence Becomes Impossible
Remember the MIT study's finding about "switching costs": how people who had been using ChatGPT struggled when forced to work without AI assistance? This reveals something deeply troubling about the nature of AI dependency.
Participants who had been using AI tools showed reduced brain connectivity and had difficulty recalling their own previous work when the AI assistance was removed [1]. It was like watching people try to walk after their legs had been in casts for months: the muscles had atrophied from disuse.
But here's what really keeps me up at night: the switching cost appears to be asymmetrical. While people who had been working independently could adapt to using AI tools without major problems, the reverse transition, from AI dependency to independent thinking, was much more difficult.
This suggests that AI dependency might create a kind of cognitive one-way street. Once you become accustomed to AI assistance, returning to independent thinking becomes increasingly challenging. The neural pathways that enable independent cognitive work may actually weaken from disuse.
The Metacognitive Blindness
Perhaps the most insidious loss is what's happening to our metacognitive abilities, our capacity to monitor and regulate our own thinking processes. AI assistance appears to reduce users' metacognitive awareness, as the AI system takes over many of the monitoring and adjustment functions that are essential for independent learning.
When we don't have to think about our thinking, we lose touch with our own cognitive processes. If we can't accurately assess our own understanding, how can we identify knowledge gaps or learning needs? If we can't monitor our own reasoning processes, how can we catch errors or biases in our thinking?
This metacognitive blindness helps explain the perception-reality gap I discussed in the previous post. People using AI tools consistently overestimate their own capabilities and the tools' benefits because they've lost the ability to accurately assess their own cognitive engagement.
The Emotional and Psychological Toll
The research also reveals concerning emotional and psychological dimensions to these cognitive changes. Students report feeling disconnected from their own work, uncertain about their capabilities, and anxious about their future prospects [6].
There's a growing sense that AI tools are creating a generation of students who can produce acceptable work without developing genuine competence. They can generate impressive outputs with AI assistance but struggle to engage in basic academic discussions about their own work.
One student described it to me this way: "I feel like I'm becoming a curator of AI content rather than a creator of original thought. I can make things that look smart, but I don't feel smart."
This psychological impact extends beyond individual students to broader questions of human agency and self-efficacy. When people consistently rely on AI systems to handle cognitive tasks, they may begin to doubt their own intellectual capabilities. The confidence that comes from successfully working through challenging problems independently, what psychologists call self-efficacy, may be undermined by constant AI assistance.
The Workplace Expertise Crisis
The implications for professional development are equally troubling. If AI tools are making even experienced professionals slower and less effective, as the METR study suggests [2], what does this mean for the development and maintenance of human expertise?
Consider the implications for training new professionals. If AI tools interfere with skill development, their use during training periods could have lasting negative effects on professional competency. New hires who learn to rely on AI assistance from the beginning may never develop the independent capabilities that enable them to handle complex, novel problems or advance to leadership positions.
We might be creating a workforce of people who appear competent with AI assistance but are actually cognitively dependent on these tools. What happens when they encounter problems that AI can't solve, or when they need to make decisions in contexts where AI assistance isn't available or appropriate?
The Societal Stakes
The cumulative effect of these individual cognitive changes could have profound societal implications. Democratic participation requires citizens capable of critical thinking, independent reasoning, and thoughtful deliberation. Scientific progress depends on researchers who can generate novel hypotheses, design creative experiments, and synthesize complex information.
If AI tools systematically impair these capabilities, their widespread adoption could undermine the cognitive foundations of democratic society and scientific progress. We could find ourselves in a world where the tools designed to augment human intelligence have instead diminished it, leaving us less capable of addressing the complex challenges we face.
The Urgency of Cognitive Protection
Here's what makes this situation so urgent: these changes are happening right now, in real-time, to millions of people around the world. Every day that passes without better frameworks for AI integration, more students become cognitively dependent, more professionals lose essential skills, and more institutions struggle with challenges they're not equipped to address.
We're not talking about some distant future scenario. We're talking about changes that are already underway, documented by rigorous scientific research, and accelerating as AI tools become more sophisticated and more widely adopted.
Why We Need Neogogy: Protecting Human Cognitive Capabilities
This is why we urgently need what I call "neogogy": a new framework for learning and cognitive development that explicitly protects and develops human thinking capabilities while thoughtfully integrating AI tools.
Traditional educational approaches were designed for a world without AI. They assumed that students would do their own cognitive work, that effort and outcome would be correlated, and that assessment could reliably measure learning and capability. AI has shattered these assumptions.
Neogogy starts with the recognition that cognitive development must be the primary goal of education, not task completion or apparent productivity. It acknowledges that AI tools can either enhance or impair human cognitive capabilities, depending on how they're used. It provides frameworks for distinguishing between AI use that supports learning and AI use that replaces it.
Most importantly, neogogy treats the preservation and development of human cognitive capabilities as a moral imperative, not just an educational goal. The capabilities being eroded by inappropriate AI use (critical thinking, creativity, memory formation, metacognitive awareness) are fundamental to human agency, democratic participation, and the ability to navigate an increasingly complex world.
We can't afford to lose these capabilities. The research has shown us what's at stake. Now we need to act on what we've learned, before the cognitive heist becomes complete.
In the next post, I'll explore how educational institutions are struggling to adapt to these challenges, and why they need comprehensive frameworks like neogogy to navigate the AI age successfully.
References:
[1] Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv preprint arXiv:2506.08872.
[2] Becker, J., Rush, N., Barnes, E., & Rein, D. (2025). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. arXiv preprint arXiv:2507.09089.
[3] Zhang, D., Wijaya, T. T., Wang, Y., Su, M., Li, X., & Damayanti, N. W. (2025). Exploring the relationship between AI literacy, AI trust, AI dependency, and 21st century skills in preservice mathematics teachers. Scientific Reports, 15.
[4] Lee, H. P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers. Microsoft Research.
[6] Attewell, S. (2025, May 21). Student Perceptions of AI 2025. National Centre for AI.
Next in this series: "The Classroom Crisis: When Schools Don't Know What Learning Means Anymore" - where we explore how educational institutions are struggling to adapt and why they need comprehensive frameworks for the AI age.