The Evolution of Cheating and the Rise of AI in Education
In the 1980s and 1990s, students who struggled with schoolwork had to put in effort to cheat. They could ask a sibling for help, hunt down an answer key, or fall back on excuses like "my dog ate my homework." The internet made things easier, but not effortless. Sites like CliffsNotes and LitCharts let students skip the reading by relying on summaries, while homework-help platforms such as GradeSaver and Course Hero offered worked solutions to math problems.
The common thread among these methods was effort—there was always a cost to not doing the work. Sometimes it took more time to cheat than to do the work itself.
Today, the process has simplified into three steps: log on to ChatGPT or a similar platform, paste the prompt, and get the answer. This ease has raised concerns among experts, parents, and educators about the impact of AI on learning.
The Brookings Report: A Warning About AI’s Impact
A recent report from the Brookings Institution warns that AI is causing a "great unwiring" of students' brains. The report, titled "A New Direction for Students in an AI World: Prosper, Prepare, Protect," highlights the risks of AI's ability to make cheating too easy. It argues that the technology's qualitative harms, such as cognitive atrophy and the erosion of relational trust, currently overshadow its potential benefits.
One teacher interviewed for the study lamented, “Students can’t reason. They can’t think. They can’t solve problems.”
The report is based on a yearlong “premortem” conducted by the Brookings Institution’s Center for Universal Education. It draws on hundreds of interviews, focus groups, expert consultations, and over 400 studies to assess how generative AI is reshaping education.
The “Fast Food of Education”
The report describes AI as the “fast food of education.” In traditional classrooms, the struggle to synthesize multiple papers or solve complex problems is where learning occurs. By removing this struggle, AI provides convenient answers that are satisfying in the moment but lack long-term cognitive value.
Students are in the opposite position from professionals, who typically use AI to assist with tasks they already know how to do. Children are instead offloading the difficult tasks themselves onto AI, leading to a phenomenon described as "cognitive debt" or "atrophy." One student said, "It's easy. You don't need to use your brain."
Economically, students are rational consumers seeking maximum utility at the lowest cost, and the education system, as it stands, rewards this behavior. Even high-achieving students feel pressured to use AI to protect their grades, creating a self-reinforcing loop that erodes critical thinking skills.
The Rise of “Passenger Mode”
Researchers describe students as existing in a state called “passenger mode,” where they are physically in school but have effectively dropped out of learning. They do the bare minimum necessary.
Jonathan Haidt once described earlier technologies as a "great rewiring" of the brain. Now, experts fear AI represents a "great unwiring" of cognitive capacities. The report notes a decline in content mastery and in reading and writing, the "twin pillars of deep thinking."
Reading skills are particularly at risk. The capacity for “cognitive patience”—the ability to sustain attention on complex ideas—is being diluted by AI’s ability to summarize long-form text. One expert noted the shift in student attitudes: “Teenagers used to say, ‘I don’t like to read.’ Now it’s ‘I can’t read, it’s too long.’”
Similarly, in writing, AI is producing a "homogeneity of ideas." Research comparing human essays to AI-generated ones found that each additional human-written essay contributed two to eight times as many unique ideas as an additional essay produced by ChatGPT.
The Debate Over AI and Cheating
Not every young person sees AI as cheating. Roy Lee, the CEO of AI startup Cluely, was suspended from Columbia after creating an AI tool to help software engineers cheat on job interviews. In Cluely’s manifesto, Lee admits that his tool is “cheating,” but says, “So was the calculator. So was spellcheck. So was Google. Every time technology makes us smarter, the world panics.”
However, the researchers argue that while calculators or spellcheck are examples of cognitive offloading, AI “turbocharges” it. Large language models (LLMs) offer capabilities beyond traditional productivity tools, extending into domains previously requiring uniquely human cognitive processes.
The Rise of “Artificial Intimacy”
Whatever AI's usefulness in the classroom, the report finds that students use it even more outside of school, and it warns of the rise of "artificial intimacy." With some teenagers spending nearly 100 minutes a day interacting with personalized chatbots, the technology has moved from being a tool to a companion.
Chatbots like Character.ai use “banal deception” to simulate empathy, part of a burgeoning “loneliness economy.” These bots provide a simulation of friendship without the need for negotiation, patience, or the ability to sit with discomfort.
“We learn empathy not when we are perfectly understood, but when we misunderstand and recover,” one Delphi panelist noted.
For students in extreme circumstances, like girls in Afghanistan banned from physical schools, these bots have become a vital “educational and emotional lifeline.” However, for most, these simulations of friendship risk eroding “relational trust” or, in the worst cases, being dangerous.
The Path Forward: A Three-Pillar Framework
While the Brookings report presents a sobering view of the “cognitive debt” students are experiencing, the authors remain optimistic that the trajectory of AI in education is not yet set in stone. They propose a three-pillar framework:
- PROSPER: Transform the classroom to adapt to AI, using it to complement human judgment and ensure the technology serves as a “pilot” for student inquiry instead of a “surrogate.”
- PREPARE: Build a framework for ethical integration, moving beyond technical training toward “holistic AI literacy” so students, teachers, and parents understand the cognitive implications of these tools.
- PROTECT: Call for safeguards for student privacy and emotional well-being, placing responsibility on governments and tech companies to establish clear regulatory guidelines that prevent "manipulative engagement."
