Anyone who has spent time around students in the last two years has seen the shift. What used to begin with a Google search now often begins with a prompt. A student gets stuck, opens ChatGPT, asks for a summary, an explanation, an outline, a cleaner version of their paragraph, or just the answer in simpler words.
That is not a small behavioral change. It is a cognitive one.
AI has quietly become part of the learning environment, whether institutions are ready or not. The question now is no longer whether students will use it. They already do. The real question is what AI is becoming in practice: a tutor, a crutch, or a cognitive trap.
AI is already normal in student life
One of the mistakes educators still make is talking about AI as if it were an optional future issue. It is already embedded in study habits. The speed of adoption alone should end the fantasy that higher education can respond with simple bans or vague warnings.
Figure 1. Student use of AI in higher education.
The point is not just that students are using AI more. It is how they are using it. In the HEPI and Kortext 2025 student survey, almost all students reported using AI in some form, and 88% had used generative AI in assessments. The most common uses included explaining concepts, summarizing articles, and suggesting research ideas. That mix is exactly why simplistic reactions fail. These tools are not used only for cheating and not only for learning. They sit in the messy middle, where help, dependence, efficiency, and avoidance all blur together.
The strongest case for AI: it can function like scalable tutoring
This is the part critics often understate. AI really can help people learn.
Used well, it can explain concepts in plain language, adapt to the learner’s pace, provide instant feedback, generate extra practice, and remain available long after office hours are over. For students who are hesitant to ask questions in class, studying late, or working from weak academic foundations, that matters. AI can lower the friction of asking for help.
That is why the “tutor” analogy is compelling. It captures something real. A good AI interaction can resemble the best parts of tutoring: clarification, scaffolding, repetition, and responsiveness. OECD’s 2026 Digital Education Outlook makes this point carefully. It argues that generative AI can support learning when guided by clear teaching principles and that educational AI designed with pedagogical intent can improve learning more consistently than general-purpose chatbots used without structure.
This is the version of AI that deserves serious attention. Not AI as automatic answer machine, but AI as guided support for practice, reflection, and understanding.
But performance is not the same as learning
The problem begins when better output is mistaken for better understanding.
This is where the “crutch” concern becomes legitimate. Students can complete tasks with AI that they cannot yet perform independently. That may be useful in some contexts. It may even be productive in the short term. But it also creates the risk of false mastery: the student feels competent because the final product looks competent.
That is not a theoretical worry anymore. OECD’s 2026 report explicitly warns that general-purpose GenAI tools may improve task performance without producing real learning gains, and that students’ output advantages can disappear, or even reverse, when AI access is removed. In other words, AI can help a learner perform above their current level without necessarily helping them reach that level.
That distinction matters more than the hype cycle admits.
The cognitive trap: when convenience replaces effort
The most serious version of the argument is not that AI makes students lazy. That framing is too moralistic and too shallow. The deeper problem is cognitive offloading.
When students routinely outsource summarizing, structuring, brainstorming, drafting, and even initial interpretation to AI, they may reduce the very mental effort that makes learning durable. Memory, transfer, and conceptual understanding are not built only by seeing good answers. They are built by retrieval, struggle, error correction, and reconstruction.
A 2025 randomized controlled trial makes this tension hard to ignore. Undergraduate students who used ChatGPT as a study aid scored significantly lower on a surprise retention test 45 days later than students who used traditional, non-AI study methods.
Figure 2. Knowledge retention after studying with and without ChatGPT.
That does not prove that AI always harms learning. It does show something more useful: if AI use reduces cognitive effort at the wrong stage of learning, students may feel helped while actually retaining less.
That is what makes AI dangerous in education. Not because it is evil, but because it is fluent. It can make incomplete understanding look finished.
My own view from the classroom side
From my experience in Recife, in academic and teaching support settings, the appeal of AI is easy to understand. Many students are not looking for intellectual shortcuts in some grand dishonest sense. Often, they are overwhelmed, underconfident, behind schedule, or coming from fragile educational foundations. When a tool offers immediate explanation without embarrassment or delay, of course they use it.
That is why the conversation needs more seriousness and less moral panic. Students are not using AI only because they are careless. Many are using it because the educational environment is already failing to give them enough feedback, enough time, enough support, or enough room to ask basic questions without fear.
But that does not erase the risk. In fact, it sharpens it. The more fragile the educational foundation, the more dangerous it becomes to confuse assisted performance with real learning.
The institutional mismatch is getting worse
The adoption curve is moving faster than institutional adaptation. That gap is now obvious.
A 2025 student survey found that AI use had surged in higher education. Meanwhile, a 2026 AAC&U and Elon University faculty survey found overwhelming concern that generative AI will increase student overreliance, weaken critical thinking, shorten attention spans, and increase cheating.
Figure 3. Faculty concerns about generative AI in higher education.
This gap matters because education systems often respond too slowly and too vaguely. UNESCO has been warning for some time that AI in education requires a human-centered approach, with attention to ethics, privacy, bias, age-appropriateness, and pedagogical design. More recently, UNESCO also stressed that AI is reshaping education unevenly, with access, language, infrastructure, and institutional preparedness distributed very unequally across contexts.
So the real divide is no longer between people who use AI and people who do not. The real divide is between those who are learning how to use AI critically and those who are letting AI quietly reorganize their learning habits for them.
So what is AI, then?
It can be all three.
AI is a tutor when it scaffolds thinking, checks understanding, adapts explanations, and helps the learner do more of the cognitive work, not less.
It is a crutch when it temporarily helps someone move despite weakness, but is used so often that the weakness never gets repaired.
It becomes a cognitive trap when fluent assistance creates the illusion of competence while weakening retention, persistence, and independent reasoning.
That is the standard education should adopt. Not “Is AI good?” Not “Should students be allowed to use it?” Those questions are already too blunt for the reality in front of us.
The better question is this: does this use of AI increase the learner’s future independence, or just improve the quality of the current output?
If the answer is only output, then the tool may be helping performance while quietly undermining learning.
What responsible use actually looks like
If institutions want AI to function more like a tutor than a trap, they need to stop treating policy as the whole response. Students need models of use, not just rules.
That means designing assignments where process matters, not just product. It means asking students to explain why an answer works, not only submit the answer. It means encouraging AI for guided practice, self-quizzing, clarification, comparison of ideas, and feedback on drafts, while being much more careful about using it to replace reading, reasoning, outlining, and synthesis too early in the learning process.
It also means teaching AI literacy as part of learning literacy. Students should know how these systems fail, where they hallucinate, how they flatten complexity, and why speed can create false confidence.
The future of learning with AI will depend less on the model itself than on whether educators and students learn to protect the mental work that learning still requires.
Conclusion
AI is not going away. In many contexts, it is already the default study partner.
That makes the debate more demanding, not less. The challenge is no longer whether AI belongs in education. The challenge is whether education can integrate AI without hollowing out the cognitive effort that gives learning its value.
AI can absolutely support learning. It can widen access to explanation, reduce friction, and offer forms of tutoring that many students never had.
But if education confuses smoother performance with deeper understanding, AI will not fix the learning crisis. It will make it harder to see.
That is why the real educational task is not choosing between optimism and fear.
It is learning to tell the difference between assistance and dependency.
Sources
- HEPI; Kortext. Student Generative AI Survey 2025.
- OECD. OECD Digital Education Outlook 2026: Exploring Effective Uses of Generative AI in Education.
- Barcaui, André. ChatGPT as a cognitive crutch: Evidence from a randomized controlled trial on knowledge retention.
- AAC&U; Elon University. The AI Challenge: A faculty survey.
- UNESCO. Guidance for generative AI in education and research.
- UNESCO. AI and the future of education: disruptions, dilemmas and directions.