When Machines Do the Thinking - The Hidden Cost of AI in Education

# Dharma Today


10 April, 2025 | 1296 words


Are we preparing students not to be wiser, but simply more optimized?
Not more reflective, but more “prompt ready”?
Not more social, but increasingly isolated behind screens and “smart” interfaces?

In a course I teach at a liberal arts university, I asked students to write a reflective essay about a personal cultural experience that changed them. What I got back was unsettling: far too many pieces had the polished, impersonal sheen of AI. The sentences were smooth, the tone perfectly inoffensive, but missing the raw, uneven edges of real student writing. When I asked a few of them about their process, some admitted using AI tools like ChatGPT as a “starting point,” others as an “editor,” and a few simply shrugged: “it gave me the answer.” The underlying sentiment was clear: why struggle when you can get it done?

This isn’t just happening in my classroom.

Professors everywhere are facing a generation of students who carry instant “answers” in their pockets, bypassing the struggle that deep thinking, reflection, and real learning demand. AI isn’t just helping with assignments anymore — it’s writing discussion posts, solving problem sets, even drafting essays before class. What we’re seeing is not just a technological shift — it’s a cultural one.

But what do we make of this shift — from thinking, to outsourcing thought?

Since 2012, standardized assessments across high-income countries have revealed a troubling phenomenon: a measurable decline in reasoning and problem-solving abilities among young adults. The data is stark: 25% of adults in developed economies, and a staggering 35% in the U.S., now struggle with basic mathematical reasoning. According to Financial Times journalist John Burn-Murdoch’s piercing analysis, “Have Humans Passed Peak Brain Power?”, this decline is due neither to biology nor to environment. It is something more insidious: the way technology is reshaping our cognitive capacities.

Where once we immersed ourselves in deep reading and reflective analysis, we now live in the age of the scroll. Algorithmically curated feeds dictate our attention, fragmenting our thoughts into 280-character conclusions and ten-second clips. Fewer than half of Americans read a single book in 2022. This isn’t just a change in habit; it’s a shift in the architecture of our cognition.

We are witnessing a silent, collective decline of attention span, memory, and conceptual depth. And this crisis is now bleeding into education.

## The Gyankunj Case: Technology Without Pedagogy

My concern, however, is not limited to elite university campuses. In a study that I conducted with my professor and a colleague in Gujarat, evaluating the Gyankunj program — a flagship initiative to integrate technology into government school classrooms — we found that students exposed to smartboards and digital content actually fared worse in mathematics and writing compared to their peers in classrooms without digital tools.

The reasons were sobering. Teachers had not been adequately trained in using these technologies. Mathematics, which requires cognitive scaffolding and immediate feedback, suffered because the teacher was reduced to a passive facilitator of pre-designed content. Writing, an intensely human process involving revisions, suggestions, and encouragement, became mechanical. What we observed was not enhanced learning, but the opposite — a disconnect between medium and method.

This points to a deeper malaise: techno-optimism, the idea that technology, particularly AI, can be a panacea for all educational woes. But this optimism, bordering on faith, blinds us to the relational, dialogic, and hands-on aspects of learning.

## AI in the Classroom: Aid or Anesthetic?

At first glance, the rise of AI in education seems promising:

  • Personalized content delivery.
  • Adaptive testing.
  • Automated grading.

But step closer, and the cracks appear. There’s a growing belief, often fuelled by venture capital and consultancy jargon, that algorithms can fix education. That AI tutors, avatars, and dashboards can replace the “inefficiencies” of human teaching. That every child’s mind can be optimized, like a logistics chain.

This is the creeping logic of techno-solutionism: where students become data points, teachers become facilitators of algorithms, and learning is reduced to behavioral engineering.

## Learning Is Human

Let us not forget: pedagogy is not content delivery. It is a relational, embodied, and context-rich process. It depends on trust, dialogue, spontaneity, eye contact, missteps, and encouragement. No AI system, no matter how sophisticated, can replicate the chemistry of a teacher who senses a student’s confusion and adapts, not by code but by care.

AI is now entering primary education spaces as well. I have seen prototypes where storybooks are narrated by AI voices, children’s drawings are corrected by algorithms, and writing prompts are generated automatically. But what happens to play-based learning? To dirtying one’s hands with clay, engaging with textures, shapes, and emotions? Indian educators like Gandhi, Tagore, and Gijubhai Badheka emphasized the necessity of experiential, tactile learning in early years.

For them, education was not information-delivery but saṃskāra — a holistic shaping of character, curiosity, and community.

Similarly, Śrī Aurobindo emphasized that education must arise from the svabhāva of the child, guided by the inner being rather than by imposed templates. Can an algorithm, however sophisticated, grasp this uniqueness?

J. Krishnamurti, in his talks on education, famously questioned whether any system, however well-designed, could ever nurture freedom. For him, true learning happened in freedom from fear, not in efficient content delivery. If AI’s omnipresence in classrooms creates an atmosphere where mistakes are quickly corrected, paths auto-completed, and creativity constrained by what’s already been done, are we not curbing the learner’s inward growth? In reducing learning to clicks, nudges, and “correct answers,” are we not slowly extinguishing the inner flame?

## Walking the Tightrope

And yet, let me be clear: I am neither a techno-skeptic nor a techno-romantic. Used thoughtfully, AI has made certain forms of learning more accessible and more visual. Diagrams, simulations, and language support systems have helped many students grasp complex ideas. AI can assist teachers in planning, and it can support students with special needs.

But it should remain a tool, never the foundation. A servant of learning, not its substitute.

When we let AI take over too much, too early, we risk producing what Byung-Chul Han (2015) called a “burnout society”: hyper-optimized, but hollow. A society built on the neoliberal promise of optimization, self-tracking, and constant performance, leading to its eventual collapse. When we raise children in screen-first environments, we risk creating what Jonathan Haidt (2024) now identifies as an anxious generation: digitally fluent but emotionally fragmented, constantly grappling with the overexposure to screens, metrics, and digital surveillance.

So we have to ask: Are we preparing students not to be wiser, but simply more optimized? Not more reflective, but more “prompt ready”? Not more social, but increasingly isolated behind screens and “smart” interfaces?

The challenge ahead is not technological. It is existential.

Will we nurture depth, or distraction?

Freedom, or feedback loops?

A sense of self, or a sense of being constantly scored?

