Once just a buzzword, AI has become an everyday learning tool for students. Homework assistance, language help, and lesson summaries have pushed young learners worldwide to rely heavily on these AI-powered tools. While this is extremely convenient, it also carries risks.

The BrightCHAMPS ‘StudentsSpeakAI’ global survey, which polled 1,425 students across 29 countries, reveals a worrying trend: while 58% of students use AI for their studies, nearly 29% never cross-check AI-generated answers, and 23% cannot distinguish between real and AI-generated content.

This raises important questions about media literacy, critical thinking, and the role of educators in preparing students for a future where distinguishing fabrication from fact could become an essential skill.

The Double-Edged Sword

The survey results indicate both opportunity and risk. While 59% of students see learning AI as vital for future readiness, many lack the skills to critically assess AI outputs. About 20% admitted to believing false AI-generated information, only discovering later that it was incorrect.

For Sweena Mangal, senior AI educator at BrightCHAMPS, this isn’t just about technology; it’s about the fundamental ability to think. “A child’s ability to think critically can be sharpened and honed perfectly well as long as we teach them how the tech behind AI works,” she explained. “When they understand the ‘how’ behind the ‘what’, they will understand and appreciate the ‘why’.”

AI, she added, makes it easier for students to trust information because of how convincingly it mimics human language. That makes fact-checking not a nice-to-have, but a necessity.

The Teachers’ Role

If students are to thrive in this new environment, teachers need to rethink their approach to education. For Mangal, the answer lies in Socratic dialogue.

“If a teacher can instil in a child the ability to look at a topic from multiple vantage points and arrive at their position on the subject after engaging with information that challenges, even contradicts their way of thinking… they’ve done their job,” she said. 

“Because when a child gets into the habit of looking at something from different angles, they also develop the habit of seeking more information, of being prepared, and, most importantly, not getting so attached to one way of thinking that they are unable to change when needed.”

Recognising Biases

One of the gravest risks with AI is hidden bias. Algorithms are trained on datasets that often reflect historical and cultural inequalities. Mangal emphasised that bias detection and awareness should be woven into everyday teaching, rather than being treated as an “extra burden.”

“With so much internet access and platform algorithms optimising for stickiness of content over variety of viewpoints or the veracity of truth, it’s far too easy for children to grow up with a unidimensional understanding of the world,” she argued. “If we, as educators and parents, don’t sensitise them to the fact that historically marginalised communities and regions of the world continue to be drastically under-represented on the internet… Who will?”

The survey also found that 12% of students now use AI as their primary mode of online search. While this highlights AI’s growing centrality in learning, it also reveals a creeping dependency that could erode independent thought.

Mangal warned of the consequences: “Over-relying on any tech or one medium impacts an individual’s ability to engage with the topic or question at hand in totality. We all know adults who think of themselves as experts on a subject after reading two paragraphs on Wikipedia. Are they really any different from students who might be over-relying on AI answers?”

Unless addressed, this reliance risks producing a generation less inclined to research deeply, analyse critically, or innovate meaningfully.

Should AI Literacy Be a Core Subject?

Mangal agrees that AI should be part of school curricula, but stresses that teaching AI literacy must go beyond technical usage.

“It needs to be a combination of learning how the technology works + the flaws/WIP nature of the technology + the ethics of it all,” she said. “If an AI curriculum is not getting updated regularly, given how rapidly the tech is developing, it can’t be a valuable one, in my opinion.”

For her, meaningful AI literacy is not just about coding or algorithms, but about instilling an understanding of ethics, bias, and the global impact of technology.

The Role of EdTech Players

While the BrightCHAMPS survey sheds light on student behaviour, companies like ViewSonic are working to provide tools that could support teachers in addressing these challenges in classrooms.

Muneer Ahmad, vice president, AV business at ViewSonic India, highlighted their efforts: “We believe students shouldn’t just passively consume AI-generated content, they should learn to question it, compare it, and think critically about it. To enable this, we’ve designed our solution for Indian educators with ViewLessons AI Studio and myViewBoard 3.0, offering tools that foster deeper engagement and meaningful learning experiences.”

These platforms, equipped with curriculum-aligned lessons, interactive annotations, and multilingual support, are designed to help teachers show students how to validate and challenge AI-generated content in real time.

ViewSonic also recognises that educators are under constant pressure. “We understand how challenging it can be for teachers to keep up with rapidly evolving technology. That’s why teacher training and ongoing support are at the heart of our education strategy. We provide both online and offline training programs that make AI integration more approachable and practical for classrooms,” Ahmad said.

Along with initial training, the company provides ongoing technical and pedagogical support to help educators guide students in questioning and applying AI responsibly, he added. 

Ethical Guardrails and Classroom Safety

Technology companies also face the responsibility of ensuring that students are safe as they learn with AI. Ahmad stressed that child safety and data privacy are top priorities for ViewSonic. Beyond that, he explained, “we focus on designing resources with age-appropriate, bias-free content so that learning outcomes are fair and inclusive for all students.”

Importantly, interactive classroom solutions are also making it easier to teach abstract concepts such as deepfakes and misinformation. Ahmad pointed out: “When teachers use digital whiteboards to place an authentic piece of information alongside a manipulated version, students can see the differences for themselves in real time… This approach makes the risks of misinformation relatable without being intimidating.”

What’s at Stake?

The survey also makes one thing clear: today’s children are tomorrow’s AI natives. Their ability, or inability, to critically engage with AI will shape not only their careers but the health of entire societies.

If students grow up unable to distinguish between truth and fabrication, the consequences extend beyond the classroom to democracy, civic trust, and even national security. As Mangal put it, “If the goal is to help children question the world around them, we need to start with equipping them with information that empowers them to question us.”

Educators, policymakers, and edtech companies alike must treat AI literacy not as a niche subject but as a core competency for the future of humanity.