Highlights
- Indian universities are adopting structured GenAI policies to promote responsible LLM use instead of banning AI tools.
- LLMs offer major benefits like personalized tutoring, content generation, and reducing teacher workload—especially in large classrooms.
- Challenges such as hallucinations, digital divide, bias, and weak infrastructure show the need for strong ethics, training, and regulation.
Generative AI is here, and it is no longer just a concern for tech firms; it is already being rolled out in Indian classrooms. Large Language Models (LLMs) such as ChatGPT and Gemini are being embraced as tools for tutoring, assignment help, content generation, and more. The question is: will these tools support education in India, or will they create new risks that schools are not ready to take on?

This article looks at how LLMs are currently being used in Indian education, what the evidence tells us, and whether, on balance, they are friends or foes in the classroom.
Shiv Nadar University offers a case in point. In 2025, the institution rolled out a “GenAI Assessment Scale” that sets out how students may use AI in their coursework, with levels ranging from prohibited use to “responsible autonomy”, where students may use AI freely as long as they disclose their prompts and check the output for accuracy.
The framework represents a move beyond fear toward a more structured way of supporting skills development. At Ashoka University, faculty members are building AI literacy directly into courses. The philosophy department offers a course on the ethics of artificial intelligence, and many assignments have been redesigned so that students must demonstrate their thinking rather than simply submitting AI-generated answers.
There is, however, a considerable problem: infrastructure. Connectivity remains a significant barrier to scaling AI in education. In many schools, particularly in rural settings, high-speed Internet access is unreliable or absent altogether. A recent thematic analysis of volunteer teachers working with K-12 students in rural Rajasthan and Delhi highlighted exactly this issue.
The Upside: What LLMs Can Offer to Indian Classrooms
One-on-One Tutoring
LLMs can act as on-demand tutors, shaping their explanations around a student’s question. They can clarify concepts, generate examples, and support revision, all of which helps when class sizes are far larger than intended.
Idea Development & Writing Support
Students can engage with AI to organize essays, design projects, or generate starting ideas, which is particularly useful when learners get stuck at the very beginning of an assignment.
Content Summarization
LLMs can summarize long readings such as chapters, research articles, or lecture notes, helping students synthesize information quickly, whether as preparation before class or as review afterwards.

Ease for Teachers
Teachers can use LLMs to draft lesson plans, assessment items, and other pre-planned teaching material. This can lighten routine workloads and free teachers to focus on pedagogical and experiential activities where they add more value.
Ethical Awareness
As AI tools are embedded into the curriculum, institutions have an opportunity to teach students not just how to use AI, but also when and why to critically examine its outputs.
The Dangers & Challenges
But these benefits do not come without warnings.
Hallucinations & Misinformation
LLMs can produce information that seems plausible but is in fact incorrect or misleading, which poses serious risks in an educational context. As one expert notes, LLMs can “hallucinate”, for example by inventing references that do not exist, so fact-checking is essential.
Over-Reliance
Students may lean on an LLM too heavily to produce ideas or content for assignments, missing chances to learn. Critical thinking and deep learning risk being traded away for convenience.
Digital Disparity
Not every institution has the technology to engage in LLM-based learning. In rural India, connectivity, digital literacy, and teacher development are structural barriers that impede usage.
Bias & Representation
LLMs are trained on datasets that might reflect sociocultural or language-based biases. Used without reflection or intention, LLMs could reinforce stereotypes or privilege a skewed perspective.
Erosion of the Teacher Role
There are concerns that AI may diminish teachers’ roles. Many teachers, however, feel LLMs should support classroom engagement and capacity rather than replace them; time freed from routine tasks, for example, could go toward mentorship, discussion, and human warmth.

Regulatory & Ethical Frameworks
Policies remain poorly defined. Oversight and structure are needed around student data privacy, acceptable uses of student data, transparency, and accountability.
Ground-Level Realities: Voices from Indian Classrooms
The study of rural K-12 volunteer teachers reveals a nuanced picture: there is real potential, but caution is warranted.
Teacher Workload: Many teachers felt LLMs could reduce repetitive administrative burdens (for example, generating study materials) but insisted that serious, thorough training was essential for meaningful use of the technology.
Skepticism: Concerns ranged from over-reliance to misuse for cheating; without structures for mediation, some teachers worried that students would simply use AI to cheat.
Infrastructure: Despite enthusiasm, many educators cited unreliable connectivity, variable power supply, and a lack of devices as impediments to adoption.
Localization: Teachers emphasized the need for culturally relevant AI models; LLMs trained mostly on English data may fail to meet regional language, cultural, and pedagogical needs. At the same time, broader teacher sentiment across India remains optimistic about AI’s future in education.
One report indicates that over seventy percent of teachers believe generative AI will play a role in education going forward. That optimism, however, is coupled with calls for governance and clear guidelines.
Moving Forward: Friend, But Not Without Guardrails
Balanced Policies: Education institutions must develop clear AI policies suited to their particular context. Shiv Nadar University’s “GenAI Assessment Scale”, for example, both sets boundaries and promotes responsible use.
AI Literacy & Ethics: Teaching students how to prompt an LLM is not enough; they also need frameworks for evaluating and interrogating AI outputs. This can involve ethics courses, critical-thinking assignments, and AI-specific literacy resources.
Infrastructure: To avoid deepening inequality, governments, schools, and edtech partners must ensure that connectivity, devices, and training reach underserved areas.
Teacher Training: Professional development should empower teachers to use LLMs effectively, not only to reduce workload but to enrich learning and teaching.
Regulatory Frameworks: Regulations should address student data protection, model transparency, and acceptable-use policies. A collaborative process involving educators, policymakers, tech firms, and civil society will be key.
Localised Models: Supporting the development of AI models trained on Indian languages, culturally relevant materials, and local contexts can help make LLMs more equitable and relevant.

Conclusion
LLMs in the Indian classroom are not a simple matter of “friend or foe”; they are a nuanced, developing force. Used well, they have a great deal to offer: personalization, efficiency, and new ways of learning. However, the challenges of bias, infrastructure, ethical use, and over-reliance are very real. If education stakeholders in India treat AI as a partner rather than a shortcut, and build systems that strengthen both students’ and teachers’ knowledge, LLMs can be a “friend”. That, however, requires responsibility: thoughtful policies, ethical design, and a commitment to equity. At the end of the day, AI should enhance human capacity, not replace it.