The debate over a moratorium on AI in schools is intensifying as experts warn of risks to student learning and mental health, and of widening inequality, especially for Latino students.
A coalition led by Fairplay has called for a five-year pause on the use of generative AI in schools. It is one of the strongest public challenges to AI in education so far—and it is likely to shift the conversation quickly.
The real question is not just whether the coalition is right, but whether schools are moving faster than they truly understand.
An article by The AI School Librarian on Substack, which analyzes the impact of AI in education, summarizes the issue:
What’s Happening
“The proposal is direct. A broad coalition of organizations and experts is calling for a complete pause on student-facing generative AI in PreK–12 schools. Their concerns are wide-ranging and center on how these tools may be affecting students in ways that are not yet fully understood.
“They raise questions about cognitive development, including whether reliance on AI tools may weaken critical thinking and problem-solving. They point to concerns about social and emotional development, arguing that learning is grounded in human interaction. They also highlight risks related to student mental health, academic integrity, and data privacy.
“At the center of their argument is a point educators should not ignore: there is still limited long-term evidence that generative AI improves student learning outcomes.
“Much of the current momentum is built on what these tools might do. That is very different from what we know they actually do.”
For this reason, doctors and education experts studying AI’s impact on young people are calling for a five-year moratorium in schools.
Researchers, clinicians, and child development specialists have examined how generative AI affects developing brains. Their conclusion: it should not be introduced into classrooms without far stronger evidence—and action should be taken now.
“We just don’t want to waste another 10 years in which our kids’ education is undermined,” said Leonie Haimson, co-chair of the Parent Coalition for Student Privacy. “It took more than 10 years to ban cell phones from schools. We can’t afford that again.”
Boston-based child advocacy nonprofit Fairplay is leading a coalition of more than 250 experts and organizations calling for a five-year moratorium on all student-facing generative AI products in PreK–12 schools across the U.S. and Canada. The group—composed of mental health experts, parents, educators, and child protection organizations—warns that any product failing safety testing during that period should be permanently banned.
“It’s an unproven, untested product, and we’re giving it to children in the name of improving education, equity, or cognition—none of which have been proven,” said Josh Cherkin of Fairplay. “If a children’s hospital told parents, ‘We have a new drug with potential—just trust us,’ people would be horrified. We have strict vetting processes in many industries, yet we are allowing generative AI companies access to our most vulnerable population.”
The experts’ core finding is that AI doesn’t just distract children—it may actively interfere with critical stages of development. The human brain is not fully formed until the mid-twenties, and the prefrontal cortex—responsible for planning, reasoning, emotional regulation, and critical thinking—is among the last regions to mature.
Latino Students and AI
The use of AI in education poses significant challenges for Latino students, largely because new technologies amplify existing systemic inequalities. These challenges include limited access to infrastructure, as well as cultural and linguistic biases embedded in AI tools.
Many schools serving Latino communities—both in Latin America and the U.S.—lack reliable internet, updated hardware, and technological resources, limiting equitable access to AI.
Underrepresentation of Latinos in the tech sector contributes to biased systems that may not reflect their cultural or educational needs.
AI tools also struggle with Spanish-language content. Misinformation in Spanish is flagged less frequently than in English, increasing students’ exposure to unreliable information.
Additionally, English learner (EL) students are more likely than their peers to be falsely accused of using AI in their assignments.
Wait, Then Implement
According to the Harvard Educational Review, the challenge is not only technological but educational. It reflects a gap in who has the skills to critique algorithmic outputs, refine prompts, and use AI as a tool for creation rather than passive consumption.
If AI is integrated into well-resourced schools while others lack access, learning gaps could widen dramatically, leading to long-term economic and intellectual disparities.
A joint MIT–Harvard study found that students who rely on AI can accumulate “cognitive debt,” weakening independent thinking over time. Similarly, OECD research shows that students who use tools like ChatGPT for studying often perform worse on tests than peers without access—even when the AI is designed not to provide direct answers.
Mental health concerns are also mounting. Companies like Google and Character.AI face lawsuits alleging their chatbots contributed to user suicides and encouraged harmful behavior. The American Psychological Association has issued a health advisory on AI and adolescent well-being.
The report notes a stark contrast: teachers, therapists, and counselors must meet licensing and ethical standards to work with children, while generative AI tools face no equivalent requirements—despite evidence of ethical violations in mental health contexts.
Finally, under-resourced schools may be more likely to rely on AI as a substitute for human teachers, while better-funded schools continue to prioritize human instruction. Because AI systems are trained on historically biased data, the report warns they are more likely to reinforce educational inequalities than reduce them.
A February 2026 Pew Research Center survey found that 60% of teenagers say students at their schools use chatbots to cheat “very often” or “somewhat often.”