Parents Want More Guardrails on How Schools Use AI With Kids' Data and Classwork, Survey Shows


A national survey of more than 1,500 parents found that 8 in 10 want stronger guardrails on how artificial intelligence is used with their children, reflecting bipartisan concern over student data privacy and the lack of clear school policies on AI in the classroom.

The survey, conducted by Echelon Insights on behalf of the National Parents Union, gathered responses from 1,511 parents of K-12 public school students between Feb. 12 and Feb. 18.

The results show that while 56% of parents believe their children are already using generative AI chatbots such as ChatGPT, Gemini, or Character.ai, they want firmer restrictions around that use. Among parents of high schoolers, the figure rises to 68%, according to EdWeek.

The demand for guardrails cuts across political lines. On every proposed protection tested, support was at least 79%, with nearly identical numbers among Republican-leaning and Democratic-leaning respondents.

Eighty-six percent said AI chatbots should display pop-up warnings before showing minors content related to violence, self-harm, or abuse. Another 85% said chatbots should alert a parent if their child discusses harmful or illegal behavior. And 79% said minors should need parental permission before using an AI chatbot at all.

A significant gap exists between what parents want and what schools are doing. According to the survey, 47% of parents said their child's school has not shared any information about its AI policy, and 57% said they had never been asked for input on how AI is used in their child's school.

Data privacy is a top worry. Most parents said more needs to be done to protect student privacy, inform guardians about what data AI and ed-tech tools collect, and explain how companies use it.

A separate survey by Count on Mothers, which polled 2,290 U.S. mothers, found 39% either did not know their children's data was being collected or did not understand how data collection worked. Only 20% said they understood the privacy risks and knew how to protect their child's data.

The findings come as Congress moves on children's online safety legislation. On March 5, the House Energy and Commerce Committee advanced H.R. 7757, the Kids Internet and Digital Safety (KIDS) Act, in a 28-24 party-line vote, the committee reported.

The bill includes provisions for AI chatbot safety and restrictions on profiling minors. It also incorporates the SAFEBOTs Act, which requires chatbot providers to disclose to minors when they are talking to AI rather than a real person.

The National Parents Union has criticized the KIDS Act, arguing it weakens stronger state-level protections. "This bill does not protect our kids. It protects the companies that are hurting them," said Keri Rodrigues, president of the National Parents Union. "It guts the state laws that are actually working."

