Radnor parents are demanding stronger safeguards and faster communication from school leaders after an AI‑generated video allegedly depicting several high school students in sexualized imagery led to criminal charges and a wave of fear among families.
What Happened at Radnor High?
In early December 2025, Radnor High School and Radnor Township Police began investigating an AI‑generated video that reportedly showed a small group of female students in non‑consensual sexualized imagery.
Principal Joseph MacNamara notified families that the district was treating the situation "with the highest level of urgency and care," and said all families of potentially affected students had been contacted and offered support services.
Investigators later confirmed the video involved "non‑consensual sexualized imagery of multiple juveniles," and by late January 2026, Radnor police announced that a juvenile had been charged with harassment in connection with the video, according to 6ABC.
Authorities declined to release the juvenile's identity but emphasized that the criminal use of AI to harm students would be taken seriously.
How Parents Are Reacting
At a recent school board meeting, several parents spoke publicly for the first time since the charges were announced, describing the emotional toll on their children and criticizing how the district handled the incident.
Luciana Librandi, a Radnor parent, said the impact on teenagers' mental health could be long‑lasting and called for clearer, more timely communication in future cases.
Other parents faulted the district for what they described as slow initial action and a lack of transparency about how the video was shared and how far it had spread. Some families said students felt unsafe at school and worried that posted photos could be misused again, even outside school hours, Yahoo News reported.
Policy Changes and AI Education
In response to the backlash, the Radnor Township School District's Policy Committee has scheduled discussions on AI, bullying, and harassment, with a focus on updating acceptable‑use and technology policies.
Parents are urging the district to develop clearer rules about how students and staff should respond to AI‑generated content, including faster reporting paths and stronger support for victims.
Many parents also want the district to provide age‑appropriate lessons on AI misuse, consent, and digital safety beginning in middle school. They argue that schools must prepare students for a world where AI tools can be used to exploit images, rather than waiting for the next crisis to force policy changes.
Broader Concerns About AI and Schools
The Radnor case has become part of a larger national conversation about how schools should respond to AI‑driven harassment and non‑consensual imagery.
Pennsylvania Governor Josh Shapiro recently signed a law that bans the creation and distribution of deepfake sexual content involving minors or non‑consenting adults, and federal "Take It Down" rules now treat the publication of some AI‑generated explicit images as criminal offenses.
Parents in Radnor say they are not alone in feeling unprepared, but they stress that their district must act quickly to restore trust. They hope the deepfake scandal will push both local and state officials to strengthen AI protections, improve communication, and make student safety the top priority in the era of generative artificial intelligence, according to Patch.
