
Los Angeles parents and school officials are on high alert as researchers report a spike in teenagers using easy-to-access AI tools to whip up sexualized images of classmates. What some students may brush off as a prank can quickly cross the line into bullying or sextortion, and it can create criminal exposure for the teens who create or share the images.
Chad Steel, an adjunct professor and digital-forensics researcher at George Mason University, recently joined CBS Los Angeles to walk through his latest study on how young people are using generative AI to create explicit images of peers. Steel warned that these tools are now so widespread and simple to use that they are reshaping how teenagers think about consent, privacy, and what is acceptable to share online.
George Mason's Digital Forensics program lists Steel's ongoing IRB project, "Generative Artificial Intelligence Application Usage and Self-Generated Sexualized Image Sharing by Teens (STUDY00000257)," which tracks which apps teens are using and the social dynamics behind that behavior. The study is part of a broader body of work by the university's digital-forensics researchers on technology-facilitated harms affecting young people.
How teens are using 'nudify' tools and image generators
Researchers and reporters say a growing category of so-called "nudify" apps and image generators has made it almost effortless to take a clothed photo and turn it into a photorealistic explicit edit, or to generate entirely new sexualized images that closely resemble real people. Technology outlets, including Wired, have chronicled how these tools have become more powerful and easier to use, lowering the barrier for abuse. The result is a new pathway for harassment, where images can be created as a joke, to humiliate someone, or to pressure them for more photos or money.
Child-safety organizations and investigators say the scope of the problem has surged over the past year, with thousands of cases flowing into reporting hotlines and sparking new policy efforts. Advocacy group Thorn and updates from the National Center for Missing & Exploited Children (NCMEC) describe sharp increases in CyberTipline reports that involve generative AI, and Thorn notes that many teens know someone who has been targeted. Those trends are putting heavy strain on schools, hotlines, and law-enforcement units that handle exploitation and sextortion cases.
What parents and schools can do
Experts urge families to keep calm, preserve evidence, and resist the urge to share or repost the material itself. Those simple steps can make a big difference for both victims and investigators. Federal guidance points parents toward established resources and reporting systems, including NCMEC's CyberTipline and local law enforcement, while the Department of Homeland Security's Know2Protect portal brings together practical advice and reporting options in one place. School leaders are encouraged to treat incidents involving AI-generated sexual imagery as serious safety and Title IX matters, coordinating with counselors, families, and police when needed.
Legal and enforcement response
Lawmakers and prosecutors are scrambling to keep up with the technology. Congressional records show that the ENFORCE Act (S.3021) advanced in the Senate late last year with provisions intended to bolster enforcement against material that depicts child sexual abuse, whether created with AI or through other means. For now, victims and parents must navigate a patchwork of state laws and evolving federal guidance while platforms, advocacy groups, and legislators work on better systems for removal, reporting, and accountability.
For Los Angeles families, the immediate playbook is straightforward: if a child is involved, save the original messages and screenshots, contact the school and local law enforcement, and report suspected exploitation to the CyberTipline. Community pressure on schools and platforms, together with emerging research from Steel and other experts, is helping drive better prevention strategies and faster support for the teens caught in the crosshairs.