Racy AI ‘Pippi’ Cover Shocks LA Classroom, Parents Demand Crackdown On School Tech

Published on February 27, 2026
Source: Coolcaesar, CC BY-SA 4.0, via Wikimedia Commons

A fourth-grade reading project in Los Angeles took a hard left turn when a classroom used Adobe Express for Education to generate a book cover for Pippi Longstocking and got adult, sexualized imagery instead of a playful kids’ illustration. The incident has parents, teachers, and district officials suddenly asking whether image-generating AI can be used safely in elementary schools without much tighter rules.

Parents say a student typed in a prompt describing “long stockings a red headed girl with braids sticking straight out.” Instead of a mischievous child hero, the AI tool returned images of women in lingerie and bikinis. Other parents say they were able to recreate similar results at home on school-issued Chromebooks. Within days, the parent group Schools Beyond Screens was urging the Los Angeles school board to pull the software, and district officials said they were working with the vendor to address the problem, according to CalMatters.

State Guidance And The Law

At the state level, the California Department of Education has already tried to get ahead of scenarios like this. After convening an Artificial Intelligence Working Group of teachers, administrators, and experts, the department released updated guidance titled “Guidance for the Safe and Effective Use of Artificial Intelligence in California Public Schools.”

The document urges districts to favor closed systems for student data, require human review of AI outputs that students will see, and build developmentally appropriate AI literacy into everyday instruction, according to the California Department of Education.

Experts Warn Of Bigger Harms

National researchers say the Pippi Longstocking fiasco is a small window into a much bigger issue. A Brookings Institution report released in January concluded that, at this stage, the risks of generative AI in schools often outweigh the benefits and can undercut students’ foundational learning and social development.

The report recommends redesigning assignments so they are less vulnerable to “AI shortcuts,” preparing teachers to work with and around the tools, and favoring products that “teach, not tell,” according to the Brookings Institution.

Parents Push For Clearer Rules

For many parents and advocates, the Los Angeles incident is proof that voluntary guidance and glossy marketing are not enough to keep kids safe.

“These tech companies are making things marketed to kids that are not fully tested,” parent Jody Hughes said. The state’s AI working group is expected to roll out specific policy recommendations by July, according to CalMatters.

What Districts Can Do Now

While the working group finishes its job, districts are not stuck waiting. Current California Department of Education guidance lays out several immediate moves: require a human to review AI-generated content before it is shown to students, block open-image generators on school-managed devices, and use vendor-vetting checklists that weigh privacy and content-safety protections.

The guidance is nonbinding, so local education agencies still carry the legal and practical responsibility for protecting students, according to the California Department of Education.

What Comes Next

Lawmakers have circled dates on the calendar. Senate Bill 1288 requires the AI working group to develop guidance and deliver a model local policy by July 1, 2026, followed by a final report to the Legislature. That schedule ramps up pressure on both vendors and districts to show how they will prevent harmful outputs and safeguard student data, according to bill text on the California Legislative Information website for SB 1288.

For now, the Pippi Longstocking assignment is a cautionary tale of how fast a classroom AI experiment can go sideways. Parents and educators say they are watching closely for clear opt-out policies, age gates on image generators, and enough staff training so that shiny new tools do not replace common-sense adult oversight in a fourth-grade classroom.