Experts Urge Ban on AI ‘Slop’ Flooding YouTube Kids, Backed by 200+ Groups

More than 200 child advocates and experts are pressing YouTube to stop low-quality AI videos aimed at children. The call comes in an open letter, organized by Fairplay and addressed to YouTube CEO Neal Mohan and Google CEO Sundar Pichai, that carries signatures from more than 135 organizations.

Signatories include the American Federation of Teachers and the American Counseling Association. Prominent researchers also signed, among them Jonathan Haidt. Fairplay’s Rachel Franz helped lead the campaign.

What advocates mean by AI slop

Advocates use the term “AI slop” for cheaply produced, algorithm-driven videos. These pieces often use repetitive visuals and garbled narration. Filmogaz.com documented the phenomenon in a February investigation.

Franz warned the content targets very young viewers. She said it can hijack attention and distort a child’s view of the world. She spoke to Filmogaz.com about her early childhood development concerns.

Scale and money

Fairplay reported that top channels producing AI slop earned more than $4.25 million in annual revenue. Some creators openly promote profits from "plotless, mesmerizing AI content." The group also estimates that only about 5% of videos aimed at kids under eight are high-quality.

Researchers note adults identify AI-generated material correctly only about half the time. Repeated exposure can make people think AI imagery is real. That risks shaping children’s developing ideas of reality.

Demands from advocates

  • Label all AI-generated content across the platform.
  • Ban AI-generated videos from YouTube Kids entirely.
  • Prohibit AI-made “made for kids” content on main YouTube.
  • Stop recommending AI content to users under 18.
  • Add a parental toggle to disable AI content, off by default.
  • End investment in AI-generated children’s content, including Animaj and related projects backed by Google’s AI Futures Fund.

YouTube’s response

A YouTube spokesperson, Boot Bullwinkle, said the company maintains high standards for YouTube Kids. The spokesperson said AI-generated content in the app is limited to a small set of high-quality channels. Parents can also block channels, the statement added.

Bullwinkle said YouTube labels AI content and requires creators to disclose realistic AI use. He also noted the 15 channels named in the Filmogaz.com investigation are not on YouTube Kids. The company said it removed videos that violated child safety rules.

YouTube CEO Neal Mohan flagged managing AI slop as a top priority in his annual letter. The company is developing dedicated AI labels for YouTube Kids, Bullwinkle said. No timeline was provided.

Context and precedent

Advocates cite a 2017 crisis as proof platforms can act at scale. During the Elsagate episode, YouTube removed roughly 150,000 videos and several hundred channels. Franz argued that action shows the company can enforce strict measures when it chooses.

She urged broader, structural fixes, warning that piecemeal removals leave the underlying recommendation system unchanged. The push from more than 200 advocates and experts reflects a growing coalition of researchers, teachers, and clinicians.

The debate now centers on whether platform design and financial incentives can be changed. Advocates say only structural policy shifts will curb the spread of AI slop on YouTube Kids. YouTube says it is evolving its approach as the ecosystem changes.