There’s a new way to get rich online, at least according to a spate of YouTube tutorials touting the money it’s possible to make by using artificial intelligence (AI) to generate kids’ videos. Searching YouTube for how to create children’s content or channels currently surfaces tutorials offering roadmaps for producing simple animations in a few hours. They recommend using tools like ChatGPT, the speech synthesis services ElevenLabs and Murf AI, and Adobe Express’ generative AI features to automate scripting, audio, and video production.
“IT’S NOT DIFFICULT,” declares the thumbnail of one of the first results, while the title of another promises that it’s possible to produce a video with an original children’s song “in less than 20 minutes!” The virality-fueled riches supposedly on offer are impressive: “$1.2 million with children’s videos generated by artificial intelligence?” suggests one title, while another proclaims “$50,000 a MONTH!”
The dangerous allure of AI-generated videos
YouTube is the dominant force in young children’s entertainment, so if AI-generated children’s videos achieve even a fraction of the success the tutorials suggest, millions of children will watch them. Last year, the BBC investigated the rise of “bad science” content aimed at older children on YouTube and identified more than 50 channels that used AI to promote pseudoscience and conspiracy theories, often racking up millions of views. But animated videos aimed at younger children have hardly been studied in depth.
WIRED located several accounts that appeared to offer AI-generated children’s content, primarily by searching for relatively new channels that published a high volume of videos. Deepfake detection startup Reality Defender analyzed samples from several of these channels and found evidence that generative AI was part of the production process. “Some of the videos we analyzed have scripts or voices that were probably generated [by AI], or a combination of both, showing that generative text-to-speech is increasingly common in YouTube videos, even, apparently, those for children,” says Ben Colman, CEO of Reality Defender.
A channel called “Yes! Neo” has over 970,000 subscribers, and its videos regularly exceed one million views. Since its launch in November 2023, it has released new material every few days, with titles like “Ouch! Baby Got a Boo Boo” and “Poo Poo Song.” (Poop is an enduring fascination of children on YouTube and music streaming services.) Reality Defender analyzed the transcribed script of a sample video, “Caring for Injured Baby Dino,” and found a 98 percent probability that it had been generated by an AI.
The “Super Crazy Kids” channel, produced by a company in Hyderabad, India, also appears to incorporate AI tools in the production of its most recent animated videos. It has more than 11 million subscribers. Reality Defender examined a sample of the material and found “fragments of synthetic voice.” The title of one video is a jumble of search keywords: “Piggy Finger Family Song Nursery Rhymes for Babies Colored Cars for Children 45 Minute Collection Video.” The channel promotes itself as educational and often labels its content as resources for learning colors, shapes, and numbers.
Few limits on YouTube for AI
Yes! Neo, Super Crazy Kids, and other similar channels share a common trait: they feature 3D animation in a style similar to that of “Cocomelon,” the most popular children’s channel on YouTube in the United States. Dana Steiner, a spokesperson for Cocomelon’s parent company, Moonbug, says that none of its shows currently use AI, “but our talented creative team is always exploring new tools and technologies.”
This familiar aesthetic means that a busy parent glancing at a screen may mistake the AI material for a program they’ve previously vetted. The videos these channels broadcast tend to be of poor quality in much the same way that a lot of today’s human-made children’s entertainment is: frenetic, flashy, and unoriginal.
YouTube is introducing new policies for AI-generated content, although the company does not intend to significantly restrict it. “YouTube will soon add content labels and disclosure requirements for creators who upload videos that involve realistic synthetic or altered material, including material directed at children and families,” says YouTube spokesperson Elena Hernandez.
When WIRED asked whether YouTube will proactively track AI-generated content and label it as such, Hernandez said that more details will come later, but that the company plans to rely primarily on voluntary disclosure. “Our primary focus will be to require creators themselves to disclose when they have created altered or synthetic content that is realistic.” The company says it uses a combination of automated filters, human review, and user feedback to determine what content is acceptable on YouTube Kids, the platform’s most restricted service.
Some fear that YouTube and parents around the world are not adequately prepared for the next wave of AI-generated children’s content. Neuroscientist Erik Hoel recently looked at some of the tutorials on creating children’s content with AI, as well as some videos he suspected were made with the technology. Hoel was so disturbed by what he saw that he lashed out at the concept on his Substack, mentioning Super Crazy Kids by name. “Across the country, young children are sitting in front of iPads, subjected to a synthetic flow, deprived of human contact even in the content they consume.”
A battle to attract algorithms with AI
Some of the more obscure channels with AI videos are already venturing into strange territory. The channel “Brain Nursery Egg TV,” for example, gives disturbing content names like “Cars for Kids.” The plotless video is an amalgam of visual effects like floating eyeballs and melting blocks of color. The soundtrack includes children clapping, a robotic voice counting, laughing babies, and various synthetic voices intoning the word “YouTube” at seemingly random intervals. “This has AI-generated voices throughout and is either driven by a script created by that technology, or it is one of the greatest and most underrated works of surrealist video art in recent times,” says Reality Defender’s Colman. Either way, this type of content hasn’t gotten much attention yet: some of the channel’s videos have only a handful of views. Brain Nursery Egg TV does not provide an email address or any other way to contact those responsible for the channel.
An AI-generated livestream imitation of SpongeBob SquarePants called “AISponge” discloses that it is an art project made with the technology. Although it is inspired by a children’s show, it solicits storylines from its audience, which tends to offer distinctly adult themes. One episode analyzed by WIRED revolved around labor unrest at the Krusty Krab fast-food restaurant; several characters were outraged by the low wages Mr. Krabs pays. In another, SpongeBob carefully instructs his friend Patrick, a starfish, on how to shave his testicles: “Make a downward motion.”
Some mainstream children’s shows on YouTube have openly embraced AI. Kartoon Studios, formerly Genius Brands International, promotes its use of artificial intelligence in the children’s shows Kidaverse Fun Facts and Warren Buffett’s Secret Millionaires Club, both available on the platform.
Outside of YouTube, other prominent children’s content creators are also experimenting with the technology. PBS Kids, a standard-bearer for high-quality children’s entertainment, is exploring the use of AI to create interactive digital episodes of shows like Lyla in the Loop and Elinor Wonders Why. But the project will not use generative AI to make the material itself. “The AI is used to interpret user responses and help guide the characters’ preset reactions in the episodes,” says Lubna Abdullah, PBS Kids’ director of communications.
Artificial intelligence tools, when used thoughtfully and deliberately, can be a great help. Abdullah cites research indicating that AI-driven interactive programs are powerful learning tools. David Bickham, research director of the Digital Wellness Lab at Boston Children’s Hospital, says the kind of app PBS Kids is developing is “really promising.” But he believes the widespread rush to get rich quick off AI children’s content opens the floodgates to a new wave of garbage. “When something is created entirely to attract attention, don’t expect it to have any beneficial educational or positive impact.”
Bad children’s programming existed on YouTube before the rise of AI, and on television before that. The main threat posed by generative AI tools may simply be that they make it possible to churn out poor-quality children’s programming at an accelerated pace, much as the technology has done for web content at large.
“The formula for creating the best material is a painstaking process,” Bickham explains, noting that shows like Sesame Street test meticulously planned lessons with real children before bringing them to television. YouTube’s flood of AI-improvised children’s shows takes the opposite approach, concerned with performance, algorithmic amplification, and monetization rather than truly enriching children’s lives.
What’s more, since AI tools automate the process of creating a show, channels focused on publishing as many videos as possible may not even review their output before viewers see it. “There won’t necessarily be any way to know how much AI-generated content is reviewed by humans before it is put somewhere,” observes Tracy Pizzo Frey, senior AI adviser at Common Sense Media, a nonprofit focused on media literacy.
YouTube’s upcoming policies appear to allow children’s channels to publish AI content without a single pair of human eyes reviewing it, as long as the videos are labeled as created with the technology. And bad actors may simply choose not to label their videos and see how long they can get away with feeding children unvetted robotic content.
Pizzo Frey believes that both creators and platforms like YouTube must take responsibility for what they offer children. “Proper human oversight, especially of generative AI, is extremely important. That responsibility should not fall entirely on the shoulders of families.”