You’ve Got A Friend In Me: AI Toys, Young Children, and Potential Hazards

By Aaron Spitler

For toymakers intent on engaging the next generation of children, artificial intelligence (AI) could prove to be their trump card. To these companies, AI allows for a degree of personalization and interactivity that was previously not feasible. In a press release announcing its partnership with OpenAI, industry giant Mattel emphasized its desire to bring “the magic of AI to age-appropriate play experiences.” Other companies are already integrating chatbots powered by large language models (LLMs) into dolls and stuffed animals, transforming playthings into companions that can converse, empathize, and bond with users. Given these tantalizing possibilities, toymakers see investment in AI as a way to gain an edge over the competition.

Despite the hype, critics concerned with how AI affects children argue that these “smart toys” pose more risks than rewards. Many AI toys rely on audio provided by children to function, yet privacy laws that safeguard minors’ sensitive information have not kept pace with the speed of innovation. The issues go beyond data protection. Parents worry AI will distort their kids’ ability to recognize and build genuine relationships. Policies are needed not only to assuage adults’ anxieties but also to shield children’s well-being at an important time in their socio-emotional development.

Compromising Data Privacy

AI toys need audio data to become convincing conversational partners. While this feature may be exciting for kids, parents question how toymakers handle their children’s personal information. A recent report from the U.S. Public Interest Research Group (PIRG) Education Fund highlighted the dangers posed by AI-enabled playthings. According to the watchdog organization, toy companies with these products currently on the market rely on third-party contractors for transcription services. The level of data protection varies by vendor, raising the risk that minors’ audio could fall into the hands of bad actors. For instance, scammers could clone a child’s voice from a recorded clip and use it to extort parents, convincing them their children are in harm’s way while demanding money. Policymakers must be mindful of how this sensitive information can be deliberately misused, and craft regulations for how kids’ audio is both managed and shared.

Officials are also ill-equipped to deal with the privacy-related challenges that AI toys present. Tech Policy Press reported that existing U.S. laws on children’s data protection have not accounted for AI’s rapid advancements. Few legislators could have anticipated the emergence of physical products that depend on user data to engage in free-flowing conversation. Beyond broadening their conception of what AI can achieve, officials must consider the web of actors who gain access to minors’ data through these products. Regulators must act swiftly to protect consumers from the AI toys already on shelves today, all while establishing guardrails for the AI toys of tomorrow.

Complicating Childhood Development

Experts in early childhood development warn that objects that behave like humans may confuse kids who are still learning to distinguish what is real from what is not. Impressionable minds may interpret the “organic” responses they receive from their toys as a sign of self-awareness. While an AI toy may seem highly attentive, young children still refining their reasoning skills may not be able to tell a programmed reaction from a personality quirk. Introducing seemingly sentient playthings into children’s lives at this developmentally critical stage may be misguided at best and detrimental at worst.

The congenial, if not ingratiating, nature of AI toys may also inadvertently mislead children about what they can expect from “real life” friendships. Kids may grow accustomed to the frictionless camaraderie offered by these agreeable playthings, and may even come to prefer it over the frustrating and complex relationships they have with their peers. When synthetic relationships win out over human connection, younger users have fewer opportunities to practice navigating uncomfortable social situations. In the long run, this dynamic may set children up for failure, as they are given neither the skills nor the experience to forge lasting bonds with those around them.

Not All Fun And Games

Policymakers can take steps to ensure children’s needs are not minimized in conversations about these products. International standards organizations could offer clear guidance on the appropriate use of AI-powered products that mimic empathetic behavior. Establishing best practices now will be crucial as more toy companies enter this burgeoning market. Moreover, lawmakers could update data privacy statutes to accurately reflect the shifting technological landscape. An adaptive approach will be necessary given the speed at which AI chatbots have evolved. In the absence of regulations governing these products, however, it falls to parents to take action. By lobbying companies for greater transparency into how their children’s data is managed, and by pressing officials to craft legislation addressing the broader risks these toys pose to minors, families can better advocate for young users. With these measures in place, playtime can be safer (and more rewarding) for children curious about what AI has to offer.


Author’s Bio

Aaron Spitler is a researcher whose interests lie at the intersection of human rights and digital technologies. He has worked with numerous organizations in the technology policy space, including the Internet Society, the International Telecommunication Union, and Harvard University’s Berkman Klein Center. He is passionate about how technology, when used responsibly, can be a force for good.

Edited by: Madeleine Gibbons-Shapiro, MPP ’27 / Emma Schwartz, MPP ’27