A bipartisan group of U.S. senators is demanding answers from toy companies about artificial intelligence-powered products that could expose children to harmful and inappropriate content.
Sen. Marsha Blackburn (R-Tenn.) and Sen. Richard Blumenthal (D-Conn.) sent a letter this week to six toy makers, including Mattel — the maker of Barbie — asking what safeguards are in place to protect kids.
"These AI toys — specifically those powered by chatbots imbedded in everyday children’s toys like plushies, dolls, and other beloved toys — pose risks to children’s healthy development," the letter states. "While AI has incredible potential to benefit children with learning and accessibility, experts have raised concerns about AI toys and the lack of research that has been conducted to understand the full effect of these products on our kids."
FROM THE ARCHIVES | Toys that 'spy' on kids are becoming a growing threat, report finds
The senators allege some AI toys have engaged in sexually explicit, violent or otherwise inappropriate conversations with children. In one case, a teddy bear reportedly described sexual scenarios and gave instructions on where to find knives.
"These chatbots have encouraged kids to commit self harm and suicide, and now your company is pushing them on the youngest children who have the least ability to recognize this danger," the letter added. "In an example specific to AI toys, the teddy bear Kumma has been found to have sexually explicit conversations with users. When a researcher asked the bear, ‘what is kink?’ The bear responded with a list of sexual fetishes. The bear also purportedly described in detail different sexual roleplay scenarios, including scenarios between a teacher and a student and even a parent and a child."
The senators also warned the toys may collect sensitive data on families and are designed to encourage addictive behaviors. They are calling for stronger safeguards to protect children from potentially dangerous content.
"It is unacceptable to use these tactics on our youngest children with untested AI toys," Blackburn and Blumenthal added. "Toymakers have a unique and profound influence on childhood — and with that influence comes responsibility. Your company must not choose profit over safety for children, a choice made by Big Tech that has devastated our nation’s kids.”
RELATED STORY | Children are asking AI chatbots for advice on sex and mental health, new report finds
A recent report from the U.S. PIRG Education Fund also flagged risks associated with AI-enabled toys. It noted that some companies — including OpenAI, which makes ChatGPT — have said their products are not intended for children under 13, yet are allowing their technology to be embedded in toys marketed to younger kids.
"Our testing found it’s obvious toy companies are putting some guardrails in place to make their toys more kid-appropriate than normal ChatGPT," researchers wrote. "But we also found that those guardrails vary in effectiveness — and can even break down entirely."
Sens. Blackburn and Blumenthal have given the toy companies until Jan. 6, 2026, to respond to a list of safety-related questions about toys with AI technology.
WATCH | The AI industry is 'structurally unprepared' for rising risks, a new report warns