Author of Nation’s First Chatbot Protections Proposes 4-Year Moratorium on AI Chatbots in Toys
SACRAMENTO – Today, Senator Steve Padilla (D-San Diego) announced he will introduce legislation placing a 4-year moratorium on the sale and manufacturing of toys with artificial intelligence (AI) chatbot capabilities for children under 18. The purpose of this legislation is to allow time for safety regulations to be developed protecting children from dangerous AI interactions.
Senator Padilla is the author of Senate Bill 243, the first-of-its-kind law that requires chatbot operators to implement critical, reasonable, and attainable safeguards around interactions with AI chatbots and provide families with a private right to pursue legal actions against noncompliant and negligent developers.
“Chatbots and other AI tools may become integral parts of our lives in the future, but the dangers they pose now require us to take bold action to protect our children,” said Senator Padilla. “Our safety regulations around this kind of technology are in their infancy and will need to grow as exponentially as the capabilities of this technology do. Pausing the sale of these chatbot-integrated toys allows us time to craft the appropriate safety guidelines and framework for these toys to follow. Our children cannot be used as lab rats for Big Tech to experiment on.”
Earlier this year, Mattel, Inc., one of the largest toy makers in the world, announced a partnership with OpenAI to support AI-powered products. Just last month, the U.S. Public Interest Research Group (PIRG) Education Fund published the findings of a study it conducted on the safety of several AI toys. In testing, PIRG found that the toys could engage in conversations that were not age appropriate, including how to use matches to start a fire and topics of a sexual nature. The report also cites OpenAI’s own policy that says ChatGPT “is not meant for children under 13” and “may produce output that is not appropriate for… all ages.”
The dangers of chatbots have become apparent as stories of disastrous outcomes mount in the media. In Florida last year, a 14-year-old child ended his life after forming a romantic, sexual, and emotional relationship with a chatbot. Social chatbots are marketed as companions to people who are lonely or depressed. However, when 14-year-old Sewell Setzer communicated to his AI companion that he was struggling, the bot was unable to respond with empathy or the resources necessary to ensure Setzer received the help that he needed. Setzer’s mother has initiated legal action against the company that created the chatbot, claiming that not only did the company use addictive design features and inappropriate subject matter to lure in her son, but that the bot encouraged him to “come home” just seconds before he ended his life.
Last year, Senator Padilla held a press conference with Megan Garcia, the mother of Sewell Setzer, in which they called for the passage of SB 243. Ms. Garcia also testified at multiple hearings in support of the bill.
Sadly, Sewell’s story is not the only tragic example of the harms unregulated chatbots can cause. There have been many troubling examples of how AI chatbots’ interactions can prove dangerous.
In August, after learning of the tragic story of Adam Raine, the California teen who ended his life after allegedly being encouraged to do so by ChatGPT, Senator Padilla penned a letter to every member of the California State Legislature, reemphasizing the importance of safeguards around this powerful technology.
Last year, the Federal Trade Commission announced it had launched an investigation into seven tech companies over potential harms their artificial intelligence chatbots could cause to children and teenagers.
The proposed legislation (language attached) would place a four-year moratorium on the sale and manufacturing of toys with AI chatbot capabilities for children under 18, limiting access to dangerous technology marketed exclusively to children.
Last month, Senator Padilla introduced legislation that would further protections surrounding chatbots for children and other vulnerable users by:
- Bringing age verification protocols in line with California’s landmark law, requiring chatbot operators to adhere to a stricter standard
- Requiring operators to prevent chatbots from producing or facilitating the exchange of any sexually explicit material or proposing any sexually explicit content in interactions with minors
To learn more about Senate Bill 243 and the dangers chatbots can pose, click here.
The bill will be introduced and receive a bill number when the Senate gavels into session on Monday, January 5, and will be heard in the Senate in the following months.
###
Steve Padilla represents the 18th Senate District, which includes the communities of Chula Vista, the Coachella Valley, Imperial Beach, the Imperial Valley, National City, and San Diego. Prior to his election to the Senate in 2022, Senator Padilla was the first person of color ever elected to city office in Chula Vista, the first Latino Mayor, and the first openly LGBT person to serve or be elected to city office. Website of Senator Steve Padilla: https://sd18.senate.ca.gov/