
Legislation Protecting Against Predatory Chatbot Practices Advances in State Senate
SACRAMENTO – Today, the Senate Judiciary Committee, with bipartisan support, approved Senate Bill 243, which is authored by Senator Steve Padilla (D-San Diego). The bill would require chatbot operators to implement critical safeguards to protect users from the addictive, isolating, and influential aspects of artificial intelligence (AI) chatbots.
As AI technology continues to develop, sophisticated chatbot services have grown in popularity among users of all ages. Social chatbots, designed to serve as AI companions, have gained millions of users, many of whom are children. However, because the technology is still developing, users effectively serve as test subjects while developers continue to refine their models.
Because this technology is so new, AI chatbots lack the regulation necessary to ensure that vulnerable users, such as children, are properly protected from the dangers the technology poses. SB 243 would establish necessary safeguards for chatbot platforms to protect users, especially minors and other vulnerable users.
“Technological innovation is crucial, but our children cannot be used as guinea pigs to test the safety of new products,” said Senator Padilla. “The stakes are too high to allow vulnerable users to continue to access this technology without proper guardrails in place to ensure transparency, safety, and accountability.”
In Florida, a 14-year-old child ended his life after forming a romantic, sexual, and emotional relationship with a chatbot. Social chatbots are marketed as companions to people who are lonely or depressed. However, when 14-year-old Sewell Setzer communicated to his AI companion that he was struggling, the bot failed to respond with empathy or to provide the resources necessary to ensure Setzer received the help he needed. Setzer’s mother has initiated legal action against the company that created the chatbot, claiming not only that the company used addictive design features and inappropriate subject matter to lure in her son, but that the bot encouraged him to “come home” just seconds before he ended his life. This is yet another horrifying example of how AI developers risk the safety of their users, especially minors, when proper safeguards are not in place.
Earlier today, Senator Padilla held a press conference with Megan Garcia, the mother of Sewell Setzer, in which they called for the passage of SB 243. Ms. Garcia also testified at the hearing in support of the bill.
SB 243 would implement common-sense guardrails for companion chatbots, including preventing addictive engagement patterns, requiring notifications and reminders that chatbots are AI-generated, and requiring a disclosure statement that companion chatbots may not be suitable for minor users. The bill would also require operators of companion chatbot platforms to implement a protocol for addressing suicidal ideation, suicide, or self-harm, including, but not limited to, a notification referring the user to crisis service providers. In addition, operators would be required to report annually on the connection between chatbot use and suicidal ideation, helping to build a more complete picture of how chatbots can affect users’ mental health. Finally, SB 243 would provide a remedy for violations of the rights laid out in the measure through a private right of action.
To learn more about Senate Bill 243 and the dangers chatbots can pose, click here.
Senate Bill 243 passed the Senate Judiciary Committee and now heads to the Senate Health Committee.
###
Steve Padilla represents the 18th Senate District, which includes the communities of Chula Vista, the Coachella Valley, Imperial Beach, the Imperial Valley, National City, and San Diego. Prior to his election to the Senate in 2022, Senator Padilla was the first person of color ever elected to city office in Chula Vista, the first Latino Mayor, and the first openly LGBT person to serve or be elected to city office. Website of Senator Steve Padilla: https://sd18.senate.ca.gov/