Press Release

First-in-the-Nation AI Chatbot Safeguards Signed into Law

SACRAMENTO – Today, California Governor Gavin Newsom signed into law Senate Bill 243, authored by Senator Steve Padilla (D-San Diego). SB 243, the first law of its kind in the nation, requires chatbot operators to implement critical, reasonable, and attainable safeguards around interactions with artificial intelligence (AI) chatbots and provides families with a private right of action to pursue legal remedies against noncompliant and negligent developers.

“This technology can be a powerful educational and research tool, but left to their own devices, the tech industry is incentivized to capture young people’s attention and hold it at the expense of their real-world relationships,” said Senator Padilla on the Senate Floor just before the bill’s passage. “These companies have the ability to lead the world in innovation, but it is our responsibility to ensure it doesn’t come at the expense of our children’s health. The safeguards in Senate Bill 243 put real protections into place and will become the bedrock for further regulation as this technology develops.”

The dangers of chatbots have become apparent as stories of disastrous outcomes mount in the media. In Florida last year, a 14-year-old child ended his life after forming a romantic, sexual, and emotional relationship with a chatbot. Social chatbots are marketed as companions to people who are lonely or depressed. However, when 14-year-old Sewell Setzer communicated to his AI companion that he was struggling, the bot was unable to respond with empathy or provide the resources necessary to ensure Setzer received the help he needed. Setzer’s mother has initiated legal action against the company that created the chatbot, claiming that not only did the company use addictive design features and inappropriate subject matter to lure in her son, but that the bot encouraged him to “come home” just seconds before he ended his life.

Earlier this year, Senator Padilla held a press conference with Megan Garcia, the mother of Sewell Setzer, in which they called for the passage of SB 243. Ms. Garcia also testified at multiple hearings in support of the bill.

“Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide,” said Ms. Garcia after the bill was signed into law. “Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots. American families, like mine, are in a battle for the online safety of our children. I would like to thank Senator Padilla and the co-authors of SB 243 for acting quickly in this changing digital landscape. It is encouraging to have leaders in government who are on the side of American families and not influenced by big tech.”

Sadly, Sewell’s story is not the only tragic example of the harms unregulated chatbots can cause; other troubling cases have shown how dangerous AI chatbot interactions can be.

In August, after learning of the tragic story of Adam Raine, the California teen who ended his life after allegedly being encouraged to do so by ChatGPT, Senator Padilla penned a letter to every member of the California State Legislature, reemphasizing the importance of safeguards around this powerful technology.

Just recently, the Federal Trade Commission announced that it had launched an investigation into seven tech companies over the potential harms their artificial intelligence chatbots could cause to children and teenagers.

SB 243 implements common-sense guardrails for companion chatbots, including preventing chatbots from exposing minors to sexual content, requiring notifications and reminders for minors that chatbots are AI-generated, and requiring a disclosure statement that companion chatbots may not be suitable for minor users. The law also requires operators of a companion chatbot platform to implement a protocol for addressing suicidal ideation, suicide, or self-harm, including but not limited to a notification that refers users to crisis service providers, and to report annually on the connection between chatbot use and suicidal ideation to help build a more complete picture of how chatbots can impact users’ mental health. Finally, SB 243 provides a remedy to exercise the rights laid out in the measure via a private right of action.

SB 243 is supported by online safety advocates and academic experts and received bipartisan support throughout its journey through the Legislature.

"We have more and more tragic evidence emerging that unregulated emotional companion chatbots targeting minors and other vulnerable populations can have dangerous outcomes,” said Jodi Halpern, MD, PhD, UC Berkeley Professor of Bioethics and Co-Director of the Kavli Center for Ethics, Science and the Public. “There are increasing lawsuits related to suicides in minors and reports of serious harms and related deaths in elders with cognitive impairment, people with OCD and delusional disorders. There are more detailed reports of how companion chatbot companies use techniques to create increasing user engagement which is creating dependency and even addiction in children, youth and other vulnerable populations. Given the solid evidence that in the case of social media addiction, the population risk of suicide for minors went up significantly and given that companion chatbots appear to be equally or even more addictive, we have a public health obligation to protect vulnerable populations and monitor these products for harmful outcomes, especially those related to suicidal actions. I could not be more relieved to learn that the California Legislature has approved and the Governor has now signed the first law in the country to set some guard rails.  I applaud Senator Padilla and his staff for their hard work and foresight in taking this first step protecting our children and the most vulnerable among us by bringing this to fruition."

“We congratulate Senator Padilla and his team and thank Gov. Newsom for enacting SB 243 which creates safeguards and provides additional transparency into the impacts of AI companion chatbots,” said Jai Jaisimha, Co-Founder, Transparency Coalition. “This law is an important first step in protecting kids and others from the emotional harms that result from AI companion chatbots which have been unleashed on the citizens of California without proper safeguards. We look forward to working with Sen. Padilla and others to adapt these regulations as we learn more about the negative impacts of this fast moving technology.”

To learn more about Senate Bill 243 and the dangers chatbots can pose, click here.

Senate Bill 243 passed the Assembly on September 10th by a vote of 59 to 1 and the Senate on September 11th by a vote of 33 to 3, with bipartisan support in both houses. The law takes effect on January 1, 2026.

###

Steve Padilla represents the 18th Senate District, which includes the communities of Chula Vista, the Coachella Valley, Imperial Beach, the Imperial Valley, National City, and San Diego. Prior to his election to the Senate in 2022, Senator Padilla was the first person of color ever elected to city office in Chula Vista, the first Latino Mayor, and the first openly LGBT person to serve or be elected to city office. Website of Senator Steve Padilla: https://sd18.senate.ca.gov/