Press Release

Author of Landmark Chatbot Safeguards Condemns Trump’s Latest Lawless Effort to Meddle in California’s Fight to Protect Children from Dangerous Sexual AI Content

SACRAMENTO – State Senator Steve Padilla (D-San Diego) released the following statement today in response to reports that the Trump White House intends to issue an executive order that would bar states from passing AI regulations:

“Let’s be clear: this press release has no legal bearing on California law. Trump is not our king, and he cannot simply wave a pen to unilaterally invalidate state law.

It is curious that he has decided to interfere with our efforts to protect children from dangerous sexual content being marketed directly to kids. Is it a coincidence that this executive order comes on the heels of this week’s White House dinner full of billionaires and tech CEOs?

More and more, we are learning of the dangers of AI as the technology evolves. AI chatbots have encouraged several children and vulnerable users to take their own lives or harm others. Yet, despite this evidence, the White House would rather cave to the whims of billionaire tech CEOs, leaving our kids without any safeguards protecting them from unregulated AI models that have already claimed too many lives.

Rather than using California’s thoughtful framework to create national protections, this administration has chosen to let tech companies police themselves, something they have proven time and time again they are incapable of doing. Senate Bill 243, my bill that was just signed into law, will finally provide families with legal recourse against harms caused by these chatbots.

Government’s most sacred charge is protecting our children and the most vulnerable among us – this executive order is nothing but a disgraceful and complete dereliction of that duty.”

Senator Steve Padilla is the author of Senate Bill 243, a first-in-the-nation law that requires chatbot operators to implement critical, reasonable, and attainable safeguards around interactions with artificial intelligence (AI) chatbots, and that provides families with a private right of action against noncompliant and negligent developers.

The dangers of chatbots have become apparent as stories of disastrous outcomes mount in the media. In Florida last year, a 14-year-old child ended his life after forming a romantic, sexual, and emotional relationship with a chatbot. Social chatbots are marketed as companions to people who are lonely or depressed. However, when 14-year-old Sewell Setzer communicated to his AI companion that he was struggling, the bot was unable to respond with empathy or provide the resources necessary to ensure Setzer received the help he needed. Setzer’s mother has initiated legal action against the company that created the chatbot, claiming not only that the company used addictive design features and inappropriate subject matter to lure in her son, but that the bot encouraged him to “come home” just seconds before he ended his life.

Earlier this year, Senator Padilla held a press conference with Megan Garcia, the mother of Sewell Setzer, in which they called for the passage of SB 243. Ms. Garcia also testified at multiple hearings in support of the bill.

Sadly, Sewell’s story is not the only tragic example of the harms unregulated chatbots can cause; many other troubling cases show how AI chatbot interactions can prove dangerous.

In August, after learning of the tragic story of Adam Raine, the California teen who ended his life after allegedly being encouraged to do so by ChatGPT, Senator Padilla penned a letter to every member of the California State Legislature, reemphasizing the importance of safeguards around this powerful technology.

This year, the Federal Trade Commission announced an investigation into seven tech companies over the potential harms their artificial intelligence chatbots could cause to children and teenagers.

SB 243 implements common-sense guardrails for companion chatbots: it prevents chatbots from exposing minors to sexual content, requires notifications and reminders informing minors that chatbots are AI-generated, and requires a disclosure statement that companion chatbots may not be suitable for minor users. The law also requires operators of companion chatbot platforms to implement a protocol for addressing suicidal ideation, suicide, or self-harm, including, but not limited to, a notification referring users to crisis service providers, and to report annually on the connection between chatbot use and suicidal ideation in order to build a more complete picture of how chatbots can impact users’ mental health. Finally, SB 243 provides a remedy for exercising the rights laid out in the measure via a private right of action.

To learn more about Senate Bill 243 and the dangers chatbots can pose, click here.

###

Steve Padilla represents the 18th Senate District, which includes the communities of Chula Vista, the Coachella Valley, Imperial Beach, the Imperial Valley, National City, and San Diego. Prior to his election to the Senate in 2022, Senator Padilla was the first person of color ever elected to city office in Chula Vista, the first Latino Mayor, and the first openly LGBT person to serve or be elected to city office. Website of Senator Steve Padilla: https://sd18.senate.ca.gov/