![](/sites/sd18.senate.ca.gov/files/2025-02/230327-Padilla_Updated_BANNER_12.jpg)
Senator Padilla Introduces Legislation to Protect Children from Predatory Chatbot Practices
SACRAMENTO – Last week, Senator Steve Padilla (D-San Diego) introduced Senate Bill 243, which would require program developers to implement critical safeguards to protect children and other impressionable users from the addictive, isolating, and influential aspects of artificial intelligence (AI) chatbots.
As AI technology continues to develop, sophisticated chatbot services have grown in popularity among users of all ages. Social chatbots, designed to serve as AI companions, have gained millions of users, many of whom are children. However, as the technology is still developing, it leaves the users to serve as the test subjects as developers continue to refine the modeling parameters.
There have been many troubling examples of how AI chatbots’ interactions with children can be dangerous. In 2021, when a 10-year-old girl asked an AI bot for a “fun challenge to do,” the bot instructed her to “plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs.” In 2023, researchers posing as a 13-year-old girl were given instructions on how to lie to her parents to go on a trip with a 31-year-old man and lose her virginity to him. These interactions may seem trivial, but research conducted at the University of Cambridge shows that children are more likely to view AI chatbots as quasi-human and thus trust them more than adults. When dialogue between children and chatbots goes wrong, the consequences can be dire.
In Florida, a 14-year-old child ended his life after forming a romantic, sexual, and emotional relationship with a chatbot. Social chatbots are marketed as companions that are helpful to people who are lonely or depressed. However, when 14-year-old Sewell Setzer communicated to his AI companion that he was struggling, the bot was unable to respond with empathy or the resources necessary to ensure Setzer received the help that he needed. Setzer’s mother has initiated legal action against the company that created the chatbot, claiming that not only did the company use addictive design features and inappropriate subject matter to lure in her son, but that the bot encouraged him to “come home” just seconds before he ended his life. This is yet another horrifying example of how AI developers risk the safety of their users, especially minors, without the proper safeguards in place.
Due to the novel nature of this technology, AI chatbots lack the regulation necessary to ensure that vulnerable users such as children are properly protected from the possible dangers that this technology poses. SB 243 would provide clearly necessary safeguards for chatbot platforms to protect users, particularly minors.
“Our children are not lab rats for tech companies to experiment on at the cost of their mental health,” said Senator Padilla. “We need common sense protections for chatbot users to prevent developers from employing strategies that they know to be addictive and predatory.”
SB 243 would require operators to avoid addictive engagement patterns that encourage compulsive use of their platforms. Additionally, the bill would require a periodic reminder that chatbots are AI-generated and not human. This bill would also require a disclosure statement to warn children and parents that chatbots might not be suitable for minors. Finally, this bill would require annual reporting on the connection between chatbot use and suicidal ideation to help build a more complete picture of how chatbots can impact users’ mental health.
Senate Bill 243 is supported by researchers working at the forefront of the intersection of AI and consumers, as well as advocates for child welfare.
“We have growing reasons to be concerned about the risks that relational chatbots pose to the health of minors,” said Dr. Jodi Halpern, MD, PhD, UC Berkeley Professor of Bioethics & Co-Director of the Kavli Center for Ethics, Science and the Public. “We would never allow minors to be exposed to products that could harm them without safety testing and guardrails. This is the first bill we are aware of nationally to take an important first step toward creating those guardrails through safety monitoring. We commend Senator Padilla for bringing multiple stakeholders to the table to proactively address this emerging issue."
"The Children's Advocacy Institute at the University of San Diego School of Law applauds Senator Padilla for introducing SB 243, a bill that rightly seeks to prevent the two absolute worst aspects of the AI-chatbot menace to children's safety: children being duped into thinking they are talking to a real person and children being manipulated by profit-at-all-costs Big Tech into becoming, against their will, addictied to spending time with AI fake people not programmed to look out for the best interests of children instead of their real friends and family.” - Ed Howard, Senior Counsel to the Children’s Advocacy Institute, which recently penned an op-ed about the dangers of companion chatbots.
"With nearly half of U.S. adolescents experiencing mental health challenges and suicide ranking as the second leading cause of death for youth aged 10–24, the risks of unregulated AI engagement cannot be ignored,” said Ria Babaria, Mental Health Policy Director at Youth Power Project. “As chatbots become more sophisticated, vulnerable young people may turn to them for reassurance about their struggles, emotions, or even harmful thoughts, without youth recognizing the lack of human oversight and accountability. This bill is a critical step in ensuring AI does not manipulate engagement, promote harmful interactions, or replace real human support. If AI is going to exist in these spaces, we must regulate it because protecting youth mental health needs to come first."
“We are proud to support Senator Padilla in establishing strong guardrails for generative AI chatbot companions, which increasingly shape - and potentially harm - young people's lives,” said Amina Fazlullah, Head of Tech Policy Advocacy for Common Sense Media. “SB 243 includes provisions that will bring much-needed transparency to the risks associated with this technology, and ultimately help the public better understand and address the profound impact that AI-driven, human-like interactions can have on developing minds."
Senate Bill 243 will be heard in the Senate in the coming months.
###
Steve Padilla represents the 18th Senate District, which includes the communities of Chula Vista, the Coachella Valley, Imperial Beach, the Imperial Valley, National City, and San Diego. Prior to his election to the Senate in 2022, Senator Padilla was the first person of color ever elected to city office in Chula Vista, the first Latino Mayor, and the first openly LGBT person to serve or be elected to city office. Website of Senator Steve Padilla: https://sd18.senate.ca.gov/