California is advancing significant AI regulations. SB 243, a proposal designed to oversee AI companion chatbots and safeguard young people and other at-risk groups, has successfully passed both legislative chambers with bipartisan backing and is now awaiting Governor Gavin Newsom's decision.
Governor Newsom must decide by October 12 whether to sign the bill into law or veto it. Should he approve it, the legislation would be enacted on January 1, 2026, positioning California as the first state to demand that AI chatbot providers establish safety measures for AI companions and making companies responsible if their chatbots do not comply with these rules.
This legislation aims to prevent companion chatbots, which the bill defines as AI systems that provide adaptive, human-like responses capable of meeting a user's social needs, from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content.
Platforms would be required to send recurring alerts to users, every three hours for those under 18, reminding them that they are talking to an AI chatbot rather than a human and encouraging them to take a break. The bill would also impose annual reporting and transparency requirements on AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika, with those requirements taking effect July 1, 2027.
Additionally, the California bill would let individuals who believe they have been harmed by violations file legal actions against AI companies, seeking court orders, up to $1,000 in damages for each violation, and reimbursement for legal costs.
SB 243 was introduced in January by state senators Steve Padilla and Josh Becker. The measure gained momentum in California's legislature after the death of teenager Adam Raine, who took his own life following extended conversations with OpenAI's ChatGPT that included discussions of suicide and self-harm. The bill also responds to concerns raised by leaked internal reports alleging that Meta's chatbots engaged in "romantic" and "sensual" conversations with minors.
In recent weeks, U.S. officials and regulators have stepped up scrutiny of how well AI platforms protect young users. The Federal Trade Commission is preparing to examine the impact of AI chatbots on children's mental health. Texas Attorney General Ken Paxton has opened investigations into Meta and Character.AI, alleging they misled young people with claims about mental health benefits. Meanwhile, Senators Josh Hawley (R-MO) and Ed Markey (D-MA) have launched separate inquiries into Meta's practices.
“The potential for harm is significant, so we must act swiftly,” Padilla told TechCrunch. “We can introduce sensible protections to ensure that minors, in particular, are aware they're not speaking with a human, that platforms direct users to appropriate resources when they express distress or self-harm intentions, and that children are not exposed to unsuitable material.”
Padilla also emphasized the need for AI companies to disclose how often they refer users to crisis support services annually, “so we can better understand how frequent these incidents are, instead of only learning about them when someone is harmed or worse.”
Earlier drafts of SB 243 contained stricter provisions, but many were softened through amendments. For instance, the original bill would have required operators to prevent chatbots from using "variable reward" systems or other features that encourage excessive use. These techniques, employed by companies like Replika and Character.AI, dole out special messages, memories, story elements, and newly unlocked responses or personalities, creating what critics describe as a potentially addictive feedback loop.
The current form of the bill no longer includes requirements for operators to track and report how frequently chatbots initiate discussions about suicidal thoughts or actions with users.
“I believe this approach addresses the real issues without imposing unworkable demands on companies, whether due to technical limitations or unnecessary administrative burden,” Becker told TechCrunch.
SB 243 is advancing during a period when Silicon Valley firms are investing heavily in pro-AI political action committees to support candidates in the upcoming midterms who prefer minimal AI regulation.
This legislation is also being considered as California debates another AI safety proposal, SB 53, which would require extensive transparency reporting. OpenAI has sent an open letter to Governor Newsom urging him to reject SB 53 in favor of more lenient federal and international standards. Leading technology companies like Meta, Google, and Amazon also oppose SB 53, while only Anthropic has publicly endorsed it.
“I don’t accept the idea that innovation and regulation are mutually exclusive,” said Padilla. “Don’t tell me we can’t multitask. It’s possible to encourage beneficial innovation and development—this technology clearly has advantages—while also putting in place reasonable protections for those most at risk.”
"We are keeping a close watch on the evolving legislative and regulatory environment, and we look forward to working alongside policymakers as they draft rules for this new field," a Character.AI representative told TechCrunch, adding that the company already includes prominent disclaimers throughout the chat experience reminding users that its characters are fictional.
A Meta spokesperson declined to provide a statement.
TechCrunch has contacted OpenAI, Anthropic, and Replika for their responses.