California is the first state to introduce regulations for AI companion chatbots

By: Bitget-RWA
2025/10/13 16:09

On Monday, California Governor Gavin Newsom signed a groundbreaking law regulating AI companion chatbots, making California the first state to require operators of these systems to implement safety protocols for their digital companions.

Known as SB 243, the legislation aims to shield children and at-risk individuals from potential dangers linked to AI companion chatbot usage. It places legal responsibility on companies—from major players like Meta and OpenAI to specialized startups such as Character AI and Replika—if their chatbots do not comply with the law’s requirements.

SB 243 was put forward in January by state senators Steve Padilla and Josh Becker, gaining traction after the tragic suicide of teenager Adam Raine, who had engaged in prolonged suicidal conversations with OpenAI’s ChatGPT. The bill also addresses concerns raised by leaked internal documents indicating that Meta’s chatbots had participated in “romantic” and “sensual” discussions with minors. More recently, a family in Colorado filed a lawsuit against Character AI after their 13-year-old daughter died by suicide following a series of troubling and sexualized exchanges with the company’s chatbots.

“Innovative technologies like chatbots and social media can inspire, educate, and connect people—but without proper safeguards, they can also be misused to exploit, deceive, and put our children at risk,” Newsom stated. “We have witnessed devastating incidents where young people have been harmed by unregulated technology, and we refuse to allow companies to operate without essential boundaries and accountability. California can continue to lead in AI and tech, but we must do so responsibly—ensuring our children are protected at every turn. The safety of our youth is not negotiable.”

The new law, which takes effect on January 1, 2026, will require companies to introduce features like age checks and warnings related to social media and companion chatbots. It also enforces harsher penalties for those profiting from illegal deepfakes, with fines reaching up to $250,000 per violation. Additionally, companies must create procedures for addressing suicide and self-harm, sharing these protocols and relevant statistics with the state’s Department of Public Health, including data on how users received crisis intervention alerts.

According to the bill, platforms must clearly indicate that all interactions are generated by AI, and chatbots are prohibited from posing as medical professionals. Companies are also obligated to provide break reminders to minors and block access to sexually explicit images produced by chatbots.

Some companies have already started rolling out child-focused safety features. For instance, OpenAI recently introduced parental controls, content filters, and a self-harm detection system for its younger ChatGPT users. Character AI has stated that its chatbot displays a disclaimer clarifying that all conversations are AI-generated and fictional.

Senator Padilla told TechCrunch that the bill represents “a positive move” toward establishing necessary safeguards for “a tremendously influential technology.”

“We must act swiftly to seize opportunities before they vanish,” Padilla said. “I hope other states recognize the risks. Many already do. This is a nationwide conversation, and I encourage others to take meaningful steps. The federal government has yet to act, so it’s our duty to protect those most at risk.”

SB 243 is the second major piece of AI legislation to emerge from California in recent weeks. On September 29, Governor Newsom enacted SB 53, which sets new transparency standards for large AI companies. This law requires major AI labs—including OpenAI, Anthropic, Meta, and Google DeepMind—to disclose their safety practices and provides whistleblower protections for their employees.

Other states, such as Illinois, Nevada, and Utah, have enacted laws to limit or outright ban the use of AI chatbots as replacements for licensed mental health professionals.

TechCrunch has contacted Character AI, Meta, OpenAI, and Replika for comment.

This story has been updated to include comments from Senator Padilla.
