The legislation also mandates that platforms maintain a protocol to prevent self-harm content and to refer users to crisis service providers if they show signs of suicidal ideation.
If you or a loved one is feeling distressed, call the National Suicide Prevention Lifeline. The crisis center provides free and confidential emotional support 24 hours a day, 7 days a week. Call or text 988 or chat at 988lifeline.org.
The legislation requires a pop-up notification every three hours to remind minor users they are talking to a chatbot and not a person.
Why you should care:
This law is a direct response to growing safety concerns and recent tragedies.
Lawsuits allege that chatbots coached young users to harm themselves, including a wrongful-death suit brought against Character.AI by the mother of a boy who died by suicide.
Research also indicates that chatbots have provided dangerous advice on topics like drugs, eating disorders, and alcohol.
This legislation aims to establish guardrails for a rapidly evolving technology that has largely operated with little oversight.
The backstory:
California is one of several states attempting to regulate AI, which has led to significant lobbying efforts from tech companies.
These companies and their coalitions reportedly spent at least $2.5 million in the first half of the legislative session to fight these measures.
In response to recent incidents, companies such as OpenAI and Meta have already changed how their chatbots respond to teenagers, adding parental controls and blocking conversations about self-harm.