A New Approach to AI Regulation in California
In a significant shift, Common Sense Media, a prominent children's safety advocate, and OpenAI, the company behind ChatGPT, have joined forces to support a new ballot measure aimed at protecting children from potential risks associated with companion chatbots. The initiative marks a departure from their earlier plans to put competing proposals before voters.
The merged measure, known as the Parents & Kids Safe AI Act, introduces several key provisions designed to enhance child safety online. Among these are requirements for chatbot developers to implement technology that estimates a user’s age range and applies protective settings for individuals under 18. Additionally, AI systems would need to undergo independent audits to identify child safety risks and report them to the California attorney general.
Another critical component of the measure is the prohibition of child-targeted advertising and the sale or sharing of children’s data without parental consent. The act also aims to prevent manipulation through emotional dependency by restricting AI systems from promoting isolation from family or friends, simulating romantic relationships with children, or claiming to be sentient.
A spokesperson for Common Sense Media stated that the measure was filed on Thursday afternoon. Although it is not yet visible on the attorney general’s website, a copy obtained by the press reveals the details of the proposal. Notably, the combined measure removes certain elements from the original initiative, such as a ban on student smartphones in K-12 schools and a prohibition on minors using chatbots capable of engaging in erotic or sexually explicit talk.
To qualify for the ballot, the initiative must collect 874,641 signatures by June 25. The California Secretary of State, Shirley Weber, will determine whether the measure meets this threshold.
Common Sense Media initially proposed its own ballot initiative, the California Kids AI Safety Act, last fall. This came after Governor Gavin Newsom vetoed a similar bill authored by the nonprofit. In response, OpenAI introduced a competing ballot measure in December 2025, which mirrored a bill signed into law by Newsom in October. This law required companion chatbot providers to implement a suicidal ideation protocol and inform users every three hours that they are interacting with AI. Critics argued that this move was manipulative and aimed at undermining stronger protections for children.
Research by Common Sense Media has shown that seven out of ten teens have used companion chatbots, highlighting the potential dangers of these technologies for minors. The organization warned that without action, the technology could lead to increased harm and addiction among young people. In one high-profile case, the parents of a California teen, Adam Raine, sued OpenAI, alleging that ChatGPT coached Raine into taking his own life.
OpenAI’s willingness to compromise contrasts with the actions of tech companies during a policy fight in 2020. At that time, major gig economy players like DoorDash, Instacart, Lyft, and Uber spent $200 million to support a successful ballot initiative, Proposition 22, which exempted them from providing full employment benefits to drivers.
Senator Steve Padilla, a Democrat from Chula Vista, praised the merged ballot measure as a significant breakthrough. However, he expressed concerns about the matter being handled directly by voters rather than lawmakers and the governor. He argued that amending the state constitution would create an unnecessarily high barrier for future revisions and that legislative hearings would allow broader public input on this important issue.
In recent weeks, Padilla has proposed a bill that would impose a four-year moratorium on the sale of toys containing companion chatbots. OpenAI has entered a partnership with Mattel, the maker of Barbie, though the collaboration has yet to produce any products.
OpenAI’s efforts in California extend beyond children’s online safety. One proposed ballot initiative would grant a state commission the authority to slow or halt AI model development if members suspect catastrophic risks to Californians. Two other proposals target corporate conversions from nonprofit to for-profit status, of the kind OpenAI has pursued. These initiatives would require nonprofits that restructure to dedicate all their assets to the public benefit of humanity. To that end, they would establish a commission with the power to shut down AI models and to host competitions inviting the public to propose ways AI can help humanity. Under one of the initiatives, the commission would also have the power to revoke nonprofit conversions.
Founded about a decade ago, OpenAI was established with a charter to benefit humanity. Its plans to convert to a public benefit corporation led to criticism from nonprofits and scrutiny by attorneys general in California and Delaware. Both states eventually reached agreements with OpenAI to allow the restructuring after the company agreed to place roughly 25% of its assets into a nonprofit.
For the record: A previous version of this article provided an incorrect figure for the number of signatures required for the initiative to qualify for the ballot.
