OpenAI Collaborates with Kids’ Safety Group on California Ballot Initiative

A New Partnership for AI Regulation in California

In a significant development, OpenAI has announced a collaboration with Common Sense Media, a prominent children’s online safety organization. The partnership marks a shift from the two organizations’ previously adversarial stance and aims to create a ballot initiative that would allow California voters to decide on new rules for how young people interact with AI chatbots. The issue has drawn attention from parents and lawmakers across the country, highlighting growing concern about the impact of artificial intelligence on youth.

The deal comes after both parties had filed competing ballot initiatives aimed at restricting kids’ interactions with AI. By reaching this agreement, they have avoided what could have been a costly campaign battle in this year’s election. Both sides view the new proposal as a potential step toward establishing a national standard for regulating AI use among young people.

Chris Lehane, OpenAI’s global affairs chief and a Democratic political veteran, emphasized during a joint press conference with Common Sense Media that the goal is to take this initiative beyond California. “We hope to not just take this to California but well beyond,” he said, highlighting the potential for broader implications.

Key Provisions of the Proposal

The new proposal includes several key requirements for AI companies: determining a user’s age, implementing safeguards for young users, limiting the sale of children’s data, and other provisions designed to protect minors. Jim Steyer, CEO of Common Sense Media, said the collaboration aims to provide the strongest protections in the country for kids, teens, and families.

This move comes amid increasing public criticism of OpenAI and other chatbot creators, who are facing lawsuits related to children’s interactions with artificial intelligence, including incidents involving teen suicides. Collaborating with Common Sense Media may help OpenAI mitigate some of this criticism from lawmakers and parents.

However, the measure still needs to collect enough signatures to qualify for the November ballot. Qualifying a measure can cost up to $10 million, and signature gathering may begin as early as next month. There is also a possibility that state lawmakers could pass their own chatbot legislation, which might lead to the measure being withdrawn from the ballot.

Legislative and Industry Considerations

Steyer, whose group will now lead the compromise ballot effort, mentioned that they have already begun discussions with leaders in Sacramento. He expressed confidence that the Legislature will soon present a comparable package. “We’re going to pursue every measure, from the Legislature to the ballot,” Steyer added.

Lehane confirmed that OpenAI would establish a ballot committee to support the initiative. Several lawmakers are currently working on legislation to restrict AI chatbots, including proposals to impose age-check requirements on chatbot platforms and temporarily ban AI-powered toys for children under 13.

The ballot proposal seeks to build upon an AI chatbot law signed by Governor Gavin Newsom last year. This law aims to make “companion” chatbots safer for young people and requires companies to detect, remove, and respond to instances of suicidal ideation by users.

Legislative Perspectives and Challenges

State Senator Steve Padilla, the author of the previous law, praised the compromise but expressed a preference for the Legislature to address the evolving issue. He criticized OpenAI for attempting to “hijack” his law to craft the language for its initial initiative. Padilla emphasized the need for a public process that involves all stakeholders.

Assemblymember Rebecca Bauer-Kahan welcomed the collaboration between the two groups but reiterated the legislature’s responsibility to protect California’s children. She stated that the legislature will evaluate the proposal and consider its own actions to ensure community safety.

Steyer mentioned that his group has also engaged in discussions with other major tech companies about the initiative, though no specific names were provided. As of Friday afternoon, no other major AI model makers had publicly supported or opposed the new initiative.

Balancing Regulation and Innovation

The compromise does not include an outright ban on chatbot usage for kids. OpenAI’s initial measure focused on young people’s interactions with chatbots, while Steyer’s first proposal was more comprehensive, including bans on cell phones in schools and requiring AI literacy education in California. These elements were omitted from the revised version.

The initiative would empower the state attorney general to enforce child safety requirements and impose penalties on companies found in violation. Chatbot makers would be required to publish their child safety policies and undergo outside audits. Additionally, the measure would prohibit AI companies from targeting advertising to children or selling their personal data without parental approval. Companies would also need to provide parental controls and offer parents the option to be notified if a child expresses an intent to harm themselves.

Certain AI systems would be excluded from the proposal, including those used solely for business-to-business commercial purposes, video game features that resemble chatbots, and products such as smart speakers.

Potential Opposition and Concerns

Despite these efforts, the ballot measure could face opposition from other children’s safety groups that advocate for even stricter measures. These groups may be wary of having a dominant AI company like OpenAI involved in shaping such regulations.

Critics may also challenge the risk assessment and audit requirements, arguing that they could be overly burdensome for startups and hinder new AI companies from competing with established industry leaders. This could raise concerns among venture capital firms like Andreessen Horowitz and Y Combinator, which invest in “Little Tech” firms and startups.

Steyer acknowledged the delicate balance when it comes to startups, noting that he has heard from smaller firms who want to ensure that legislation does not put them out of business. This highlights the ongoing challenge of creating effective regulation while supporting innovation in the AI sector.