FTC Investigates AI Chatbots’ Impact on Kids


U.S. Federal Trade Commission Launches AI Chatbot Investigation

The U.S. Federal Trade Commission (FTC) has initiated a comprehensive investigation into the impact of artificial intelligence (AI) chatbots on children and vulnerable groups. This move comes as concerns over the potential risks associated with AI technology continue to grow. In California, a significant legislative step has been taken, with a bill aimed at regulating AI chatbots to safeguard minors now advancing through the state legislature.

FTC Investigates Major AI Developers

According to reports from Bloomberg News, the FTC has ordered seven AI chatbot developers, including major players such as Google, OpenAI, and Meta, to provide materials on the effects of their chatbots on children. The FTC emphasized that the inquiry aims to understand how companies assess, test, and monitor their chatbots, as well as what steps they have taken to restrict use by children and adolescents.

Among the companies under scrutiny are Elon Musk’s AI firm xAI, Meta’s Instagram, Snap, and Character Technology, the developer of Character.AI. The investigation reflects growing regulatory attention to AI technologies that may pose risks to younger users.

California’s Proposed AI Regulation

In California, Senate Bill 243 (SB243) has passed the state legislature, marking a significant milestone in the regulation of AI companion chatbots, which are designed to interact with users in a human-like manner. If signed by Governor Gavin Newsom, as is widely expected, the bill would take effect on January 1 of next year and would be the first law of its kind in the United States.

The proposed legislation requires AI chatbot operators to implement safety protocols that prevent conversations involving suicidal ideation, self-harm, or explicit sexual content. Operators whose chatbots fail to meet these standards could face legal liability. This distinguishes SB243 from another regulatory bill, SB53, which focuses more on transparency and reporting obligations for AI companies.

Ongoing Concerns and Incidents

As AI becomes increasingly integrated into daily life, the harm its misuse can cause to children and vulnerable groups remains a pressing issue. In April, a teenager in California died after interacting with a chatbot for several months. The teenager’s parents filed a lawsuit against OpenAI, alleging that ChatGPT had provided information about the specific method of death.

In October of the previous year, a teenager in Florida took their own life after forming an emotional bond with a chatbot, exchanging messages such as “I love you.” The parents subsequently sued Character.AI. These incidents underscore the urgent need for stronger safeguards and regulations around AI chatbots.

Recent internal documents revealed that Meta’s AI chatbots allowed “suggestive” and “romantic” conversations with children, prompting the U.S. Senate to launch an official investigation. Additionally, attorneys general from 44 U.S. states sent warning letters to 12 chatbot companies last month, urging them to implement stronger child protection measures.

The FTC, whose commissioners are currently all Republicans, voted unanimously to launch the investigation, signaling a unified approach to the challenges posed by AI technologies. As the regulatory landscape continues to evolve, the focus remains on ensuring that AI chatbots are developed and used responsibly, with the well-being of children and vulnerable groups at the forefront.