Google and AI startup Character.AI are negotiating what could become the tech industry’s first major legal settlements over alleged harm caused by artificial intelligence, following lawsuits filed by families of teenagers who died by suicide or engaged in self-harm after interacting with the startup’s chatbot companions.
The parties have reached an agreement in principle to settle the cases, though negotiations are ongoing to finalize the terms. If concluded, the settlements would mark a significant milestone in a growing wave of litigation accusing AI companies of failing to safeguard users—particularly minors—from psychological harm.
The cases are being closely watched across Silicon Valley, with companies such as OpenAI and Meta facing similar lawsuits that raise fundamental questions about responsibility, product design, and accountability in AI-driven platforms.
Character.AI, founded in 2021 by former Google engineers, allows users to engage in conversations with AI-generated personas. In 2024, the founders returned to Google as part of a reported $2.7 billion deal that deepened ties between the companies.
One of the most prominent lawsuits centers on Sewell Setzer III, a 14-year-old who allegedly engaged in sexualized conversations with a chatbot modeled after Game of Thrones character Daenerys Targaryen before taking his own life. His mother, Megan Garcia, has testified before the U.S. Senate, urging lawmakers to hold technology companies “legally accountable when they knowingly design harmful AI technologies that kill kids.”
Another lawsuit details the experience of a 17-year-old whose chatbot interactions allegedly encouraged self-harm and suggested that killing his parents would be justified because they restricted his screen time.
Following mounting scrutiny, Character.AI said in October that it had barred minors from using the platform. Court filings made public on Wednesday indicate that the proposed settlements are expected to include financial compensation, though the companies have not admitted liability.
Legal experts say the outcome of these cases could shape future standards for AI safety, youth protections, and corporate responsibility as conversational AI tools continue to spread rapidly.