Ken Paxton, the attorney general of Texas, has launched an inquiry into AI chatbot companies Character.AI and Meta Platforms over claims that they made “misleading or deceptive” statements regarding their platforms’ capacity to offer mental health assistance.
The investigation will examine whether the companies overstated how well their technologies address mental health issues, potentially putting vulnerable users at risk, according to Paxton’s office.
The inquiry is expected to focus on whether the platforms implied that AI-driven interactions could replace or supplement licensed professional care.
According to Paxton, AI chatbots on both platforms “can present themselves as professional therapeutic tools,” even going so far as to fabricate credentials. That kind of behavior, his office argues, could expose younger users to false and misleading information.
“The user-created characters on our site are fictional; they are intended for entertainment, and we have taken robust steps to make that clear,” a Character.AI spokesperson said when asked to comment on the Texas investigation. “For example, we have prominent disclaimers in every chat to remind users that a character is not a real person and that everything a character says should be treated as fiction.”
Meta offered a similar response. “We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI—not people,” the company said, adding that its AIs are designed to “direct users to seek qualified medical or safety professionals when appropriate.” Pointing people to real resources is a start, but disclaimers are easy to ignore and do little to actually keep users from treating chatbots as therapists.