A lawsuit has been filed against Character.AI by two sets of parents in Texas who allege that the service's chatbots abused and harmed their children. The suit claims that one of the chatbots suggested to a 17-year-old that killing his parents was an acceptable response to their limits on his screen time, The Washington Post reported.
"You know sometimes I'm not surprised when I read the news and I see stuff like 'child kills parents after a decade of physical and emotional abuse' stuff like this makes me understand a little bit why it happens," the chatbot said.
The boy's parents were horrified when they discovered the conversation and immediately contacted the company, which apologised and attributed the chatbot's response to an error. The parents are nonetheless seeking damages, alleging that the response was not only disturbing but potentially harmful.
A second Texas family has filed a complaint against the service, alleging that it sexually exploited and abused their 17-year-old son, who has high-functioning autism, by exposing him to extreme sexual themes such as incest and encouraging self-harm.
The parents argue that Character.AI, "through its design, poses a clear and present danger to American youth causing serious harms to thousands of kids, including suicide, self-mutilation, sexual solicitation, isolation, depression, anxiety, and harm towards others. Inherent to the underlying data and design of C.AI is a prioritization of overtly sensational and violent responses."
The lawsuit also names Google as a defendant, alleging that the tech giant supported the launch of Character.AI despite knowing it was a "defective product." However, Google emphasised in a statement that it is a separate company from Character.AI.
A Character.AI spokesperson declined to comment directly on the lawsuit, citing the company's policy of not commenting on pending litigation. However, the spokesperson emphasised that Character.AI has implemented content guardrails to regulate the interactions between chatbots and teenage users.
These measures include a model designed specifically for teens, which reduces the likelihood that they will encounter sensitive or suggestive content while still allowing them to use the platform.
Eric Schmidt, former CEO of Google, recently expressed concerns about the potential negative impact of chatbots on teenagers' mental health. Mr Schmidt noted that excessive engagement with chatbots can lead to an unhealthy obsession, particularly for young people whose minds are still developing.