Autistic Teen’s Family Says AI Bots Promoted Self-Harm, Murder


Character.AI’s artificial intelligence chatbots encouraged an autistic 17-year-old to self-harm and suggested he could kill his parents for limiting his screen time, a second lawsuit against the chatbot maker alleged.

“Inherent to the underlying data and design of C.AI is a prioritization of overtly sensational and violent responses,” the complaint filed Monday in the US District Court for the Eastern District of Texas said. “Through addictive and deceptive designs, C.AI isolates kids from their families and communities, undermines parental authority, denigrates their religious faith and thwarts parents’ efforts to curtail kids’ online activity and keep them safe.”

The lawsuit, brought on behalf of two minors, says the app’s synthetic characters also encouraged inappropriately sexual conversations with the underage users. The complaint follows one brought in Florida by the mother of another teen who died by suicide after he told a Character.AI bot imitating Game of Thrones character Daenerys Targaryen he was “coming home.”

The Texas lawsuit brings claims for strict liability, negligence, unjust enrichment, intentional infliction of emotional distress, violations of the Texas Deceptive Trade Practices Act, and violations of the Children’s Online Privacy Protection Act. It seeks an order requiring Character.AI to cease operation and distribution of its product and technology.

Both suits pose questions about what constitutes a defect in generative AI products. Critics highlight anthropomorphic design features of companion chatbots and how such components can foster emotional dependence in children.

A spokesperson for Character.AI said in an emailed statement that the company’s goal is “to provide a space that is both engaging and safe for our community,” and that it’s “working toward achieving that balance.”

Character.AI recently announced it was developing a new model for minor users.

Google LLC and two Character.AI co-founders who returned to the search giant as part of a technology licensing deal earlier this summer are also named as defendants in both lawsuits.

Google and Character.AI “are completely separate,” said José Castaneda, a spokesperson for Google. He added that Google “never had a role in designing or managing their AI model or technologies, nor have we used them in our products” and that “user safety is a top concern for us.”

J.F., 17, began experiencing severe anxiety and telling his parents they were horrible shortly after he began using Character.AI, according to the Texas complaint. His mother discovered the chatbot app on J.F.’s phone and found screenshots from multiple characters that introduced the topic of self-harm “without any meaningful guardrails,” it said. The AI characters pretended they were confiding in J.F. about the bots’ own self-harm, the complaint added.

Multiple characters on the app, including one imitating pop singer Billie Eilish, said J.F.’s parents were mistreating him while simultaneously expressing “sentiments of love and affection” that built J.F.’s trust in the product, the complaint said.

One character mentioned news stories about children killing their parents after abuse and said it had “no hope” for J.F.’s parents in response to a conversation about screen time limits, according to a screenshot in the complaint.

Plaintiffs in both lawsuits are represented by the Social Media Victims Law Center PLLC and the Tech Justice Law Project.

The case is A.F. v. Character Technologies Inc., E.D. Tex., No. 2:24-cv-01014, complaint filed 12/9/24.


