Character AI in legal trouble after 14-year-old’s devastating loss

DATE POSTED: October 24, 2024

Character AI is facing legal action following the tragic death of 14-year-old Sewell Setzer III, who died by suicide after months of interacting with chatbots on the platform.

His mother, Megan Garcia, has filed a lawsuit against the company, its founders Noam Shazeer and Daniel De Freitas, and Google. The suit claims negligence, wrongful death, and product liability, alleging that the platform lacked the necessary safety measures to protect vulnerable users like her son.

A tragic incident

Setzer had been using Character AI for several months, interacting with bots modeled after fictional characters, including Daenerys Targaryen from Game of Thrones. On February 28, 2024, moments after his last interaction with the platform, Setzer tragically ended his life. The lawsuit raises serious concerns about the platform’s safeguards—or lack thereof—for minors and vulnerable users.

Character AI has come under scrutiny for its anthropomorphized bots that mimic both fictional characters and real-life personas. This personalization, while engaging for many users, blurs the line between fantasy and reality, particularly for teenagers who may not fully grasp the implications of these interactions.

In response to this incident, Character AI implemented updates, including filters and alerts to warn users about sensitive content. However, many critics argue these measures are insufficient.

The lawsuit suggests Character AI prioritized speed over safety in its development. According to the complaint, founder Noam Shazeer voiced frustrations over delays at Google, attributed to concerns about "brand risk," during the development of the Meena LLM. This, the lawsuit claims, points to a pattern of putting AI development ahead of user safety.

This is not the first time Character AI has faced backlash. In a previous case, a chatbot was created without permission based on Jennifer Ann Crecente, a young woman who had passed away 18 years earlier. Her family had to take legal steps to remove her likeness from the platform, highlighting ongoing concerns about consent and privacy in AI.

As AI technology advances, the responsibility to protect users, especially vulnerable ones like teenagers, falls squarely on developers. Garcia's lawsuit argues that Character AI rushed its product to market at the expense of that responsibility.

This heartbreaking case is a reminder that while AI can offer innovative tools and experiences, it must be built and deployed with care.

Image credits: Kerem Gülen/Midjourney