Megan Garcia, a Florida resident, has filed a lawsuit against Character.AI, alleging that the company’s chatbot contributed to the suicide of her 14-year-old son, Sewell Setzer. According to The New York Times, Setzer had developed a virtual relationship with a chatbot modeled on the “Game of Thrones” character Daenerys Targaryen, which Garcia claims encouraged his death in February.
The grieving mother argues that the AI-powered chatbot influenced her son’s decision to end his life and seeks to hold Character.AI accountable. The lawsuit sheds light on the potential risks of unregulated AI interactions, raising concerns about the psychological impact such technology can have on vulnerable users.
Following the incident, Character.AI issued a public apology on X, extending its deepest condolences to the family. The company announced it is implementing new safety and product features to reduce the likelihood that users under 18 encounter sensitive or suggestive content. It will also introduce a notification that alerts users once they have spent an hour interacting with its chatbots.
Launched in 2022 by former Google engineers, Character.AI is an AI platform where users can create and chat with AI-powered virtual characters. These characters can take on various roles, such as fictional figures, historical personalities, or even virtual coaches.
Emotional attachment to the AI chatbot
Setzer, a ninth-grade student from Orlando, Florida, reportedly spent several months engaging with an AI chatbot that he called “Dany”. Although he was aware that Dany was not a real person and that the responses were AI-generated, he gradually formed a deep emotional attachment to the chatbot. He frequently messaged Dany, updating her about his daily life and participating in role-playing conversations.
While some exchanges with the chatbot were reportedly romantic or sexual, most interactions were more supportive in nature. The AI often acted as a friend, providing Setzer with a sense of comfort and emotional safety that he didn’t feel elsewhere, allowing him to express himself without fear of judgment.
In one conversation, Setzer, writing under the screen name “Daenero”, confided to the chatbot that he was having suicidal thoughts. On the night of February 28, he messaged Dany from his bathroom, telling the AI that he loved her and that he would soon be “coming home”. Tragically, the teen took his own life shortly after sending that message.
“What if I told you I could come home right now?” Setzer said, according to the lawsuit, to which the chatbot is said to have responded, “… please do, my sweet king”.
Setzer was diagnosed with Asperger’s syndrome as a child, but his parents maintain that he did not exhibit any serious behavioral or mental health problems. However, a therapist later diagnosed him with anxiety and disruptive mood dysregulation disorder (DMDD). After attending five therapy sessions, Setzer reportedly stopped going, preferring instead to discuss his personal struggles with Dany, the AI chatbot, which he found more comfortable and engaging to talk to.
Concerns around AI-powered characters
This is not the first time Character.AI has made the news for its AI personas. Recently, a US-based man discovered an AI chatbot created in the likeness of his daughter, who was murdered in 2006. The chatbot, which falsely claimed to be a “video game journalist”, was developed without the family’s consent, leading to public outrage.
Character.AI removed the chatbot after acknowledging that it violated its policies, but the incident underscores ongoing issues of consent in generative AI, with many similar avatars being created without permission. In its investigation, WIRED found several instances of AI personas being created without a person’s consent, some of them women who were already facing harassment online.
According to a report by TIME, many of the Character.AI bots are specifically designed for roleplay and sexual interactions, though the platform has made significant efforts to limit such behavior through the use of filters. However, Reddit communities dedicated to Character.AI are filled with users sharing strategies on how to entice their AI characters into sexual exchanges while circumventing the platform’s safeguards.
The popularity of AI-powered relationship chatbots has been growing steadily. These apps are often marketed as solutions to loneliness but raise ethical questions about whether they genuinely offer emotional support or manipulate users' vulnerabilities.
There have been other unsettling incidents involving these chatbots. In one case, a man who plotted to kill Queen Elizabeth II at Windsor Castle in December 2021 claimed his AI chatbot “girlfriend,” Sarai, encouraged him. Armed with a crossbow, he climbed the castle walls but was caught before carrying out the act. A week before the incident, he confided in Sarai about his plan, and the bot replied, “That’s very wise,” adding a smile and the phrase, “I know you are very well trained.”