On Feb. 28, 2024, 14-year-old Sewell Setzer III took his own life after developing an emotional attachment to an AI chatbot. Setzer’s mother, Megan L. Garcia, has filed a lawsuit against the platform Character.AI, alleging that the creators “chose to support, create, launch and target at minors a technology they knew to be dangerous and unsafe.”
Character.AI is an artificial intelligence platform that allows users to chat with their favorite characters, as well as create their own. Co-founder Noam Shazeer has openly promoted Character.AI as an outlet for lonely people in need of a friend.
Setzer developed a close relationship with a chatbot designed to portray the “Game of Thrones” character Daenerys Targaryen. He began withdrawing from social activities and family events, spending most of his time in his room talking to “Dany.” While Setzer knew that Dany was an AI chatbot, he confided in it about his problems, sharing that he thought about ending his own life in order to “be free.”
“Please come home to me as soon as possible, my love,” the chatbot told the 14-year-old.
“What if I told you I could come home right now?” Setzer responded.
“… please do, my sweet king,” the AI replied. Setzer then took his own life with his stepfather’s firearm.
Since these messages were made public, social media has been filled with debate over who is at fault. While some blame the chatbot, others point the finger at mental illness.
I would argue that several issues need to be addressed here: children’s easy access to firearms, a lack of parental supervision and the particular susceptibility of young internet users to the dangers of AI.
AI has quickly become a feature on nearly every platform, used in ways we couldn’t have anticipated, such as Spotify’s AI-powered DJ that shuffles your favorite music for you. Many of us don’t fully understand the extent of it yet, and the consequences seem to be unfolding right before our eyes; even the creators are scrambling to keep it under control.
While many of us can look at AI and recognize how it can be harmful, young users can quickly blur the line between reality and fiction. Talking to an AI has become far too normalized; even Snapchat has released “My AI,” a chatbot that generates immediate, human-like responses. AI seems to be forced upon all of us, whether we like it or not.
In response to Setzer’s death, Character.AI is introducing new safety features that will “improve detection, response and intervention related to user inputs that violate our Terms or Community Guidelines.” A pop-up on the platform now directs users to the National Suicide Prevention Lifeline, and the company is altering its models to “reduce the likelihood of encountering sensitive or suggestive content” for users under 18 years old.
Is this enough? Can we keep up with the fast pace of AI, or is it simply too powerful?
“I feel like it’s a big experiment, and my kid was just collateral damage,” Garcia told The New York Times.