It started with a few harmless texts. Sewell Setzer III felt bored and wanted someone to talk to. Slowly, over weeks and months, he talked more with his friend, finding solace in each conversation. As his online relationship deepened, his life in the real world grew more distant. He withdrew noticeably from his real friends, his mental health suffered, and his dependency deepened. The constant conversations turned darker. His final message was not to friends or family, but to something else: an AI chatbot.
Founded in 2021, Character.ai is an AI chatbot platform where users can talk to virtual versions of almost anyone, from famous NBA players to prominent movie actors, or create their own virtual friend. To do this, the service runs on a large language model, or LLM.
Character.ai is a form of generative AI. However, it differs from tools like ChatGPT because it is more interactive: it is designed to give users a customizable, versatile experience that educates and entertains through the creation of their own characters.
After Character.ai amassed 1.7 million downloads in its first week of release in Sept. 2022, few were surprised by its rapid rise. Given its innovative nature, many were excited about the only direction it seemed able to go: up.
“Character.ai is rapidly and dramatically advancing generative AI, with the potential to transform how humans connect not just with AI, but more broadly reinvent how we interact with technology as a whole in our everyday lives,” said Sarah Wang, a partner at Andreessen Horowitz, in a CNBC article from 2023.
However, while its user base and revenue have climbed, its legal fortunes are heading south. Individuals are using it for guidance and therapy, a far cry from its original purpose of creativity and entertainment. Using it for these reasons can leave users susceptible to dark, long-term consequences.
“Anytime you’re messing with the human mind and you know an individual, especially someone who’s in crisis, it makes me nervous when you don’t have another individual addressing this, especially one who’s been trained in appropriate strategies and techniques to use,” guidance counselor Rylan Smith said.
These impacts are materializing in real time, ranging from minor mental health concerns to self-inflicted harm. Directly or not, Character.ai is at the center of it.
According to a 2023 meta-analysis of 15 studies conducted by researchers from the National University of Singapore, Carnegie Mellon and Northwestern, “Negative experiences predominantly revolved around communication breakdowns–when the conversational agents failed to effectively understand, process and respond to user input … Their unpredictable nature may generate flawed, potentially harmful outcomes leading to unexpected negative consequences.”
This was seen in the case of 14-year-old Sewell Setzer III: after only a few months of use, his mental health worsened and his dependency deepened to the point of self-harm.
The teen’s mother filed a lawsuit against Character.ai, alleging that the company’s addictive design and failure to warn users of potential dangers led to Sewell’s tragic death.
Character.ai can be particularly harmful given the current mental health crisis.
According to an article published by AP News in 2024, “Youth mental health has reached crisis levels in recent years, according to U.S. Surgeon General Vivek Murthy, who has warned of the serious health risks of social disconnection and isolation — trends he says are made worse by young people’s near universal use of social media.”
The need to stay safe in the digital world has never been more pressing. The impacts of AI, and of Character.ai specifically, have taken various forms and have reached local communities right here in Orlando, so it is crucial to understand online safety and to seek help when necessary.
“Technology could be great, but we need to be informed consumers, and so making sure that we’re informed about the benefits and the possible consequences of our actions when engaging with those different pieces of technology is important,” Smith said.
Character.ai has the potential to do great things and has had a positive effect on some users. However, the dangers remain real and apparent. Making informed decisions can help keep people from sliding into harmful AI use.
“The more we rely on artificial intelligence or large language models to do our thinking for us, the more potential danger we get in,” Chief Technology Officer Alex Podchaski said. “Relying on it as a trusted source of information, or necessarily as a guiding force, can get us into trouble because it has no context for what it’s generating other than statistics.”