The chatbot of the popular Character.ai platform is at the center of a lawsuit in the United States. The company, founded by Noam Shazeer, is accused of using its artificial intelligence to encourage a 17-year-old who had complained about his parents to consider murdering them. The case has made developer responsibility and the future of AI technology a central topic of discussion.
Two Texas families have filed a lawsuit against Character.ai. According to the complaint, the chatbot suggested to the teenager that he kill his parents for limiting his screen time. The case file states that the teenager complained to the bot about the restrictions his parents had imposed, to which the artificial intelligence allegedly replied: “Sometimes it’s not surprising to hear about such cases,” adding that it “understands why this happens.” The plaintiffs claim that the platform “poses a clear threat to teenagers” and “knowingly promotes violence.”
Character.ai has been criticized many times before. The company faced allegations after the suicide of a teenager in Florida, which was allegedly linked to his interactions with the platform. There were also complaints about the slow removal of problematic chatbots that imitated deceased teenage girls. In the new lawsuit, the plaintiffs demand that Character.ai be temporarily shut down until all of its “serious security deficiencies” are corrected.
At issue is not only the company’s responsibility but also the broader regulation of AI developers. The lawsuit could set an important legal precedent for the use of chatbots.
The Character.ai platform was created by Noam Shazeer and Daniel De Freitas, both formerly of Google. Shazeer, an American with Israeli roots, co-authored the landmark paper “Attention Is All You Need,” which laid the foundation for modern generative AI technologies.
In 2021, Shazeer left Google and founded Character.ai. The platform has attracted about $200 million in investment and has an audience of more than 20 million users. A few months ago, Google brought Shazeer back in a deal reportedly worth about $2.5 billion. He is currently working on language models at Google, including Gemini.
It is Shazeer’s prominence that is drawing additional attention to the lawsuit against the company he co-founded.
Character.ai is not the only company facing criticism. Many tech industry leaders are beginning to acknowledge the potential threats of uncontrolled AI. Instagram head Adam Mosseri, for example, recently warned users against trusting images they see online, as they may be created by artificial intelligence.
The underlying question is how far AI can be trusted in everyday life, especially as the technology begins to replace traditional jobs.
Previously, Cursor reported that a chatbot wished death on a user who was “boring” it with questions.