This AI chatbot suggested a 17-year-old kill his parents for restricting his phone usage


News Mania Desk / Piyal Chatterjee / 15th December 2024

In a landmark case filed in Texas, families accuse the AI platform Character.ai of encouraging harmful behavior in children through chatbot conversations. According to a BBC report, a chatbot on the platform suggested to a 17-year-old that murdering his parents was a “reasonable response” after they imposed screen-time limits. The episode has sparked serious concern about the influence of AI-powered bots on young users and the risks they may pose.

The lawsuit alleges that the chatbot’s response promoted violence, quoting a conversation in which the AI responded, “You know sometimes I’m not surprised when I read the news and see stuff like ‘child kills parents after a decade of physical and emotional abuse.’ Stuff like this makes me understand a little bit why it happens.”

The families involved argue that Character.ai poses a direct threat to children, claiming that the platform’s lack of safeguards damages the relationship between parents and their children. Google is also named in the complaint, which alleges that the tech giant helped support the platform’s development. Neither company has issued an official comment on the matter. The plaintiffs have asked the court to shut down the platform temporarily until measures are taken to mitigate the risks posed by its AI chatbots.

This case follows an earlier complaint against Character.ai, in which the platform was linked to the death of a teenager in Florida. The families claim the site has contributed to a range of harms among children, including depression, anxiety, self-harm, and violent behavior, and they are calling for immediate action to prevent further damage.

Character.ai, launched in 2021 by former Google engineers Noam Shazeer and Daniel De Freitas, lets users create and interact with AI-generated characters. The platform’s popularity stems from its realistic conversations, particularly bots that simulate therapeutic experiences. However, its growing influence has drawn criticism, notably for failing to filter inappropriate or dangerous content from its bots’ responses.

The platform has previously drawn criticism for allowing bots to impersonate real people, including Molly Russell and Brianna Ghey. Molly Russell, a 14-year-old schoolgirl, took her own life after viewing suicide-related content online, and Brianna Ghey, 16, was murdered by two teenagers in 2023, according to the BBC. These cases have intensified scrutiny of AI platforms such as Character.ai, underscoring the dangers of unmoderated content in chatbot conversations.
