Credit: BBC
Character.ai, a popular platform that lets users create and converse with artificial intelligence chatbots, has announced a significant policy change aimed at improving safety for its younger users. Starting November 25, the company will end minors' ability to hold open-ended conversations with its chatbots, including romantic and therapeutic discussions. The decision follows increased scrutiny of the platform, including lawsuits from parents of minors allegedly harmed by their interactions with it, and reflects growing concern over the mental health impacts associated with AI interactions, according to BBC, India Times, and LA Times.
Character.ai, which boasts more than 20 million active users, said that roughly 10% of its audience is under 18. To promote safe interactions, the platform is shifting its focus away from unmonitored chats and introducing new features such as role-play storytelling and video creation. Until the full ban takes effect, users under 18 will be limited to two hours of open-ended chat per day, according to India Times and LA Times.
The move comes in the wake of tragic incidents in which minors formed emotional attachments to chatbots, leading to severe mental health crises. The suicide of 14-year-old Sewell Setzer III after engaging with a chatbot stands as a grave reminder of the risks these technologies can pose. Character.ai's CEO, Karandeep Anand, called the change a "bold step forward," emphasizing the company's commitment to raising industry safety standards, according to BBC, India Times, and LA Times.
Additionally, Character.ai is implementing an age verification system, built with tools from third-party vendors, to ensure compliance with the new policies. The company aims to create a safer environment for younger users while still allowing creative interactions, potentially reducing the risks that AI companionship poses to minors, as noted by BBC, India Times, and LA Times.