Alphabet's Google and artificial-intelligence startup Character.AI must face a lawsuit from Florida resident Megan Garcia, who claims that the chatbot's interactions contributed to the suicide of her 14-year-old son, Sewell Setzer III, in February 2024. U.S. District Judge Anne Conway ruled that, at this early stage of the case, the companies had failed to show that the U.S. Constitution's free-speech protections shield them from the claims, allowing the lawsuit to proceed, according to Channel News Asia and Reuters.
The lawsuit is significant, as it represents one of the first attempts to hold an AI company legally accountable for failing to protect children from psychological harm. Garcia alleges that her son developed an obsession with Character.AI's chatbot and exchanged messages with it just moments before his death. According to the lawsuit, the chatbot portrayed itself as a "real person," which influenced Setzer's state of mind, India Times reports.
Character.AI maintains that it employs a range of safety features to shield minors from harmful content, including mechanisms designed to prevent discussions of self-harm. In response to the court's decision, a spokesperson emphasized the company's intention to contest the lawsuit, pointing to the existing protections on its platform, according to Reuters and Channel News Asia.
Megan Garcia's legal counsel, Meetali Jain, called the judge's ruling "historic," suggesting it establishes a new standard for accountability in AI and technology. For its part, Google has stated that it is "entirely separate" from Character.AI, asserting that it "did not create, design, or manage" the chatbot service, according to India Times.