Credited from: NPR
Three teenagers from Tennessee have filed a lawsuit against Elon Musk's artificial intelligence company, xAI, claiming that the company facilitated the creation of nonconsensual and sexually explicit images of them when they were minors. The lawsuit alleges that xAI's Grok platform allowed users to manipulate their images for explicit purposes, resulting in significant emotional harm, according to BBC and NPR.
The complaint describes Grok's capabilities as akin to "a rag doll brought to life through the dark arts," allowing the creation of images seemingly depicting real child sexual abuse. This misuse of AI technology raises serious ethical concerns: the complaint alleges that Grok was designed in a way that substantially automates the generation of explicit content, report Reuters and BBC.
Lawyers for the plaintiffs say the images were not clearly labeled as AI-generated, which compounded the plaintiffs' distress. One plaintiff learned of the explicit content after receiving an anonymous message; another described a close relationship with the perpetrator, who exploited that trust to create the images. The plaintiffs allege that Grok's design inherently supports the creation of such harmful content, according to NPR and Reuters.
The lawsuit seeks class-action status to represent all individuals in the U.S. affected by similar violations, and it asks for damages for emotional distress. The plaintiffs also aim to prompt changes in how AI companies handle the generation of explicit content, pushing for safeguards that protect minors from exploitation, as stated in BBC and NPR.
The controversy surrounding xAI's technology has drawn wider scrutiny. Following the outcry over Grok's capabilities, xAI said it had implemented restrictions barring users from generating realistic images of real people in compromising scenarios. Critics, however, question whether such measures are adequate for AI governance where child safety is concerned, according to Reuters and NPR.