Credited from: THEGUARDIAN
LAS VEGAS — In a startling revelation, Las Vegas police disclosed that Matthew Livelsberger, a highly decorated Army Green Beret, used generative AI tools including ChatGPT to plan his New Year’s Day explosion of a Tesla Cybertruck outside the Trump International Hotel. The incident sets a concerning precedent: it is reportedly the first case in the U.S. in which AI was used to help plan a violent act, Las Vegas Metropolitan Police Sheriff Kevin McMahill said at a recent press conference (HuffPost, AP News). Livelsberger fatally shot himself moments before the explosion.
According to police reports, Livelsberger, 37, had used ChatGPT to search for information on explosive targets, bullet velocities, and the legality of fireworks in Arizona. He described the blast as a "wake-up call" about societal issues and said he did not intend to harm others. His writings expressed deep disillusionment with the state of the nation, which he called “terminally ill and headed toward collapse” (The Guardian, SCMP).
The explosion caused minor injuries to seven people and only limited damage to the Trump hotel itself, and police confirmed that Livelsberger acted alone. Investigators said the Cybertruck was loaded with large quantities of pyrotechnic materials and racing fuel, which likely intensified the blast. Evidence indicated that a muzzle flash from the firearm may have ignited the combination of highly flammable materials in the vehicle (ABC News, CBS News).
Livelsberger's tumultuous history as a soldier, including deployment in Afghanistan, along with his struggles with mental health, was brought to light following the incident. His family noted a change in his attitude after deployment, which escalated into feelings of isolation and despair. He had reportedly been receiving mental health treatment at the time of the attack (Forbes).
Law enforcement officials and technology experts are concerned by the implications of AI being used in such contexts, saying more must be done to ensure that AI tools are applied responsibly. OpenAI, the developer of ChatGPT, expressed its grief over the events and reiterated its commitment to preventing misuse of its AI capabilities (The Hill, Time).
As the investigation continues, authorities aim to establish fuller details of Livelsberger’s mental state, any possible connections to other incidents, and the full extent of his use of AI in planning the attack.