
North Korean Hackers Use AI to Create Deepfake Military IDs for Phishing Attacks

Published: Monday, September 15 | Updated: Monday, September 15


  • North Korean hackers used ChatGPT to create deepfake military IDs.
  • The phishing attacks targeted South Korean defense-related organizations.
  • Kimsuky group is linked to various espionage efforts against multiple nations.

A North Korean state-sponsored hacking group known as Kimsuky reportedly used ChatGPT to create a deepfake of a South Korean military ID card, which it then deployed in a phishing attack against a targeted organization. According to Genians, a South Korean cybersecurity firm, the fake military ID was meant to lend credibility to the phishing attempt, which linked to malware designed to harvest data from victims' devices, as detailed in reports from SCMP, Business Insider, AA, and IndiaTimes.

Kimsuky's use of generative AI reflects a growing trend among cybercriminals exploiting AI tools for espionage. As highlighted in the reports, Kimsuky has carried out a range of cyberattacks not only against South Korean targets but also in Japan and the United States, suggesting a “global intelligence-gathering mission” assigned by the North Korean government, according to findings from Business Insider and AA.

The effectiveness of these phishing attempts was enhanced by the hackers mimicking trusted military correspondence. Genians revealed that the email addresses used in the scheme impersonated genuine South Korean military domains, making the fraud harder to detect, as reported by SCMP and IndiaTimes.

During its investigation, Genians also found that while AI services like ChatGPT typically reject requests to generate official documents because of legal restrictions, they can be manipulated into producing mock-up IDs when the request is framed as a “sample design.” This is concerning because it shows how such AI platforms can be misused for malicious purposes, according to Business Insider and IndiaTimes.

Other reports indicate that North Korean hackers have also sought remote employment with U.S. tech firms by using generative AI tools to create convincing fake identities, pass coding tests, and deliver legitimate work, a level of cyber deception that is increasing worldwide, as noted by SCMP, Business Insider, and AA.
