OpenAI exposed a North Korean scheme using ChatGPT for IT fraud

OpenAI has blocked a number of ChatGPT accounts that appeared to be part of a large-scale scheme in which North Korean IT workers posed as American professionals. The main goal of the operation was to generate funding for North Korea's missile program, Axios reports.
North Korea has for years used fake identities to place its citizens in remote jobs at Western IT companies. In this way they not only earn money for the regime, but also potentially gain access to sensitive data and valuable intellectual property. It was recently revealed that these efforts also involved the use of ChatGPT.
The blocked accounts used artificial intelligence to automate key steps of the fraud, such as:
- writing cover letters;
- completing technical tasks during the hiring process;
- configuring VPNs;
- creating deepfake videos;
- writing scripts that simulated employee activity on a computer.
OpenAI also discovered attempts to use ChatGPT to mass-generate resumes tailored to specific job postings and skill profiles. In addition, AI was used to recruit US citizens who were lured into so-called "laptop farms": locations where North Korean workers operated real American devices, creating the illusion of a US presence.
Recall that in 2024, North Korean hackers used LinkedIn to contact employees of Western companies while posing as recruiters. They convinced one employee to complete a test task that served as cover for installing malware. As a result, the attackers managed to steal cryptocurrency worth 308 million dollars.