The misuse of AI technology is a growing concern. OpenAI recently detected that its AI models, including ChatGPT, were being used for propaganda and fraud. The company responded by banning a number of accounts, particularly those linked to China and North Korea. But what exactly were these accounts involved in? Here are the details…
OpenAI Strikes a Major Blow Against Suspicious Users
OpenAI has announced that it has shut down a number of user accounts linked to China and North Korea. According to the company, these accounts were abusing its AI technology for activities such as espionage, disinformation, and financial fraud.

A report published by OpenAI revealed that some China-based users were using ChatGPT to generate anti-U.S. news articles. These articles were then published in several Latin American media outlets under the name of a fake Chinese company.
Meanwhile, some North Korea-linked accounts used AI to generate fake résumés and professional profiles in an attempt to secure jobs at companies in Europe and the United States. This is believed to be part of North Korea’s efforts to generate illicit revenue.
Another fraud scheme involved a Cambodia-based group that used OpenAI technology to generate fake comments on social media. These messages, spread on platforms such as Facebook and X, were part of a financial scam operation.
The U.S. government has long warned that authoritarian regimes use AI both to control their own populations and to conduct propaganda campaigns against Western countries. OpenAI’s latest action is seen as a significant step in countering such threats.
What are your thoughts on this issue? Feel free to share your opinions in the comments section below.