OpenAI Bans Malicious AI Accounts in China, North Korea

OpenAI has taken action against users from China and North Korea who were allegedly exploiting its AI technology for malicious activities, including misinformation campaigns and online fraud.

In a recent report, OpenAI highlighted how authoritarian regimes might attempt to leverage AI-powered tools like ChatGPT for surveillance, opinion manipulation, and cyber operations targeting both their citizens and foreign nations.

The company did not disclose the number of accounts banned or the exact timeline of its enforcement actions.


However, it confirmed that AI-driven methods were used to detect and curb these operations.

Among the documented cases, OpenAI detailed several instances of AI misuse. In one misinformation campaign, users instructed ChatGPT to generate anti-US news articles in Spanish, which were later published by mainstream Latin American news outlets under the byline of an alleged Chinese company.


In another case, AI was used to create fraudulent CVs and online profiles that helped alleged North Korean operatives secure jobs at Western firms under false pretences.

Additionally, a Cambodia-based fraud network used ChatGPT to translate and generate content for deceptive activities across social media and communication platforms such as X (formerly Twitter) and Facebook.

The U.S. government has repeatedly raised concerns over China’s potential misuse of AI to suppress dissent, spread propaganda, and threaten national security.

With over 400 million weekly active users, OpenAI’s ChatGPT remains the world’s most popular AI chatbot, making it a valuable target for both state-backed and independent bad actors.

Meanwhile, OpenAI is in talks to raise $40 billion, with a potential valuation of $300 billion—a move that could set a new funding record for a private company.

As AI technology advances, so do concerns over its exploitation for misinformation, fraud, and cyber threats, prompting tighter security measures from leading AI firms.

Author

  • Abdulateef Ahmed

    Abdulateef Ahmed, Digital News Editor and Research Lead, is a self-driven researcher with exceptional editorial skills. He's a literary bon vivant keenly interested in green energy, food systems, mining, macroeconomics, big data, African political economy, and aviation.
