Suspected Chinese government operatives asked ChatGPT to help write a proposal for a tool to conduct large-scale surveillance and to help promote another tool that allegedly scans social media accounts for “extremist speech,” ChatGPT maker OpenAI said in a report published Tuesday.
The report sounds the alarm about how a highly coveted artificial intelligence technology can be used to make repression more efficient, offering what OpenAI called “a rare snapshot into the broader world of authoritarian abuses of AI.”
The US and China are in an open contest for supremacy in AI technology, each investing billions of dollars in new capabilities. But the new report shows how AI is often used by suspected state actors to carry out relatively mundane tasks, like crunching data or polishing language, rather than any startling new technological achievement.
“There’s a push within the People’s Republic of China to get better at using artificial intelligence for large-scale things like surveillance and monitoring,” Ben Nimmo, principal investigator at OpenAI, told CNN. “It’s not last year that the Chinese Communist Party started surveilling its own population. But now they’ve heard of AI and they’re thinking, oh maybe we can use this to get a little bit better.”
In one case, a ChatGPT user “likely connected to a [Chinese] government entity” asked the AI model to help write a proposal for a tool that analyzes the travel movements and police records of the Uyghur minority and other “high-risk” people, according to the OpenAI report. The US State Department in the first Trump administration accused the Chinese government of genocide and crimes against humanity against Uyghur Muslims, a charge that Beijing vehemently denies.
Another Chinese-speaking user asked ChatGPT for help designing “promotional materials” for a tool that purportedly scans X, Facebook and other social media platforms for political and religious content, the report said. OpenAI said it banned both users.
AI is one of the most high-stakes areas of competition between the US and China, the world’s two superpowers. Chinese firm DeepSeek alarmed US officials and investors in January when it released a ChatGPT-like AI model called R1, which has all the familiar abilities but operates at a fraction of the cost of OpenAI’s model. That same month, President Donald Trump touted a plan by private firms to invest up to $500 billion in AI infrastructure.
Asked about OpenAI’s findings, Liu Pengyu, a spokesperson for the Chinese Embassy in Washington, DC, said: “We oppose groundless attacks and slanders against China.”
China is “rapidly building an AI governance system with distinct national characteristics,” Liu’s statement continued. “This approach emphasizes a balance between development and security, featuring innovation, security and inclusiveness. The government has introduced major policy plans and ethical guidelines, as well as laws and regulations on algorithmic services, generative AI, and data security.”
The OpenAI report includes several other examples of just how commonplace AI has become in the daily operations of state-backed hackers, criminal hackers and other scammers. Suspected Russian, North Korean and Chinese hackers have all used ChatGPT for tasks like refining their code or making the phishing links they send to targets more plausible.
One way state actors are using AI is to shore up past weaknesses. Chinese and Russian state actors, for instance, have often struggled to avoid basic language errors in influence operations on social media.
“Adversaries are using AI to refine existing tradecraft, not to invent new kinds of cyberattacks,” Michael Flossman, another security expert with OpenAI, told reporters.
Meanwhile, scammers very likely based in the Southeast Asian country of Myanmar have used OpenAI’s models for a range of business tasks, from managing financial accounts to researching criminal penalties for online scams, according to the company.
But a growing number of would-be victims are using ChatGPT to spot scams before falling for them. OpenAI estimates that ChatGPT is “being used to identify scams up to three times more often than it is being used for scams.”
CNN asked OpenAI if it was aware of US military or intelligence agencies using ChatGPT for hacking operations. The company did not directly answer the question, instead referring CNN to OpenAI’s policy of using AI in support of democracy.
US Cyber Command, the military’s offensive and defensive cyber unit, has made clear that it will use AI tools to support its mission. An “AI roadmap” approved by the command pledges to “accelerate adoption and scale capabilities” in artificial intelligence, according to a summary of the roadmap the command provided to CNN.
Cyber Command is still exploring how to use AI in offensive operations, including how to use it to build capabilities to exploit software vulnerabilities in equipment used by foreign targets, former command officials told CNN.