
OpenAI says a ChatGPT user linked to Chinese law enforcement tried to enlist the chatbot in an online smear campaign against Japan's prime minister, Sanae Takaichi, and was refused. The San Francisco company says that brief exchange exposed a much larger, highly organized influence operation and led it to ban the account involved.
OpenAI's new threat report
The episode appears in OpenAI's latest threat-report update, published yesterday, which walks through real-world case studies of people trying to twist its generative models toward fraud, coercion and political manipulation, according to Business Insider.
ChatGPT refused and the account was banned
In this case, a user who OpenAI says was associated with Chinese law enforcement asked ChatGPT to draft and then refine a smear campaign targeting Takaichi. The model declined to help. OpenAI says it then traced additional activity connected to the same project and ultimately shut down the account, as reported by Bloomberg.
Report: 'large-scale' and 'industrialized'
OpenAI's write-up describes the broader operation as large-scale, resource-intensive and sustained, with a playbook that combined human operators, thousands of fake accounts and locally deployed AI models. Ben Nimmo, the principal investigator on OpenAI's investigations team, told reporters the pattern looked industrialized and geared toward silencing critics, per OpenAI.
How investigators traced the campaign
According to OpenAI, the prompts in question laid out specific tactics: posting and amplifying negative comments, sending bogus "foreign resident" complaints to Japanese officials and seeding hashtags and images across platforms. Investigators say they were able to link those instructions to content that later appeared on X, Pixiv and Blogspot in late October 2025. When ChatGPT refused to participate, the actor appeared to pivot to locally hosted models and other tools instead, Axios reports.
Other scams and impersonations flagged
The Japan-focused operation is only one example in the update. OpenAI also details romance-scam networks and clusters of accounts posing as law firms or U.S. officials to defraud victims, including requests to generate fake New York State Bar membership images. Those cases, and their mechanics, are laid out in the company's report and follow-on coverage, per Reuters.
Why it matters for San Francisco and the industry
The incident highlights a growing headache for San Francisco-based AI firms: systems built to be broadly useful for everyday users can also be probed and repurposed for coercion, fraud and even transnational repression. OpenAI says its response now leans heavily on public threat reporting, rapid account bans and cooperation with other platforms to blunt evolving misuse, according to Bloomberg.