Chinese Authorities Launch Crackdown on Criminal Networks Exploiting ChatGPT for Sophisticated Fraud
Chinese police target criminal rings using ChatGPT for automated scams, highlighting the rise of AI-powered fraud despite regional bans on the software.
By: AXL Media
Published: Feb 25, 2026, 5:28 AM EST
Source: The Register

Automated Deception Overcomes Linguistic Barriers
China's Ministry of Public Security has identified a disturbing trend in which criminal syndicates leverage OpenAI's ChatGPT to craft highly convincing fraudulent communications. By exploiting the model's advanced natural language capabilities, bad actors can produce error-free, persuasive messages that mimic official government notices or legitimate corporate correspondence. According to investigators, this automation allows small criminal groups to scale their operations significantly, targeting thousands of victims simultaneously with personalized content that was previously too time-consuming to generate manually.
Bypassing Geographical Restrictions Through Digital Proxies
Although ChatGPT is not officially available for registration in mainland China, underground networks have established a thriving market for illicit access. These groups use virtual private networks, foreign phone numbers, and specialized API proxies to tunnel into the service. According to cybersecurity reports, these "grey market" access points are often bundled with automated scripts built for criminal use, such as generating malicious code or writing scripts for social engineering scams, bypassing the safety filters intended to prevent such applications.
Sophisticated Phishing and Identity Theft Tactics
Authorities have detailed several cases where AI was used to create deceptive "deepfake" textual identities. In these schemes, the AI generates a consistent, believable backstory for a fraudulent persona, which is then used to build trust with victims over several weeks before soliciting funds. According to law enforcement officials, the consistency and emotional intelligence displayed by AI-generated personas make it increasingly difficult for the average citizen to distinguish a genuine interaction from an automated scam, leading to a higher success rate for financial theft.
Related Coverage
- Wells Fargo Issues Urgent Warning as Generative AI Erases the Visual Markers of Fraud
- Chinese Judicial Reports Reveal 158 Percent Surge in Cybersecurity Crimes Amid Aggressive Crackdown on Digital Misconduct
- ACM TechBrief Warns of Security and Reliability Risks in Rapidly Rising Vibe Coding Trend
- The False Dichotomy of Cybercom 2.0: Experts Argue New Reforms Must Pave the Way for an Independent Cyber Force