On May 7, police in Gansu province announced on WeChat that they had arrested a suspect surnamed Hong for using ChatGPT to fabricate a news story claiming a train accident in the province had killed nine people. It is the first arrest in China of a ChatGPT user for spreading fake news.
The case came to light after the fabricated story was posted on April 25 to 21 accounts on the Baijiahao social media platform, where it garnered more than 15,000 views. Hong reportedly fed sensational stories into ChatGPT to generate the content, used multiple accounts to bypass censorship, and then shared the fake news to attract views.
Although the use of ChatGPT is banned in China, some users still access it through VPNs. Hong was arrested on charges of “disturbing order” and faces up to five years in prison.
The Chinese government has been tightening its regulation of AI, including ChatGPT, and in January issued rules governing "deep synthesis technology." The rules require users' consent before their images can be used in deep synthesis applications, prohibit the use of AI services such as ChatGPT to spread fake news, and require real-identity verification for deepfake services.
China is also backing domestic companies developing their own AI models. Baidu announced its Ernie Bot chatbot on March 16, and Alibaba unveiled Tongyi Qianwen, a ChatGPT-like model, on April 7.
However, analysts believe China is unlikely to win the race for advanced AI, citing its limited experience and technical expertise as well as US restrictions on AI chips. Research firm Third Bridge estimates it may take China another two to three years to develop an AI with 80% of ChatGPT's functionality, according to CNN.