On April 11, 2023, China published its draft regulation on ChatGPT-type artificial intelligence chatbots. ChatGPT is unavailable in China, but Alibaba and Baidu are proposing their own versions of similar services. As usual in China, regulations are first published as drafts and citizens are invited to send comments, but this is mostly propaganda, and comments criticizing the substance of the measures are never taken into account.
Around the world, ChatGPT has become extremely fashionable. If you are a college student, it can write an essay for you, and even some journalists use it to write articles. Others have noted that ChatGPT and its AI competitors produce texts that are full of mistakes, and may expose their users to security risks. Some countries, such as Italy, have provisionally banned ChatGPT, although obviously students and others can still use it through a VPN.
The potential danger of ChatGPT-type chatbots could not have escaped the CCP, as there is the risk that artificial intelligence would generate texts critical of the Party. The reaction has been quick, and the State Internet Information Office (the Cyberspace Administration of China) has drafted “Generative Artificial Intelligence (AI) Services Management Measures.”
Essentially, the measures say that AI platforms are authorized to deliver to Chinese users only content that “should reflect the core values of socialism, shall not contain subversion of the state power, overthrow the socialist system, incite to split the country, undermine national unity, promote terrorism, extremism, ethnic hatred, ethnic discrimination, violence, obscenity, pornography, false information, and disrupt the economic order and the social order.” The list is long, and it means that any criticism of the CCP or promotion of dissent is prohibited.
Artificial intelligence is not a human being, but the officers of the companies offering these services certainly are. China has adopted the principle of “the responsibility of the producer for the content generated by the product,” even if no human intervenes in the production of such content.
In practice, this means that companies in China must share their algorithms with the authorities, undergo a security review, and disclose which sources they use and how they use them. They may be required to disclose to the authorities the names of the clients who use their services, and must ban customers who ask questions or request texts not in line with the “core values of socialism” or the interests of the state or the CCP.
They will also have to double-check the content generated by the artificial intelligence on sensitive subjects (something they would not do in other countries), because if the texts produced are deemed objectionable, after a first warning they may be banned from operating their chatbots altogether.
This seems to be another chapter in the CCP’s struggle, whose importance is constantly emphasized by Xi Jinping himself, to control the Internet and technology. The problem is that certain technologies are inherently uncontrollable, and despite warnings that they may be committing a crime, millions of young Chinese use VPNs and try to elude surveillance. AI text-generating chatbots have real problems, but the ideological-police approach of the CCP will, as usual, merely irritate netizens, particularly the younger among them, and may in the end not work.
In the meantime, if you are a student in China, Alibaba’s or Baidu’s AI-powered chatbots may write a term paper for you—but it will be one glorifying the Party and Xi Jinping.