Moderation: Klaus Wiedemann
To eliminate AI content risks, the Chinese government has embraced a zero-risk policy, culminating in a sweeping list of prohibited outputs in Article 4 of its Interim Measures. Jiawei critiques this approach, placing large language models (LLMs) within the broader information ecosystem and exposing fundamental flaws in the policy's design. First, fully de-risking LLMs is unattainable: LLMs merely replicate problems that already exist in the information marketplace, and because that ecosystem as a whole is far from perfect, expecting LLMs to function as zero-risk engines is unrealistic. Second, Jiawei argues that the zero-risk approach is unnecessary. An AI chatbot's output competes not only with that of other chatbots but also with other information outlets; market forces therefore have great potential to incentivize AI companies to improve their output and de-risk their systems automatically, voluntarily, and continuously. Using Article 4 of the Interim Measures as a case study, Jiawei further illustrates how to design a scientific framework for governing AI risks without setting unrealistic standards or imposing onerous duties on AI companies.