Today’s newsletter is brought to you by International Intrigue, a newsletter delivering daily geopolitical news and insights straight to your inbox. Founded by former diplomats with years of on-the-ground experience in China, International Intrigue offers a sharp daily update on the most important global stories in less than five minutes a day.
Unsurprisingly, I spend most of my time myopically focused on China — but am also really curious about other stuff happening around the world! International Intrigue helps me keep up to date in an efficient and entertaining way, providing me the context I need on stories like Venezuela’s moves to annex Guyana’s territory, Argentina’s elections, India’s assassination plot in the US, and more.
This piece was authored by Ciel Qi, a researcher specializing in emerging technology, security, and US-China relations.
In a recent article on ChinaTalk, anonymous contributor L-squared illustrated the detrimental impact of China’s stringent AI regulations on Chinese firms: “the scope of regulatory targets is actually wider than expected, and Chinese AI diffusion is being seriously compromised by confused and overbearing regulatory action.” Echoing that article’s sentiment, I put forward three arguments on how China’s regulatory framework could stall its domestic generative AI development:
Its expansive content controls could impair AI model performance,
The excessive responsibility it places on AI providers could overburden them,
And its limitations on providers and bans on generating certain kinds of content could deter users from engaging with Chinese AI models.
The Interim Measures for the Management of Generative Artificial Intelligence Services 生成式人工智能服务管理暂行办法 (“the Interim Measures”) are a key part of China’s generative AI regulatory landscape. Drafted in April of this year, they came into effect in August, spelling out the responsibilities of providers offering generative AI services to the Chinese public. Currently, these providers are predominantly Chinese tech giants like Baidu (the provider of Ernie Bot 文心一言) and Alibaba (the provider of Tongyi Qianwen 通义千问). And the Interim Measures are supported by a recently released technical policy document called the Basic Security Requirements for Generative Artificial Intelligence Service 生成式人工智能服务安全基本要求 (“the Basic Security Requirements”).
True, the Interim Measures and the Basic Security Requirements express the Chinese government’s encouragement for developing generative AI — but a deeper examination reveals their potential to stall that development.
Expansive Content Control Impairs Model Performance
Content control is a central aspect of China’s generative AI regulations. The Interim Measures stipulate that the content generated by generative AI must adhere to the “core values of socialism” 坚持社会主义核心价值观. Content is prohibited if it promotes “the subversion of state authority” 煽动颠覆国家政权 or the “overthrow of the socialist system” 推翻社会主义制度, or if it “threatens or compromises national security and interests” 危害国家安全和利益. Further, generative AI must not generate “false or harmful information, or any content that is outlawed by legal and administrative frameworks” 虚假有害信息等法律、行政法规禁止的内容 (Article 4.1).
Given that the Chinese government considers a wide spectrum of content “false or harmful,” adhering to these regulations requires China-based generative AI service providers to censor such content, regardless of the technical feasibility of doing so. Moreover, implementing a censorship mechanism during the training phase (which could be required by the Basic Security Requirements, discussed in the following section) could significantly narrow the breadth and diversity of the data a generative AI model is trained on. Since a model’s robustness and effectiveness largely depend on a wide and diverse set of training data, a generative AI model trained under extensive content regulation is likely to exhibit compromised performance, including, as L-squared also suggested, reduced “helpfulness and honesty.”
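To make the mechanism concrete, here is a minimal sketch of a training-phase content filter. The blacklist terms and toy corpus are my own invented stand-ins, not anything drawn from the regulations; a real provider would likely pair keyword lists with trained classifiers:

```python
# Hypothetical illustration of a training-phase content filter. The
# blacklist terms and corpus below are invented for this sketch.

BLACKLIST = {"subversion", "overthrow"}  # stand-ins for prohibited topics

def passes_filter(document: str) -> bool:
    """Reject any document containing a blacklisted term."""
    text = document.lower()
    return not any(term in text for term in BLACKLIST)

corpus = [
    "A survey of twentieth-century political movements.",
    "A recipe for scallion pancakes.",
    "An essay on the overthrow of a fictional government.",
]

filtered = [doc for doc in corpus if passes_filter(doc)]
print(f"kept {len(filtered)} of {len(corpus)} documents")
# The broader the blacklist, the more topical coverage the surviving
# corpus loses -- which is the performance concern described above.
```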
Excessive Responsibility Overburdens Generative AI Providers
China’s regulations on generative AI impose a significant burden on providers. A key requirement is for them to file algorithms with the Cyberspace Administration of China or its local branches within ten days of launching their generative AI service. As stipulated in the 2022 Administrative Provisions on Algorithm Recommendation for Internet Information Services 互联网信息服务算法推荐管理规定, the Interim Measures mandate this filing obligation only when a provided service possesses “public opinion attributes or social mobilization capabilities” 舆论属性或者社会动员能力. But since the Chinese government could interpret this condition broadly, any generative AI provider catering to the public will likely be obligated to fulfill the requirement. While big Chinese tech companies may find compliance straightforward, smaller entities and startups (which L-squared estimates make up 44.2% of the Chinese companies that have filed algorithms) may find the process burdensome. And the need to prepare and submit the required documentation shifts companies’ focus away from other vital post-launch tasks, such as fixing bugs and releasing new iterations.
The Interim Measures also demand, without specifics, that generative AI providers conduct a security review 安全评估 (Article 17). Even so, some insight into what is expected can be gleaned from the recently released draft of the Basic Security Requirements. That document requires providers to carry out a security assessment of their training data and to blacklist any dataset containing more than five percent “illegal and harmful information” 违法不良信息. While the Interim Measures require providers to offer necessary training to data annotators (Article 8), the Basic Security Requirements go a step further by requiring providers to regularly evaluate their annotators and certify those who meet the qualifications (Article 8.1). Moreover, providers whose services extend to minors must implement measures that prevent addiction and filter out content inappropriate for minors (Article 7.a.3-4). Before the public launch of a generative AI service (Article 4) and after major updates (Article 7.g.2), the Basic Security Requirements also demand a security review evaluating the security of training data and generated content, with the findings to be submitted to Chinese authorities.
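For illustration, here is a rough sketch of the arithmetic behind that five-percent threshold. The sampling procedure and the is_illegal classifier are assumptions for this sketch, not the method the Basic Security Requirements actually prescribe:

```python
import random

# Hypothetical compliance check for the five-percent rule. The sampling
# approach and is_illegal() are assumptions, not the document's procedure.

ILLEGAL_RATE_LIMIT = 0.05  # datasets above this fraction get blacklisted

def estimate_illegal_fraction(dataset, is_illegal, sample_size=400):
    """Estimate the share of flagged items via a random sample."""
    if not dataset:
        return 0.0
    sample = random.sample(dataset, min(sample_size, len(dataset)))
    flagged = sum(1 for item in sample if is_illegal(item))
    return flagged / len(sample)

def should_blacklist(dataset, is_illegal) -> bool:
    """Blacklist a source dataset if its flagged share exceeds 5%."""
    return estimate_illegal_fraction(dataset, is_illegal) > ILLEGAL_RATE_LIMIT
```

A provider would presumably run a check like this on each source dataset before training, which is itself another compliance cost.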
Even for well-resourced big Chinese tech companies, ensuring compliance with such requirements will incur additional costs, whether in human capital or financial resources. For example, data annotation in China is commonly outsourced to third parties, but the requirements mean providers must now invest additional resources in training, evaluating, and certifying data annotators in-house. For small- and medium-sized companies, compliance could drain already limited resources.
Potential User Disengagement
China’s framework for regulating generative AI could also deter user engagement. While, as noted above, the regulations primarily delineate the responsibilities of providers, the Interim Measures also mandate that users of generative AI services must not generate false or harmful information. And service providers still don’t escape here: per Article 14, they are required to report to Chinese authorities if they discover users engaging in illegal activities with their generative AI services.
Even a highly fine-tuned model might inadvertently generate content deemed harmful, especially given the Chinese government’s broad categorization of harmful and illegal information. That knowledge may discourage users from utilizing generative AI services: they might fear accidentally generating information that would put them at risk. Moreover, providers might tighten access to their APIs to preempt any possible misuse. Even before the release of the Interim Measures, some Chinese companies already had relatively stringent screening processes for prospective users in place; with more companies implementing heightened access restrictions, even more potential users will likely be deterred.
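What might that provider-side caution look like in practice? Here is a hypothetical sketch; the flags_harmful check and the reporting step are invented for illustration and are not any company’s actual pipeline:

```python
# Hypothetical wrapper a provider might put around its generation API.
# flags_harmful() and the reporting step are invented for illustration.

def flags_harmful(text: str) -> bool:
    """Stand-in for a provider's content-moderation classifier."""
    return "harmful" in text.lower()  # placeholder logic only

def generate_with_compliance(model, user_id: str, prompt: str) -> str:
    output = model.generate(prompt)
    if flags_harmful(prompt) or flags_harmful(output):
        # Article 14 obliges providers to report users suspected of
        # illegal activity; here we only record the incident locally.
        incident = {"user": user_id, "prompt": prompt}
        print("flagged for review:", incident)
        return "[content withheld pending review]"
    return output
```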
The potential unwillingness of users to utilize domestically provided generative AI services, whether from content-control concerns or restricted access, might drive them to non-Chinese generative AI services. A dwindling user base for China’s local generative AI services would in turn hit the revenues of the companies behind them. Although the Chinese government might offset these financial drawbacks through research funding, ensuring continued innovation despite suboptimal revenues, China’s domestic generative AI could still lag behind international competitors without the public testing and feedback needed to improve a model.
Conclusion and Considerations
The rapid development of generative AI, coupled with the potential risks posed by this technology, underscores the need for regulations. While various governments are examining and evaluating the most viable regulatory framework for generative AI, China has taken a forward-leaning stance by setting out — and in some instances implementing — generative AI regulations. Nevertheless, by focusing on expansive content control and placing excessive responsibility on service providers, China risks stalling its domestic generative AI development.
To be sure, this slowdown is not inevitable. After all, the Interim Measures are called “interim,” suggesting a possible willingness to amend them down the road. Moreover, as the Basic Security Requirements have just concluded the process of soliciting public opinion, it’s possible they will be moderated in the final policy version (likely facilitated, as L-squared points out, by “pushback from [Chinese] industry”). And even if the original form of both regulations remains unchanged — in which case China’s generative AI is likely to be significantly constrained — China’s regulatory framework could still offer other countries interesting angles to consider in shaping their own regulatory frameworks, such as regulating the data annotation industry, preventing addiction among minors, and enhancing transparency in generative AI services.