July 2023
The following summary is not legal advice, but it may help you decide whether you need to talk to your lawyers and update your AI Governance framework. (AI Governance refers to the policies, processes and tools that support the responsible and effective use of AI and AI-enabled products throughout their lifecycle.) Please note that regulatory frameworks are developing rapidly, and you should always seek up-to-date legal advice on your obligations in each jurisdiction.
***************************************************************
The PRC has enacted several laws and regulations relevant to AI in the last few years, including its Cybersecurity Law (2017), Security Assessment Requirements for social media services (2018), Data Security Law (2021), Personal Information Protection Law (2021) (resembling the GDPR), and a mandatory Registration System (2022) for recommendation algorithms.
In January 2023, it issued specific regulations governing deep synthesis technologies (aka “deep fake” technologies). Amongst other things, they provide that new deep synthesis products require a governmental security assessment before being approved for release; AI-generated content must carry clear labels, such as watermarks, indicating its provenance; and users of the technology must register for accounts under their real names, so their activity is traceable. In addition to the usual end-user protections, the deep synthesis regulations specify that AI may not be used to generate output that endangers national security, disturbs economic or social order, or harms China’s image.
Here we review the most recent regulation published by the Cyberspace Administration of China (CAC), the Interim Measures for the Management of Generative Artificial Intelligence Services (GAI Measures).
Process: A draft of the GAI Measures was issued for public comment on 11 April 2023. Submissions closed one month later in May, and the final GAI Measures were published on 13 July 2023. They take effect on 15 August 2023. Compared to the initial draft, the final GAI Measures appear to have relaxed the obligations imposed on developers of generative AI, in order to preserve innovation.
Overview: Providers of generative AI services are required to abide by laws and administrative regulations, respect social morality and ethics, and comply with the following five principles: 1) adhere to the socialist core values; 2) take effective measures to prevent discrimination when designing and training algorithms and models and providing generative AI services; 3) respect intellectual property rights and business ethics; 4) respect the legitimate rights and interests of others, which means not endangering the physical and mental health of others, and respecting their portrait rights, reputation rights, honour rights, privacy rights, and personal information rights; and 5) take effective measures to improve the transparency of generative AI services and the accuracy and reliability of generated content.
Governance provisions encourage the independent innovation of generative AI algorithms, frameworks, chips, and supporting software platforms, and refer to international cooperation in the formulation of international rules related to generative AI (Article 6).
Providers have a number of specific obligations (Articles 9-15). Key requirements include meeting network security requirements, establishing service agreements with users, monitoring user content for illegality, preventing minors from excessive reliance on generative AI, protecting user information and correcting or deleting it on request, marking AI-generated content in accordance with deep synthesis regulations, and establishing a complaints mechanism for users.
Who will be impacted: The GAI Measures apply to actors using generative AI to provide services to the public within the PRC. Organisations that do not offer services to the public are exempt (Article 2). This definition captures services originating outside the PRC but provided to the Chinese public, though it is unclear whether it captures developers as well as providers.
How it will be enforced: Existing industry authorities are charged with the supervision and inspection of generative AI services within their fields of competence. They are required to formulate classifications, and issue supervisory rules or guidelines relevant to the context of their industry (Article 16). Providers are expected to cooperate and explain the source, scale, type and labelling rules of training data and the algorithm as required, and provide necessary technical and data support and assistance (Article 19). If generative AI services originating from outside the PRC are deemed to be non-compliant with the GAI Measures, the relevant institutions are authorised to take technical measures “and other necessary measures” to deal with them (Article 20).
Penalties are levied in accordance with existing laws. If a breach of the GAI Measures constitutes a violation of public security management, it shall be punished according to law; if it constitutes a crime, it shall be investigated for criminal responsibility (Article 21).
Timing: The GAI Measures take effect 15 August 2023.
Guidance: None available at this stage.
Our comment: In some ways China’s approach is similar to that of the EU: hard law, designed to bake values into technology during its development, before it is released to the public. It goes somewhat further in emphasising the need for political alignment of generative AI with socialist core values and national security. In this regard, the GAI Measures may form the basis for banning services using Western models – but they also increase the compliance burden on Chinese developers. The latter must now take extra care in selecting the data used to train their algorithms, and in ensuring the political alignment of their models through additional guardrails and training techniques, if they want approval for release to the public.
In other ways, China’s approach resembles that of the UK – it leverages existing regulators for enforcement and introduces sectoral contextualisation.
The speed with which China finalised its GAI Measures is impressive, and was made possible by building on laws and processes (security assessments, registration systems) already in existence. However, commentators are uncertain whether the CAC has the capacity to implement and enforce these regulations.
Thanks for checking out our business articles. If you want to learn more, feel free to reach out to Red Marble AI. You can click on the "Let's Talk" button on our website or email Bronwyn, our AI Governance expert, at b.ross@redmarble.ai.
We appreciate your interest and look forward to sharing more with you!