December 2025
Compliance Guidelines for Artificial Intelligence Technology Applications (I) – Overview of AI Compliance Framework (Mainland China)
In recent years, the application of artificial intelligence (AI) technology has developed rapidly around the world, transforming human life and work at an unprecedented speed, breadth, and depth. It has also had a significant and far-reaching impact on global economic development, technological innovation, public welfare, and the international political landscape. In China, with the implementation of the “State Council’s Opinions on Deepening the Implementation of the ‘AI+’ Action,” the country’s AI industry has reached a new level. According to reports, China has established a comprehensive AI industry system covering the foundational layer, framework layer, model layer, and application layer. The industry has exceeded RMB 900 billion in scale, with over 6,000 enterprises, placing China among the global leaders and highlighting its growing competitive advantages. [1]
However, it is important to note that the governance framework for the secure deployment of AI technologies is not yet fully settled. For instance, judicial practice has not yet established a unified standard for what constitutes infringement or otherwise illegal conduct in the use of AI technologies, and many companies lack a clear understanding of the applicable compliance requirements. It is therefore worth reminding industry practitioners of the legal risks involved so that they can navigate the wave of AI development more steadily and sustainably.
Overall, the application of AI technology involves multiple compliance challenges. Legal risks may arise at various stages, including algorithm design, model usage and training, data processing, and content generation. This series of articles aims to explore these aspects in depth, hoping to provide useful insights for practitioners.
In terms of algorithms, relevant laws and regulations include the Cybersecurity Law of the People’s Republic of China, the Administrative Measures for Internet Information Services, the Guiding Opinions on Strengthening the Comprehensive Governance of Algorithms Related to Internet Information Services, the Administrative Provisions on Algorithmic Recommendation for Internet Information Services, and the Administrative Provisions on Deep Synthesis of Internet Information Services. From a compliance perspective, service providers using algorithmic recommendations must meet at least three obligations: 1) assume primary responsibility for security; 2) manage information security; and 3) protect user rights. These points will be elaborated on in future articles.
Regarding data protection, the key laws and regulations include the Data Security Law, the Personal Information Protection Law, and the Interim Measures for the Administration of Generative Artificial Intelligence Services. Model training in current AI applications typically involves data processing, which in turn covers data collection and transmission. With respect to data collection, issues such as the legality of data sources and whether proper consent has been obtained are critical considerations for AI compliance. These topics will also be explored in subsequent articles.
As for AI-generated content, the main legal issues at present are whether such content is eligible for copyright protection and whether it may constitute infringement. A further question is whether AI-generated content is itself lawful, which requires examining both the legality of the input data and the lawfulness of the output content. In China, the relevant rules include the Interim Measures for the Administration of Generative Artificial Intelligence Services and the Basic Security Requirements for Generative Artificial Intelligence Services, with reference also to the Copyright Law, the Anti-Unfair Competition Law, and the Civil Code. These issues will be discussed in later articles.
From the perspective of legal liability, violations involving AI may result in civil, administrative, or criminal liability. For example, the Shanghai Cyberspace Administration recently launched the “Liangjian Pujiang·2025” enforcement campaign, targeting AI misuse as a key focus of annual governance. During the campaign, 54 non-compliant apps were removed from app stores, 26 websites were inspected, 3 websites that refused to rectify violations were penalized, and 5 first-time offenders were instructed to remove non-compliant features and complete registration procedures. [2]
In conclusion, while companies should seize opportunities in development, they must also pay close attention to associated legal risks. Although the application of AI technology presents new challenges, the legal relationships involved can still be interpreted within the existing legal framework. Therefore, companies must not overlook compliance issues in their pursuit of rapid growth.
[1] 15th Five-Year Plan Series | Wei Kai of CAICT: Embarking on a New Journey under the 15th Five-Year Plan and Writing a New Chapter for AI Development
https://mp.weixin.qq.com/s/nPXUAi9Cdkf8wN2HdfcipA
[2] “Liangjian Pujiang” | Cracking Down on Misuse, Safeguarding Development: Shanghai Cyberspace Administration Launches Special Enforcement Campaign Against “AI Abuse.”
https://mp.weixin.qq.com/s/rO8DnpkrDlCqVyigrM0CAg
The contents of all newsletters of Shanghai Lee, Tsai & Partners (Content) available on the webpage belong to and remain with Shanghai Lee, Tsai & Partners. All rights are reserved by Shanghai Lee, Tsai & Partners, and the Content may not be reproduced, downloaded, disseminated, published, or transferred in any form or by any means, except with the prior permission of Shanghai Lee, Tsai & Partners.
The Content is for informational purposes only and is not offered as legal or professional advice on any particular issue or case. The Content may not reflect the most current legal and regulatory developments. Shanghai Lee, Tsai & Partners and the editors do not guarantee the accuracy of the Content and expressly disclaim any and all liability to any person in respect of the consequences of anything done or permitted to be done or omitted to be done wholly or partly in reliance upon the whole or any part of the Content. The contributing authors' opinions do not represent the position of Shanghai Lee, Tsai & Partners. If the reader has any suggestions or questions, please do not hesitate to contact Shanghai Lee, Tsai & Partners.


