April 2026

Compliance with Personal Data Regulations in Enterprise AI Systems: Recent Regulatory Developments and Supervisory Trends (Taiwan)

With the rapid development of generative artificial intelligence (AI), enterprises have increasingly integrated AI into a wide range of operational scenarios. Given that AI systems heavily rely on data and often involve the collection, analysis, and reuse of personal data, their deployment is inevitably subject to personal data protection regulations. Compared with traditional data processing models, AI is characterized by repeated data use and automated decision-making, posing new challenges to existing data protection compliance frameworks. This article analyzes personal data risks in AI systems and reviews recent regulatory developments and key compliance considerations for enterprises.

I. Personal Data Risks in AI Systems

Understanding personal data risks in AI systems requires a lifecycle-based approach.

First, in the data collection stage, enterprises may obtain data from existing databases, user inputs, or third-party sources. At this stage, it is essential to ensure that the purpose of collection is specific and clearly defined, and that the required notice obligations have been fulfilled. If the original purpose of collection does not encompass AI training, subsequent reuse may give rise to compliance risks.

Second, in the model training and data utilization stage, AI systems typically rely on the reuse of existing data. The key issue is whether such use remains within the scope of the original purpose and whether the enterprise has an appropriate legal basis (such as data subject consent or legitimate interest). This issue is particularly significant in the context of generative AI training.

Finally, in the model output and application stage, AI systems may exhibit memorization effects, generating outputs that contain personal data or even infer sensitive information. In certain cases, such outputs may enable the identification of specific individuals, potentially leading to the characterization of AI system operations as ongoing personal data processing.

To address these risks, regulatory authorities have issued practical guidance. For example, the Commission Nationale de l'Informatique et des Libertés (CNIL) emphasizes a “lifecycle compliance” approach for AI systems and the implementation of privacy by design. This includes clearly defining processing purposes at an early stage, identifying appropriate legal bases (such as balancing tests for legitimate interests), ensuring data minimization and transparency, conducting Data Protection Impact Assessments (DPIAs) where necessary, and safeguarding data subject rights such as access, rectification, and erasure [1].

II. EU Digital Omnibus: Regulatory Coordination and Rebalancing

The European Union has established a comprehensive framework for digital governance. At its core is the General Data Protection Regulation (GDPR), complemented by the Data Governance Act (DGA), the Data Act, and the Artificial Intelligence Act (AI Act), which address data sharing, data access and use, and AI risk management, respectively.

However, as these regulations are progressively implemented, overlapping scopes, duplicative obligations, and regulatory conflicts have become increasingly apparent. In response, the European Commission released the “Digital Package” [2] in November 2025, introducing proposals including the “Digital Omnibus” and “AI Omnibus.” These initiatives aim to streamline and align existing regulatory frameworks through horizontal amendments and simplification measures, with the goals of reducing compliance burdens, enhancing regulatory consistency, and promoting innovation.

Key policy directions include clarifying the relationships among different regulations, reducing duplicative obligations, and reassessing certain core concepts, such as the scope of pseudonymized data, to address the growing demand for data usability in AI development. At the same time, proposals to ease certain data use conditions and optimize mechanisms for exercising data subject rights are under discussion. While such measures may facilitate compliance, they also raise concerns about potentially weakening existing data protection standards.

Overall, the Digital Omnibus reflects a transition in EU digital regulation from system-building to regulatory coordination and rebalancing, requiring enterprises to navigate not only individual regulations but also their interactions.

III. Regulatory Developments in Taiwan: The AI Basic Act

In Taiwan, the Artificial Intelligence Basic Act was promulgated on January 14, 2025, establishing a foundational framework for AI governance. The Act adopts a risk-based approach and sets out key principles, including privacy and data protection, transparency, fairness, and accountability, aligning closely with the spirit of the EU AI Act. It also emphasizes a human-centric approach to AI, aiming to prevent adverse impacts on individual rights.

Although the Act currently provides high-level principles and awaits further implementing regulations, it clearly signals future regulatory directions, including risk classification, allocation of responsibilities for high-risk AI, and the development of supervisory mechanisms. Accordingly, enterprises deploying AI systems in Taiwan must not only comply with existing personal data laws but also proactively establish governance frameworks that anticipate future regulatory requirements.

IV. Corporate Practice: Key Measures for AI Compliance

In light of the above regulatory trends, enterprises should establish concrete and operational compliance mechanisms from both data governance and system design perspectives.

First, enterprises should conduct comprehensive data mapping and classification to identify the nature, source, and personal data implications of the data used in AI systems, and to determine the applicable legal framework. They should also clearly define the purposes of AI use and review existing privacy policies to ensure coverage of relevant data processing activities.
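The data-mapping exercise described above can be made concrete as a structured inventory that flags assets whose intended AI use may fall outside the original collection purpose. The following Python sketch is purely illustrative; the field names, categories, and the simple substring-based purpose check are assumptions, not a prescribed legal taxonomy, and any real inventory would need review by counsel.

```python
from dataclasses import dataclass

# Hypothetical data-inventory record for AI data mapping.
# All field names and example values are illustrative assumptions.

@dataclass
class DataAsset:
    name: str                    # e.g. "customer support transcripts"
    source: str                  # "internal", "user_input", "third_party"
    contains_personal_data: bool
    special_category: bool       # sensitive data needing stricter treatment
    collection_purpose: str      # purpose stated at the time of collection
    intended_ai_use: str         # e.g. "model training", "inference only"

    def needs_legal_review(self) -> bool:
        """Flag assets whose AI use may exceed the original purpose.

        A crude substring check stands in for a real purpose-compatibility
        assessment, which would be a legal judgment, not a string match.
        """
        return self.contains_personal_data and (
            self.intended_ai_use not in self.collection_purpose
            or self.special_category
        )

inventory = [
    DataAsset("support transcripts", "user_input", True, False,
              "customer service", "model training"),
    DataAsset("public product manuals", "internal", False, False,
              "documentation", "model training"),
]

flagged = [a.name for a in inventory if a.needs_legal_review()]
# "support transcripts" is flagged: it contains personal data and
# "model training" was not part of the stated collection purpose.
```

A record structure like this also makes it straightforward to check whether existing privacy policies cover each flagged processing activity.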

Second, during data utilization and model training, enterprises must carefully assess appropriate legal bases, such as whether legitimate interest can justify AI training, and conduct the necessary balancing tests. Privacy by design should be embedded into system development, incorporating data minimization, access controls, and information security measures from the outset.
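As a minimal sketch of the privacy-by-design and data-minimization measures mentioned above, the snippet below drops fields not needed for the stated training purpose and replaces a direct identifier with a keyed (HMAC) token before a record reaches a training pipeline. The field names and the key are hypothetical, and keyed hashing is only one of several pseudonymization techniques.

```python
import hashlib
import hmac

# Illustrative secret key; in practice it would be generated, rotated,
# and stored separately from the pseudonymized data.
SECRET_KEY = b"rotate-and-store-this-securely"

def pseudonymize(value: str) -> str:
    """Keyed hash: the same input maps to the same token, but the
    original value cannot be read back from the token without the key."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

def minimize(record: dict, allowed_fields: set[str]) -> dict:
    """Keep only fields needed for the stated purpose and
    pseudonymize the user identifier."""
    out = {k: v for k, v in record.items() if k in allowed_fields}
    if "user_id" in out:
        out["user_id"] = pseudonymize(out["user_id"])
    return out

raw = {
    "user_id": "alice@example.com",
    "query": "reset password",
    "home_address": "123 Example St",  # not needed for training: dropped
}
clean = minimize(raw, {"user_id", "query"})
```

Applying such filtering at the ingestion boundary, rather than inside the model code, is one way to embed minimization "from the outset" as the paragraph above suggests.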

Additionally, when using external AI services or outsourcing model training, enterprises should clearly allocate roles and responsibilities through contractual arrangements and ensure that vendors comply with applicable legal and cybersecurity requirements.

Overall, enterprises should establish cross-functional AI governance structures involving legal, cybersecurity, and technical teams to ensure consistent and verifiable compliance throughout the AI system lifecycle.

V. Conclusion

AI regulation is rapidly evolving from a single-regulation model centered on data protection toward an integrated governance framework. The key challenge for enterprises is no longer compliance with individual regulations, but the ability to maintain consistent and verifiable compliance across multiple regulatory regimes. AI development is not only a technological competition but also a competition in governance capability. An enterprise’s competitiveness in AI will increasingly depend on its overall capacity for regulatory compliance and governance.
[1] https://www.cnil.fr/en/ai-system-development-cnils-recommendations-to-comply-gdpr
[2] https://digital-strategy.ec.europa.eu/en/faqs/digital-package

The contents of all materials (Content) available on the website belong to and remain with Lee, Tsai & Partners.  All rights are reserved by Lee, Tsai & Partners, and the Content may not be reproduced, downloaded, disseminated, published, or transferred in any form or by any means, except with the prior permission of Lee, Tsai & Partners.  The Content is for informational purposes only and is not offered as legal or professional advice on any particular issue or case.  The Content may not reflect the most current legal and regulatory developments.

Lee, Tsai & Partners and the editors do not guarantee the accuracy of the Content and expressly disclaim any and all liability to any person in respect of the consequences of anything done or permitted to be done or omitted to be done wholly or partly in reliance upon the whole or any part of the Content. The contributing authors’ opinions do not represent the position of Lee, Tsai & Partners. If the reader has any suggestions or questions, please do not hesitate to contact Lee, Tsai & Partners.