Draft EU AI Regulation (Taiwan)

Hannah Kuo and Jaime Cheng[1]

On April 21, 2021, the European Commission released a draft proposal for the Artificial Intelligence Act[2] (hereinafter, the “Draft AIA”), a continuation of the strategies proposed in the EU’s White Paper on Artificial Intelligence released in February 2020.  The Draft AIA proposes that the EU establish a set of human-centric AI regulations to encourage the research and application of safe, trustworthy and ethical AI, thereby fostering an ecosystem of trust in AI and promoting the development of AI.  The Draft AIA proposed by the European Commission is outlined below:

1. Definition of AI systems

“AI system” is broadly defined in the Draft AIA.  It includes any software that is developed with one or more of the machine learning, logic- and knowledge-based, or statistical techniques and approaches[3] listed in Annex I of the Draft AIA and that can, for a given set of human-defined objectives, generate outputs (e.g., content, predictions, recommendations, or decisions) influencing the environment it interacts with.

2. Regulated Entities and the Extraterritoriality of the Draft AIA:

The entities regulated by the Draft AIA cover a variety of participants in the AI value chain, including providers, product manufacturers, importers, distributors and users, with the bulk of the obligations imposed on the “providers” and “users” of AI systems.  A “provider” is a natural or legal person (including a public authority) that develops an AI system, or has one developed, with a view to placing it on the EU market or putting it into service in the EU under its own name or trademark, whether for payment or free of charge.  A “user” is any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.

According to the proposal, the Draft AIA will apply to: (1) providers that place AI systems on the market or put AI systems into service in the EU, regardless of whether the providers are established in the EU or in a third country; (2) users of AI systems located within the EU; and (3) providers or users of AI systems located in a third country, where the output produced by the system is used in the EU.  The extraterritorial effect of the Draft AIA therefore applies not only to system providers outside the EU, but also covers scenarios where the AI system itself is not available within the EU and the user is not located in the EU, so long as the outputs generated by the system are used within the EU.

3. Regulation of AI systems:

The EU follows a risk-based approach and identifies four levels of risk created by the use of AI systems: (1) unacceptable risk (i.e., AI whose use is prohibited), (2) high risk, (3) limited risk, and (4) minimal risk.  AI systems at different risk levels are subject to different requirements, and the majority of the Draft AIA is dedicated to regulating the use of AI that creates “unacceptable risk” and “high risk.”

(1) Unacceptable risk (Prohibited AI):

The Draft AIA lists four prohibited uses of AI that should be banned altogether because they pose unacceptable risks and violate EU values and fundamental rights:

(a) Subliminal techniques:

AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes (or is likely to cause) that person or another person physical or psychological harm.

(b) Exploitation of vulnerabilities of the disadvantaged:

AI systems that exploit any of the vulnerabilities of a specific group of persons due to their age or physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes (or is likely to cause) that person or another person physical or psychological harm, e.g., toys using voice assistance that encourage dangerous behaviour by minors.

(c) Social scoring:

AI systems used by public authorities to evaluate or classify the trustworthiness of natural persons based on their social behaviour or personal or personality characteristics, resulting in detrimental or unfavourable treatment of certain natural persons or whole groups.

(d) Biometric identification in public places for law enforcement:

Use of remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except where remote biometric identification is used for (1) the search for specific potential victims of crime (including missing children), (2) the prevention of a threat to the life or physical safety of natural persons or of a terrorist attack, or (3) the prosecution of a major criminal offence.  Any such use is subject to authorisation by a judicial or other independent body, is subject to appropriate limits in time, geographic reach and personal scope, and is permitted only in so far as strictly necessary.

(2) High-risk AI systems:

In addition to the above prohibited AI systems, the use of the following AI technologies, which may negatively impact personal safety or fundamental rights, is considered “high-risk” and subject to strict statutory obligations:

(a) Product or product safety components

High-risk AI systems are those that meet the following two conditions:

● the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the pre-existing EU legislation listed in Annex II (such as medical devices, machinery, aircraft, vehicles, ships, railways, children’s toys, etc.); and

● the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment before being placed on the market or put into service.

(b) AI systems in specific areas:

● Remote biometric identification system;

● Critical infrastructure (such as transportation, water, electricity and gas) safety components;

● Education and vocational training assessment and access determination (such as examination scores);

● Employment, workers management and access to self-employment (e.g. software used for resume classification in a recruitment process);

● Access to essential public and private services (such as credit scores and social aids);

● Law enforcement (such as polygraphs, evaluation of the reliability of evidence, and prediction of reoffending);

● Immigration, asylum, and border control (such as the authenticity verification of travel documents);

● Judicial administration and democratic process (e.g., application of laws to a concrete set of facts).

The mandatory statutory obligations with which high-risk AI systems must comply include:

● Establishment of a risk management system:

For high-risk AI systems, a risk management system should be established and maintained throughout the entire lifecycle of the AI system to identify and analyze risks and implement appropriate risk management measures.

● Data governance:

The data used to train high-risk AI systems should meet quality criteria, follow appropriate data governance and management practices, and be relevant, representative, free of errors and complete.

● Preparation of technical documentation:

Before a high-risk AI system is placed on the market or put into service, detailed technical documentation should be drawn up to provide information on the system and its purpose so that authorities can assess its compliance.

● Record-keeping:

High-risk AI systems shall be designed and developed with the capability of enabling the automatic recording of events (‘logs’). Those logging capabilities shall conform with recognised standards or common specifications.

● Transparency and provision of information to users:

A high-risk AI system shall be designed and developed in such a way that the users are able to interpret the system’s output and use it appropriately, and the system shall be accompanied by instructions for use that include concise, complete, correct and clear information.

● Proper human oversight:

A high-risk AI system shall be designed and developed with appropriate human-machine interface tools so that it can be effectively overseen by natural persons in order to minimise risks.

● Accuracy, robustness and cybersecurity:

A high-risk AI system shall be designed and developed with an appropriate level of accuracy and robustness against errors, faults or inconsistencies within and outside the system, and shall ensure the cybersecurity of the system, preventing attacks on the system.

Whether an AI system meets the above requirements is determined through a conformity assessment, which is to be conducted before the AI system is placed on the EU market.  The conformity assessment is partly handled by the existing bodies responsible for performing conformity assessments on products under existing EU product legislation and partly conducted through internal control checks by the providers, while the conformity assessment of biometric identification AI systems will be carried out by newly designated bodies.  High-risk AI systems should bear the CE marking to indicate their conformity with the Draft AIA so that they can move freely within the EU market.  In addition, high-risk AI systems that are not products under existing EU product legislation must be registered in the newly established EU AI database before being placed on the EU market.

Providers shall implement post-market monitoring to ensure that the AI system continues to meet the aforementioned requirements and notify national competent authorities in the event of any serious incident.

(3) AI systems with limited risks:

Limited-risk AI systems are AI systems subject to specific transparency obligations.  The use of AI systems intended to interact with natural persons (e.g., chatbots), emotion recognition systems, biometric categorisation systems, and deep fakes should be disclosed to users so that they can decide whether to continue using them.

(4) AI systems with minimal risks:

Minimal-risk AI systems are all other AI systems, which remain subject to existing regulations without additional obligations.  Most AI systems fall into this category, such as AI-enabled video games or spam filters.  The Draft AIA does not regulate these types of applications, although providers of such systems are encouraged to adhere to voluntary codes of conduct.

4. Governance of AI systems:

In terms of governance, a European Artificial Intelligence Board will be established to issue relevant opinions and guidelines to facilitate the implementation of the Draft AIA once it is passed and to promote the development of AI standards.  As the cost of compliance with the Draft AIA will be considerable for AI system operators, the Draft AIA also encourages Member States to establish AI regulatory sandboxes (i.e., controlled environments in which innovative technologies can be tested for a given period of time) and to provide support to SMEs and start-ups.

5. Penalties for violation of the Draft AIA:

The proposed fines for non-compliance with the Draft AIA are significant: a fine of up to €30 million or 6% of worldwide annual turnover for the preceding financial year (whichever is higher) applies to the use of prohibited AI systems or failure to comply with the data governance requirements; up to €20 million or 4% of worldwide annual turnover applies to failure to comply with other requirements; and up to €10 million or 2% of worldwide annual turnover applies to providing incorrect, incomplete or misleading information to notified bodies or competent authorities.
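The “whichever is higher” mechanism means the effective ceiling scales with company size.  A minimal sketch of that rule (the function name and the €2 billion turnover figure are illustrative assumptions; the tier amounts are those stated above):

```python
def max_fine(worldwide_turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Return the applicable fine ceiling: the fixed amount or the
    percentage of worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, worldwide_turnover_eur * pct_of_turnover)

# Top tier (prohibited AI / data governance breaches): €30 million or 6% of turnover.
# For a hypothetical company with €2 billion worldwide annual turnover,
# 6% of turnover (€120 million) exceeds the €30 million fixed amount.
ceiling = max_fine(2_000_000_000, 30_000_000, 0.06)
print(f"€{ceiling:,.0f}")  # €120,000,000
```

For smaller companies the fixed amount dominates: at €100 million turnover, 6% is only €6 million, so the €30 million figure applies instead.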

6. Conclusions: Impact of the Draft AIA on operators in Taiwan

The Draft AIA is still in the draft phase, and the legislative process is expected to take one to two years to complete.  The Draft AIA also proposes a transitional period of 24 months; thus, the final provisions are not expected to be formally implemented until three to five years from now.  Since the Draft AIA was proposed, many differing opinions on its contents have emerged.  Some argue that the definition of AI in the Draft AIA is too broad, that the areas covered by high-risk AI systems are too extensive and vague, and that too many items must be disclosed for registration, which puts pressure on the research, development and application of AI.  On the other hand, many private parties remain concerned about the use of biometric identification technologies: although the Draft AIA increases the regulation of biometric identification, it also provides for many exceptions, so the level of supervision may not be as strong as expected.  As much remains to be debated, the legislative process is expected to involve lobbying and debate by all sectors before the final version of the bill is passed.

Similar to the EU General Data Protection Regulation (GDPR), the Draft AIA will be promulgated in the form of an EU regulation, which will be directly applicable in all Member States, and its extraterritorial effect will extend to AI system product or service providers outside the EU, even where both the AI system providers and users are located outside the EU and only the outputs generated by the systems are used within the EU.  Moreover, as the Draft AIA integrates the existing product-related legislation in the EU, it imposes obligations not only on providers of AI products or services, but also on the manufacturers of certain products.  Therefore, companies in Taiwan that wish to place AI systems on the EU market under their own brand, whose AI systems may generate outputs used in the EU, or that manufacture regulated products incorporating AI systems (such as medical devices, machinery and children’s toys), should pay close attention to the legislative developments of the Draft AIA and make preparations before it takes effect in order to reduce the risk and subsequent costs of legal compliance.

[1] This article represents only the personal opinions of the authors and does not represent the position of this law firm.

[2] Proposal for a Regulation laying down harmonised rules on artificial intelligence

https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence

[3] The AI development techniques and approaches listed in Annex I of the Draft AIA include: (1) machine learning, including supervised, unsupervised, reinforcement learning and deep learning approaches; (2) logic- and knowledge-based approaches, including knowledge representation, inductive logic programming, knowledge bases, inference and deductive engines, symbolic reasoning and expert systems; and (3) statistical approaches, including Bayesian estimation and search and optimisation methods.  The Draft AIA also provides for the list of techniques and approaches in Annex I to be updated in the future.