On July 12, 2024, the European Union's Artificial Intelligence Act (AI Act) was officially published in the Official Journal of the European Union, marking the completion of the Act's legislative process; it becomes EU law and enters into force on August 1, 2024. The AI Act is the first comprehensive legal framework in the world for regulating the development and use of artificial intelligence (AI). It employs innovative legislative techniques, continues the EU's tradition of forward-looking regulation, and is expected to play a leading role in global AI governance, promoting the regulation and healthy development of the AI industry.
This article introduces the main content of the EU AI Act and analyzes how businesses expanding into the EU can respond to it.
Main Content
I. Scope of Application
- Applicable Subjects: The EU AI Act applies to the various entities in the AI system value chain, including AI system providers, deployers, importers, distributors, product manufacturers who place AI systems on the market or put them into use together with their products, authorized representatives of providers without a presence in the EU, and individuals in the EU affected by AI systems.
- Geographical Scope: The EU AI Act reaches beyond entities established in the EU. Providers located outside the EU are also subject to the Act if they place AI systems or general-purpose AI models on the EU market or put AI systems into service in the EU. Likewise, if the output of an AI system is used in the EU, providers and deployers outside the EU are governed by the Act.
- Applicable AI Systems: An AI system under the EU AI Act is a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
However, the EU AI Act does not apply to:
- AI systems intended exclusively for military, defense, or national security purposes.
- AI systems or models, including their output, developed and used solely for scientific research and development.
- AI systems still in the research, testing, or development phase that are not yet on the market or in use, although testing under real-world conditions remains subject to the Act.
- Low-risk AI systems released under free and open-source licenses.
II. Risk Classification of AI Systems
The EU AI Act uses a risk-based regulatory approach, categorizing AI system risks into four levels: unacceptable risk, high risk, specific transparency risk, and minimal risk. The higher the risk, the stricter the regulation. Most AI systems fall into lower risk categories but may still be required to fulfill specific obligations as per the Act.
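To make the four-tier structure concrete, the sketch below (in Python) shows how an organization might tag its internal AI inventory by risk tier. The system names and tier labels are illustrative assumptions, not anything prescribed by the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """Labels mirroring the AI Act's four risk levels."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk"
    TRANSPARENCY = "specific transparency risk"
    MINIMAL = "minimal risk"

# Hypothetical internal inventory: each AI system a company operates
# is tagged with the tier that drives its compliance workload.
ai_inventory = {
    "resume-screening-model": RiskTier.HIGH,            # employment use case
    "customer-support-chatbot": RiskTier.TRANSPARENCY,  # interacts with people
    "warehouse-demand-forecaster": RiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.value}")
```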
1. AI Systems with Unacceptable Risks
The EU AI Act strictly prohibits AI systems that present unacceptable risks, mainly those infringing on fundamental human rights or societal ethics. Specifically:
- Exploiting Human Subconscious: The Act prohibits AI systems from manipulating the human subconscious in ways that are undetectable or using manipulative or deceptive methods to impair informed decision-making, thereby distorting behavior or causing or potentially causing significant harm.
- Exploiting Human Vulnerabilities: The Act prohibits AI systems from exploiting human vulnerabilities such as age, physical or mental condition, or specific socio-economic conditions to distort behavior and cause or potentially cause significant harm.
- Assessment Based on Social Behavior and Personal Characteristics: The Act prohibits AI systems from evaluating individuals based on social behavior or known, inferred, or predicted personal characteristics and treating them unfairly or disproportionately, unless the evaluation is connected to the context of data collection.
- Criminal Risk Assessment of Individuals: The Act prohibits AI systems from predicting the criminal risk of individuals solely based on personal profiles or characteristics unless a human has made the assessment based on objective, verifiable facts related to criminal activities and the AI system only supports the conclusion.
- Facial Recognition Database Creation or Expansion: The Act prohibits AI systems from scraping facial images from the internet or surveillance footage indiscriminately for the purpose of creating or expanding facial recognition databases.
- Emotion Analysis: The Act prohibits AI systems from inferring human emotions in workplaces or educational institutions, except for medical or safety purposes.
- Classification Based on Biometric Information: The Act prohibits AI systems from classifying individuals based on biometric data to infer race, political opinions, union membership, religion or philosophical beliefs, sexual life or orientation, unless in law enforcement contexts involving legally obtained biometric data sets.
- Real-time Remote Biometric Identification for Law Enforcement: The Act generally prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, with narrow exceptions such as searching for missing persons.
2. High-Risk AI Systems
- Identification of High-Risk AI Systems: The EU AI Act treats as high-risk those AI systems that may endanger individuals' health, safety, or fundamental rights. These are divided into two categories:
- Category One: AI systems that are safety components of products regulated by the EU laws listed in Annex I, or that are themselves such products, and that must undergo third-party conformity assessment under that legislation. These products include machinery, toys, elevators, personal protective equipment, etc.
- Category Two: AI systems identified in Annex III of the Act, including those in the following areas:
- Biometrics: Biometric systems not prohibited by EU or member state law, including remote biometric identification systems (excluding systems used solely for biometric verification, i.e., confirming that a person is who they claim to be), AI systems for biometric categorization that infer sensitive or protected attributes, and AI systems for emotion recognition.
- Critical Infrastructure: AI systems used as safety components in critical digital infrastructure, road traffic, water supply, gas supply, heating supply, and electricity supply management and operations.
- Education and Vocational Training: AI systems used for admissions or allocation to educational and vocational training institutions, assessing learning outcomes, educational levels, and monitoring examination irregularities.
- Employment: AI systems used in recruitment processes, particularly for targeted job advertisements, resume screening, and candidate assessment; affecting working conditions, promotions, or termination of employment, task allocation based on performance or characteristics, or monitoring and evaluating employee performance.
- Essential Services and Benefits: AI systems for assessing individuals' creditworthiness (excluding financial fraud detection); risk assessment and pricing for life and health insurance; assessing individuals' eligibility for public benefits; and evaluating emergency calls and triaging emergency responses for police, fire, medical, and similar services.
- Law Enforcement: AI systems used in criminal investigations, lie detection, etc.
- Immigration, Asylum, and Border Control: AI systems used for assessing immigration risks.
- Judiciary and Democracy: AI systems used for fact-finding, legal interpretation, and influencing electoral behavior.
In the case of the second category, an AI system is not classified as high-risk if it does not pose a significant risk to individuals' health, safety, or fundamental rights. The Act specifies four types of exceptions under which AI systems are not considered high-risk:
- When the system performs a narrowly defined procedural task.
- When it improves the result of a previously completed human activity.
- When it detects decision-making patterns or deviations from prior patterns, without replacing or influencing a previously completed human assessment absent proper human review.
- When it performs a preparatory task for an assessment relevant to the use cases listed in Annex III.
However, even with these exceptions, if the system performs profiling of individuals, it is still considered high-risk. Providers who believe their AI system falls within an exception must document their assessment before placing the system on the market or putting it into use and must register the system in the EU database.
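As a rough illustration of this self-assessment, the sketch below encodes the four exceptions and the profiling override as a single decision rule. The parameter names are ours, and a real assessment is a documented legal analysis, not a boolean function:

```python
def annex_iii_system_is_high_risk(
    performs_narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_without_replacing_review: bool,
    performs_preparatory_assessment: bool,
    involves_profiling: bool,
) -> bool:
    """Hypothetical encoding of the high-risk exceptions for Annex III systems.

    Profiling of individuals overrides every exception; otherwise any one
    of the four exceptions takes the system out of the high-risk category.
    """
    if involves_profiling:
        return True  # profiling keeps the system high-risk regardless
    return not any((
        performs_narrow_procedural_task,
        improves_completed_human_activity,
        detects_patterns_without_replacing_review,
        performs_preparatory_assessment,
    ))

# A resume-ranking tool that profiles candidates stays high-risk even if
# it only "prepares" an assessment for a human recruiter.
print(annex_iii_system_is_high_risk(False, False, False, True, True))  # True
```

Even when an exception applies, the documented assessment and the EU database registration described above remain mandatory.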
Due to the inherent vagueness in distinguishing high-risk and non-high-risk AI systems, the Act requires the European Commission to provide relevant guidelines and lists of examples by February 2, 2026. The Commission is also authorized to amend the scope of high-risk AI systems as needed. Therefore, relevant entities should keep an eye on the latest EU legislative developments.
- Regulatory Requirements for High-Risk AI Systems
- Risk Management System: High-risk AI systems must establish, implement, record, and maintain a risk management system throughout the system’s lifecycle, retaining records for 10 years after the system’s lifecycle ends. This system should include identifying, analyzing, and assessing risks to individuals' health, safety, or fundamental rights from the system’s intended use and foreseeable misuse, and taking appropriate measures to address known and reasonably foreseeable risks. Testing should be carried out during development and before placing the system on the market or into use, potentially in real-world conditions.
- Data Management: AI systems typically need to be trained on data. Training, validation, and testing datasets must be governed by data management practices appropriate to the AI system's intended purpose. Special attention must be given to the purpose of personal data collection, the availability and suitability of datasets, and their representativeness and completeness. Datasets must be examined for bias to prevent negative impacts on individuals' health, safety, or fundamental rights, or discrimination prohibited by EU law. Providers may process special categories of personal data to identify and correct biases, provided they meet EU privacy and data protection requirements, document the processing activities, protect the data, and delete it once the bias is corrected or its retention period expires.
- Technical Documentation: Before placing a high-risk AI system on the market or putting it into use, technical documentation must be drawn up in accordance with Annex IV of the Act and kept up to date. High-risk AI systems must also automatically generate logs covering events over the system's lifetime; providers must retain these logs for a period appropriate to the system's intended purpose, and in any case not less than six months (a minimal logging sketch appears at the end of this subsection).
- System Transparency: Providers must ensure the transparency of high-risk AI systems so deployers understand and use the system effectively. Instructions must be provided with the system, including the provider's identity and contact details, authorized representatives (if any), system features and performance, human oversight measures, required computing and hardware resources, maintenance measures, etc.
- Human Oversight: Adequate human oversight should be implemented for high-risk AI systems to ensure that the AI system's functioning aligns with its intended purpose and does not pose risks to health, safety, or fundamental rights. The level of human oversight must be proportionate to the level of risk, enabling human intervention when necessary and providing appropriate training to human overseers.
- Accuracy, Robustness, and Cybersecurity: High-risk AI systems must be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity, and to perform consistently in those respects throughout their lifecycle.
- Obligations of Providers: In addition to meeting the above requirements, Article 16 of the Act lists several obligations for providers, with particular attention needed for the following:
- Before placing a high-risk AI system on the market or putting it into use, providers must complete the conformity assessment procedure under Article 43, draw up an EU Declaration of Conformity under Article 47, retain it for 10 years for review by national authorities, and register the system in the EU database.
- Providers must indicate their name, registered trademark, and contact address on high-risk AI systems (or their packaging, accompanying documents).
- Providers must affix the CE marking to high-risk AI systems (or their packaging, accompanying documents).
- Providers must establish a quality management system in accordance with Article 17. Article 17 requires systematic documentation of rules, procedures, and instructions, including compliance strategies, data management systems and procedures, record-keeping systems for all relevant documents and information, and other necessary matters. Documents must be retained for 10 years. Given that the record-keeping obligations of Article 17 may be burdensome for micro and small enterprises, the Act allows for appropriate simplifications, and micro and small enterprises should refer to subsequent guidelines issued by the European Commission.
- Authorized Representatives of Providers: Providers without a presence in the EU must appoint an authorized representative located in the EU in writing before placing high-risk AI systems on the EU market. The authorized representative performs designated tasks according to the authorization.
- Obligations of Deployers: The Act imposes relatively few obligations on deployers. Under Article 26, these include:
- Notifying employee representatives and affected employees before deploying high-risk AI systems in the workplace.
- Conducting data protection impact assessments in accordance with GDPR Article 35 and considering the information included in the provider's instructions during the assessment.
- Ensuring the high-risk AI system is used according to the provider's instructions.
- Designating qualified personnel to supervise the high-risk AI system.
- Monitoring the operation of the high-risk AI system in accordance with the instructions, and notifying the provider and the relevant regulatory authorities if the system presents a risk to individuals' health, safety, or fundamental rights or if a serious incident occurs.
- Ensuring that the input data under their control is relevant and sufficiently representative of the high-risk AI system's intended purpose.
- Retaining logs automatically generated by the high-risk AI system for a period appropriate to the system's intended purpose and not less than 6 months.
Additionally, Article 27 stipulates that if the deployer is a public authority or a private entity providing public services (excluding entities in critical infrastructure sectors), or if the high-risk AI system is used for assessing individuals' creditworthiness or for risk assessment and pricing in life and health insurance, a fundamental rights impact assessment must be conducted before deploying the system, its results must be communicated to the regulatory authority, and the assessment must be kept up to date while the system is in use.
The fundamental rights impact assessment is similar in spirit to the data protection impact assessment. According to Article 27, the assessment should include:
- A description of how the high-risk AI system will be used for its intended purpose.
- The expected duration and frequency of use of the high-risk AI system.
- The categories of individuals and groups likely to be affected.
- The specific risks those individuals and groups may face, taking into account the information in the provider's instructions.
- A description of the human oversight measures.
- The measures to be taken if the risks materialize, including internal governance and complaint mechanisms. If a data protection impact assessment has already been conducted, the fundamental rights impact assessment should be carried out alongside it.
- Obligations of Importers and Distributors: Importers must verify that a high-risk AI system complies with the Act's requirements before placing it on the market; mark the system (or its packaging or accompanying documents) with their name, registered trademark or trade name, and contact address; and keep copies of certificates, instructions, and the EU Declaration of Conformity for 10 years after the system is placed on the market or put into use.
- Distributors must check, before making a high-risk AI system available on the market, whether it bears the CE marking, whether it is accompanied by instructions and the EU Declaration of Conformity, and whether the provider and importer have fulfilled their obligations.
- Transfer of Providers' Obligations and Obligations of Other Participants in the Value Chain: In the following cases, importers, distributors, deployers, or other third parties assume the obligations originally borne by the provider:
- When they put their name or trademark on a high-risk AI system already placed on the market or put into use.
- When they make a substantial modification to a high-risk AI system already on the market or in use, and the modified system remains high-risk.
- When they modify a non-high-risk AI system already on the market or in use in a way that turns it into a high-risk AI system. In these cases, the original provider is no longer required to assume the related obligations but must provide reasonable assistance.
If a high-risk AI system is a safety component of a product regulated by the EU laws listed in Section A of Annex I, and the product is placed on the market or put into use under the product manufacturer's name or trademark, the manufacturer is considered the provider and must fulfill the provider obligations set out in Article 16 of the Act.
For third parties that supply tools, services, components, or processes for AI systems, even though they do not hold the status of provider, the provider must conclude written agreements with them to secure the information and assistance necessary to fulfill the provider's obligations under the Act.
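To make the log-retention requirement mentioned under Technical Documentation above more concrete, here is a minimal sketch of automatic event logging with a six-month retention floor. The file layout, field names, and purge strategy are illustrative assumptions, not anything the Act prescribes:

```python
import json
import time
from pathlib import Path

LOG_DIR = Path("ai_system_logs")            # illustrative location
MIN_RETENTION_SECONDS = 183 * 24 * 3600     # roughly six months, the Act's floor

def record_event(event: dict) -> None:
    """Append an automatically generated event to the system's log."""
    LOG_DIR.mkdir(exist_ok=True)
    event["timestamp"] = time.time()
    with open(LOG_DIR / "events.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def purge_expired(retention_seconds: int = MIN_RETENTION_SECONDS) -> None:
    """Delete rotated log files only after the retention floor has passed."""
    cutoff = time.time() - retention_seconds
    for path in LOG_DIR.glob("*.jsonl"):
        if path.stat().st_mtime < cutoff:
            path.unlink()

record_event({"decision": "loan_application_scored", "model_version": "1.4.2"})
```

In practice the retention period should be tuned to the system's intended purpose, with six months as the minimum.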
3. AI Systems with Specific Transparency Risks
To prevent individuals from being manipulated, the EU Artificial Intelligence Act imposes transparency obligations on providers and deployers of specific AI systems. Specifically, the following four scenarios are addressed:
- AI Systems Interacting Directly with Individuals
AI systems that interact directly with individuals must be designed to make it clear to the individuals that they are interacting with an AI system, unless it is evident to a reasonable person given the context of use. This obligation does not apply to AI systems authorized by law for the detection, prevention, investigation, or prosecution of criminal offenses (provided that appropriate safeguards are in place for third-party rights and freedoms), unless the AI system is used for public reporting of criminal offenses.
- AI Systems Generating Synthetic Audio, Images, Video, or Text Content
AI systems that generate synthetic audio, image, video, or text content must ensure that their outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated (a simplified marking sketch appears after this list). This obligation does not apply to AI systems that perform an assistive, standard-editing function or that do not substantially alter the input content, nor to AI systems authorized by law for detecting, preventing, investigating, or prosecuting criminal offenses.
- Emotion Recognition or Biometric Classification Systems
Deployers of emotion recognition or biometric classification systems must inform affected individuals about the operation of these systems and handle personal data in accordance with EU privacy and data protection laws, such as the GDPR. This obligation does not apply to biometric classification and emotion recognition systems authorized by law for detecting, preventing, or investigating criminal offenses (provided that appropriate safeguards are in place and no EU law is violated).
- AI Systems Generating or Manipulating Deepfake Content
Deployers of AI systems that generate or manipulate deepfake images, audio, or video content must disclose that the content is artificially generated or manipulated. If the content is part of an artistic or similar work, disclosure should be made in a suitable manner without hindering its presentation and enjoyment. This obligation does not apply if the AI-generated content is subject to human review or editing, and the publisher of the content is responsible for editing, nor to AI systems authorized for detecting, preventing, investigating, or prosecuting criminal offenses.
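Returning to the machine-readable marking obligation for synthetic content (the second scenario above): one simple way to picture it is embedding a provenance record alongside the generated output. The JSON wrapper below is a deliberately simplified assumption; real deployments would rely on an established provenance standard such as C2PA or robust watermarking rather than this ad-hoc format:

```python
import base64
import json

def mark_as_ai_generated(content: bytes, generator: str) -> str:
    """Wrap generated content with a machine-readable provenance record."""
    record = {
        "provenance": {
            "ai_generated": True,   # the machine-readable flag
            "generator": generator,
            "note": "ad-hoc example format, not a real standard",
        },
        "payload": base64.b64encode(content).decode("ascii"),
    }
    return json.dumps(record)

marked = mark_as_ai_generated(b"<synthetic image bytes>", "image-model-v1")
print(marked[:120])
```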
4. Minimal Risk AI Systems
AI systems that do not fall into the above categories are considered to have minimal risk and can be freely used on the EU market under the current legal framework. However, these systems must still comply with relevant privacy and data protection regulations as well as consumer protection laws.
It is also important to note that the EU Artificial Intelligence Act mandates AI literacy for all applicable AI systems, meaning that providers and deployers must ensure their employees and other representatives handling and using AI systems are adequately trained in AI literacy.
III. General-Purpose AI Models
The EU Artificial Intelligence Act defines general-purpose AI models as AI models that are trained on large amounts of data using self-supervision at scale, display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems and applications. AI models used only for research and prototyping and not yet placed on the market are not general-purpose AI models under the Act.
- General Obligations of General-Purpose AI Model Providers
Article 53 of the Act sets out general obligations for providers of general-purpose AI models, including:
(1) Preparing and continuously updating technical documentation for the model, including training and testing processes and evaluation results, to provide upon request to the EU AI Office and national competent authorities;
(2) Providing relevant information and documentation to AI system providers intending to integrate the model into their AI systems and updating this information continuously;
(3) Preparing and publishing a sufficiently detailed summary of the content used to train the model, using a template provided by the EU AI Office;
(4) Putting in place a policy to comply with EU copyright law.
General-purpose AI model providers without a presence in the EU must appoint, in writing, an authorized representative located in the EU before placing the model on the EU market. The authorized representative performs designated tasks according to the authorization.
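In practice, the Article 53 documentation duty resembles maintaining a continuously updated "model card". The record below is a minimal sketch; the field names are our assumptions, while the Act's annexes spell out the required contents in detail:

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Illustrative record of Article 53-style documentation items."""
    model_name: str
    training_and_testing_process: str   # description for the AI Office on request
    evaluation_results: dict            # benchmark scores, red-team findings
    training_content_summary: str       # published summary of training content
    integration_guidance: str           # information for downstream providers
    copyright_policy: str               # policy for complying with EU copyright law
    revision_history: list = field(default_factory=list)

    def update(self, note: str) -> None:
        """Documentation must be kept continuously up to date."""
        self.revision_history.append(note)

doc = ModelDocumentation(
    "example-gpai-v1", "pretraining + fine-tuning, internal eval suite",
    {"benchmark_score": 0.82}, "web text and licensed corpora (summary)",
    "context window, I/O formats, known limitations", "opt-out honoring policy",
)
doc.update("2024-08: initial version")
```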
- Additional Obligations for Providers of General-Purpose AI Models with Systemic Risk
If a general-purpose AI model has high-impact capabilities, or is designated by the European Commission as having high-impact capabilities, it is classified as a general-purpose AI model with systemic risk.
Article 55 specifies additional obligations for providers of general-purpose AI models with systemic risk, beyond the general obligations, including:
(1) Using standardized protocols and tools reflecting the latest technological standards to assess the model, including conducting and recording adversarial testing to identify and mitigate systemic risks;
(2) Assessing and mitigating systemic risks at the EU level, including risks associated with development, market placement, and use;
(3) Tracking, recording, and promptly reporting serious incidents and possible remedial actions to the EU AI Office, and reporting to national competent authorities as appropriate;
(4) Ensuring appropriate cybersecurity protection for the model and its physical infrastructure.
IV. AI Regulatory Sandbox
To encourage technological innovation, the EU Artificial Intelligence Act introduces a regulatory sandbox designed to provide a controlled environment for the development, training, testing, and validation of AI systems before they are placed on the market or put into use, including testing in real-world environments.
The Act requires each member state to establish at least one AI regulatory sandbox at the national level. Within the sandbox, providers can avoid fines for violations of the Act's provisions as long as they comply with the sandbox's specific plans and participation conditions and follow national authorities' guidance. The Act also provides benefits for SMEs, such as free use of the sandbox and priority access.
It is worth noting that data protection authorities may also be involved in sandbox regulation, so participants must still adhere to data protection obligations. Additionally, Article 59 of the Act stipulates that personal data collected for other legitimate purposes may be used in the sandbox for developing, training, and testing specific AI systems under certain conditions, including:
(a) The AI system is developed for significant public interest, including public safety and health, environmental protection, energy sustainability, transportation system safety, critical infrastructure security, cybersecurity, and public administration and service efficiency;
(b) Data processing is necessary to meet the requirements of Chapter 3, Section 2 of the Act (requirements for high-risk AI systems) and cannot be effectively met through anonymization;
(c) Effective monitoring and response mechanisms are in place to identify and mitigate high risks to data subjects' rights and freedoms in sandbox experiments;
(d) Data processing occurs in a controlled, functionally independent, isolated, and protected environment, with access restricted to authorized personnel;
(e) Data may only be further shared in accordance with EU data protection laws, and personal data created within the sandbox cannot be shared outside the sandbox;
(f) Data processing in the sandbox does not impact measures or decisions concerning data subjects or their exercise of rights under EU data protection laws;
(g) Personal data processed in the sandbox is protected by appropriate measures and deleted immediately upon leaving the sandbox or after the data retention period expires;
(h) Logs of personal data processing in the sandbox should be kept during participation;
(i) Detailed descriptions of the AI system’s training, testing, and validation processes, and test results should be retained as part of the technical documentation;
(j) A brief summary of the AI projects, objectives, and expected results developed in the sandbox should be published on the competent authority’s website.
However, the specific operation of the AI regulatory sandbox is not yet clear, and further detailed arrangements from the European Commission and member states are awaited.
V. Penalties
According to the Act:
- Violations of the prohibitions related to certain AI systems will incur fines of up to €35 million or 7% of the previous year's global turnover, whichever is higher.
- Violations of other compliance requirements under the Act will incur fines of up to €15 million or 3% of the previous year's global turnover, whichever is higher.
- Providing incorrect, incomplete, or misleading information to regulatory authorities will incur fines of up to €7.5 million or 1% of the previous year's global turnover, whichever is higher.
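Since each cap applies "whichever is higher", the effective maximum depends on the company's global turnover. A quick sketch of the arithmetic:

```python
def max_fine(fixed_cap_eur: int, turnover_share: float, global_turnover_eur: int) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

turnover = 2_000_000_000  # a company with €2 billion global turnover
print(max_fine(35_000_000, 0.07, turnover))  # prohibited practices: €140,000,000
print(max_fine(15_000_000, 0.03, turnover))  # other violations:     €60,000,000
print(max_fine(7_500_000, 0.01, turnover))   # misleading info:      €20,000,000
```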
VI. Effectiveness and Implementation of the Act
The EU Artificial Intelligence Act enters into force on August 1, 2024, and becomes generally applicable starting August 2, 2026. However, there are exceptions:
- Chapter 1 (General Provisions) and Chapter 2 (Prohibited AI Practices) of the Act will be implemented starting February 2, 2025.
- Chapter 3, Section 4 (Notifying Authorities and Notified Bodies), Chapter 5 (General AI Models), Chapter 7 (Governance), Chapter 12 (Fines), and Article 78 (Confidentiality) will be implemented starting August 2, 2025, except for Article 101 (Fines for Providers of General AI Models).
- Article 6(1) and related provisions (regarding the classification of specific high-risk AI systems and corresponding obligations) will be implemented starting August 2, 2027.
Frequently Asked Questions
Q: After the EU Artificial Intelligence Act is enacted, do AI activities still need to comply with GDPR?
A: Yes. The Artificial Intelligence Act does not conflict with GDPR; AI activities must comply with both.
Q: Who is the "competent authority" mentioned in the EU Artificial Intelligence Act?
A: The Act requires each member state to designate national competent authorities responsible for its application, including a notifying authority and a market surveillance authority (see Articles 28 and 70), but the specific authorities have not yet been designated.
Q: Does the EU Artificial Intelligence Act ban AI systems used for facial recognition?
A: The Act generally bans real-time remote biometric identification (including facial recognition) in publicly accessible spaces for law enforcement purposes, but it does not ban facial recognition AI systems used for non-law-enforcement purposes. However, facial recognition AI systems are generally considered high-risk and must comply with the Act's requirements on risk management systems, data management, technical documentation, and other obligations.
Q: Are there any requirements related to data cross-border transfer in the EU Artificial Intelligence Act?
A: The Act does not impose specific requirements on data cross-border transfers; compliance with GDPR remains the basis for cross-border data transfer.
Regulatory Analysis
The EU Artificial Intelligence Act adopts a risk-based regulatory approach, setting different obligations for various participants in the AI value chain and authorizing the European Commission and member states to make modifications or detailed regulations to achieve a dynamic balance between regulation and innovation.
Given the broad applicability and stringent penalties of the Act, enterprises with plans to operate in the EU must pay close attention to the Act and strictly comply with its provisions.
When considering compliance with the Act, companies first need to clarify their role in the AI value chain, which requires analyzing their business models. For example, companies developing AI systems for the EU market may be considered providers of AI systems; companies in sectors like automotive, e-commerce, finance, or healthcare using self-developed AI systems may be considered providers, while those using AI systems developed by others may be considered deployers or providers depending on the context. Companies also need to assess the risk level of their AI systems to apply the appropriate regulatory requirements. Although most AI systems fall into the minimal risk category, companies in sectors like education, employment, finance, insurance, and healthcare should pay special attention to whether their AI systems meet the criteria for high-risk AI systems as defined by the Act.
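As a first-pass triage of that role analysis, a company might encode the main distinctions as a simple check. The sketch below is purely illustrative; the conditions paraphrase the Act's role definitions, and an actual classification requires legal review:

```python
def likely_roles(develops_system: bool, places_on_eu_market: bool,
                 uses_under_own_authority: bool,
                 imports_into_eu: bool) -> list[str]:
    """Rough first-pass mapping from a business model to AI Act roles."""
    roles = []
    if develops_system and places_on_eu_market:
        roles.append("provider")
    if uses_under_own_authority and not develops_system:
        roles.append("deployer")
    if imports_into_eu:
        roles.append("importer")
    return roles or ["unclear: confirm with counsel"]

# An EU retailer running a third party's resume-screening system:
print(likely_roles(develops_system=False, places_on_eu_market=False,
                   uses_under_own_authority=True, imports_into_eu=False))
# -> ['deployer']
```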
The Act requires AI system providers not to place prohibited AI systems on the market and, when providing high-risk AI systems, to carefully address obligations such as establishing and maintaining a risk management system, managing data, preparing technical documentation, conducting conformity assessments, and drawing up EU Declarations of Conformity. Deployers must closely monitor and control the use of high-risk AI systems to minimize risks to individuals' health, safety, or fundamental rights. Importers and distributors must verify that high-risk AI systems comply with the Act's requirements. Entities other than providers may also assume provider obligations in certain circumstances. In this way, the Act provides comprehensive oversight of AI systems from early development through market entry, use, and end of life.
It is important to note that the Act does not conflict with GDPR, and AI activities must still consider GDPR’s privacy compliance requirements. The Act also repeatedly emphasizes compliance with GDPR for personal data processing and conducting data protection impact assessments. Kaamel, as a specialist in privacy compliance, has successfully assisted various enterprises with data protection impact assessments and other privacy compliance tasks, helping companies effectively address regulatory requirements and user needs while reducing privacy risks and compliance issues.
Although there is some time before the Act's implementation, the detailed nature of its provisions may require substantial effort and resources from companies to meet compliance requirements. Companies are advised to take timely action, develop appropriate compliance plans, and prepare adequately before the Act comes into effect. Additionally, companies should closely monitor EU legislative and enforcement developments, as the European Commission and member states will issue more detailed regulations. Kaamel will continue to track the Act's dynamics and provide you with the latest legal information.