Artificial Intelligence Act (AI Act)

Welcome to ai-act-law.eu. Here you will find the Artificial Intelligence Act (AI Act) neatly arranged, including the PDF. The final text of the AI Act is available in both English and German. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence will apply from 2 August 2026, with some exceptions (cf. Art. 113 AI Act).

Table of contents

Chapter 1 – General provisions
Art. 1 AI Act – Subject matter
Art. 2 AI Act – Scope
Art. 3 AI Act – Definitions
Art. 4 AI Act – AI literacy
Chapter 2 – Prohibited AI practices
Art. 5 AI Act – Prohibited AI practices
Chapter 3 – High-risk AI systems
Section 1 – Classification of AI systems as high-risk
Art. 6 AI Act – Classification rules for high-risk AI systems
Art. 7 AI Act – Amendments to Annex III
Section 2 – Requirements for high-risk AI systems
Art. 8 AI Act – Compliance with the requirements
Art. 9 AI Act – Risk management system
Art. 10 AI Act – Data and data governance
Art. 11 AI Act – Technical documentation
Art. 12 AI Act – Record-keeping
Art. 13 AI Act – Transparency and provision of information to deployers
Art. 14 AI Act – Human oversight
Art. 15 AI Act – Accuracy, robustness and cybersecurity
Section 3 – Obligations of providers and deployers of high-risk AI systems and other parties
Art. 16 AI Act – Obligations of providers of high-risk AI systems
Art. 17 AI Act – Quality management system
Art. 18 AI Act – Documentation keeping
Art. 19 AI Act – Automatically generated logs
Art. 20 AI Act – Corrective actions and duty of information
Art. 21 AI Act – Cooperation with competent authorities
Art. 22 AI Act – Authorised representatives of providers of high-risk AI systems
Art. 23 AI Act – Obligations of importers
Art. 24 AI Act – Obligations of distributors
Art. 25 AI Act – Responsibilities along the AI value chain
Art. 26 AI Act – Obligations of deployers of high-risk AI systems
Art. 27 AI Act – Fundamental rights impact assessment for high-risk AI systems
Section 4 – Notifying authorities and notified bodies
Art. 28 AI Act – Notifying authorities
Art. 29 AI Act – Application of a conformity assessment body for notification
Art. 30 AI Act – Notification procedure
Art. 31 AI Act – Requirements relating to notified bodies
Art. 32 AI Act – Presumption of conformity with requirements relating to notified bodies
Art. 33 AI Act – Subsidiaries of notified bodies and subcontracting
Art. 34 AI Act – Operational obligations of notified bodies
Art. 35 AI Act – Identification numbers and lists of notified bodies
Art. 36 AI Act – Changes to notifications
Art. 37 AI Act – Challenge to the competence of notified bodies
Art. 38 AI Act – Coordination of notified bodies
Art. 39 AI Act – Conformity assessment bodies of third countries
Section 5 – Standards, conformity assessment, certificates, registration
Art. 40 AI Act – Harmonised standards and standardisation deliverables
Art. 41 AI Act – Common specifications
Art. 42 AI Act – Presumption of conformity with certain requirements
Art. 43 AI Act – Conformity assessment
Art. 44 AI Act – Certificates
Art. 45 AI Act – Information obligations of notified bodies
Art. 46 AI Act – Derogation from conformity assessment procedure
Art. 47 AI Act – EU declaration of conformity
Art. 48 AI Act – CE marking
Art. 49 AI Act – Registration
Chapter 4 – Transparency obligations for providers and deployers of certain AI systems
Art. 50 AI Act – Transparency obligations for providers and deployers of certain AI systems
Chapter 5 – General-purpose AI models
Section 1 – Classification rules
Art. 51 AI Act – Classification of general-purpose AI models as general-purpose AI models with systemic risk
Art. 52 AI Act – Procedure
Section 2 – Obligations for providers of general-purpose AI models
Art. 53 AI Act – Obligations for providers of general-purpose AI models
Art. 54 AI Act – Authorised representatives of providers of general-purpose AI models
Section 3 – Obligations of providers of general-purpose AI models with systemic risk
Art. 55 AI Act – Obligations of providers of general-purpose AI models with systemic risk
Section 4 – Codes of practice
Art. 56 AI Act – Codes of practice
Chapter 6 – Measures in support of innovation
Art. 57 AI Act – AI regulatory sandboxes
Art. 58 AI Act – Detailed arrangements for, and functioning of, AI regulatory sandboxes
Art. 59 AI Act – Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox
Art. 60 AI Act – Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes
Art. 61 AI Act – Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes
Art. 62 AI Act – Measures for providers and deployers, in particular SMEs, including start-ups
Art. 63 AI Act – Derogations for specific operators
Chapter 7 – Governance
Section 1 – Governance at Union level
Art. 64 AI Act – AI Office
Art. 65 AI Act – Establishment and structure of the European Artificial Intelligence Board
Art. 66 AI Act – Tasks of the Board
Art. 67 AI Act – Advisory forum
Art. 68 AI Act – Scientific panel of independent experts
Art. 69 AI Act – Access to the pool of experts by the Member States
Section 2 – National competent authorities
Art. 70 AI Act – Designation of national competent authorities and single points of contact
Chapter 8 – EU database for high-risk AI systems
Art. 71 AI Act – EU database for high-risk AI systems listed in Annex III
Chapter 9 – Post-market monitoring, information sharing and market surveillance
Section 1 – Post-market monitoring
Art. 72 AI Act – Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems
Section 2 – Sharing of information on serious incidents
Art. 73 AI Act – Reporting of serious incidents
Section 3 – Enforcement
Art. 74 AI Act – Market surveillance and control of AI systems in the Union market
Art. 75 AI Act – Mutual assistance, market surveillance and control of general-purpose AI systems
Art. 76 AI Act – Supervision of testing in real world conditions by market surveillance authorities
Art. 77 AI Act – Powers of authorities protecting fundamental rights
Art. 78 AI Act – Confidentiality
Art. 79 AI Act – Procedure at national level for dealing with AI systems presenting a risk
Art. 80 AI Act – Procedure for dealing with AI systems classified by the provider as non-high-risk in application of Annex III
Art. 81 AI Act – Union safeguard procedure
Art. 82 AI Act – Compliant AI systems which present a risk
Art. 83 AI Act – Formal non-compliance
Art. 84 AI Act – Union AI testing support structures
Section 4 – Remedies
Art. 85 AI Act – Right to lodge a complaint with a market surveillance authority
Art. 86 AI Act – Right to explanation of individual decision-making
Art. 87 AI Act – Reporting of infringements and protection of reporting persons
Section 5 – Supervision, investigation, enforcement and monitoring in respect of providers of general-purpose AI models
Art. 88 AI Act – Enforcement of the obligations of providers of general-purpose AI models
Art. 89 AI Act – Monitoring actions
Art. 90 AI Act – Alerts of systemic risks by the scientific panel
Art. 91 AI Act – Power to request documentation and information
Art. 92 AI Act – Power to conduct evaluations
Art. 93 AI Act – Power to request measures
Art. 94 AI Act – Procedural rights of economic operators of the general-purpose AI model
Chapter 10 – Codes of conduct and guidelines
Art. 95 AI Act – Codes of conduct for voluntary application of specific requirements
Art. 96 AI Act – Guidelines from the Commission on the implementation of this Regulation
Chapter 11 – Delegation of power and committee procedure
Art. 97 AI Act – Exercise of the delegation
Art. 98 AI Act – Committee procedure
Chapter 12 – Penalties
Art. 99 AI Act – Penalties
Art. 100 AI Act – Administrative fines on Union institutions, bodies, offices and agencies
Art. 101 AI Act – Fines for providers of general-purpose AI models
Chapter 13 – Final provisions
Art. 102 AI Act – Amendment to Regulation (EC) No 300/2008
Art. 103 AI Act – Amendment to Regulation (EU) No 167/2013
Art. 104 AI Act – Amendment to Regulation (EU) No 168/2013
Art. 105 AI Act – Amendment to Directive 2014/90/EU
Art. 106 AI Act – Amendment to Directive (EU) 2016/797
Art. 107 AI Act – Amendment to Regulation (EU) 2018/858
Art. 108 AI Act – Amendments to Regulation (EU) 2018/1139
Art. 109 AI Act – Amendment to Regulation (EU) 2019/2144
Art. 110 AI Act – Amendment to Directive (EU) 2020/1828
Art. 111 AI Act – AI systems already placed on the market or put into service and general-purpose AI models already placed on the market
Art. 112 AI Act – Evaluation and review
Art. 113 AI Act – Entry into force and application

What is the AI Act?

The AI Act is an EU regulation governing artificial intelligence. Its purpose is to promote the uptake of human-centric AI in Europe while ensuring a high level of protection of health, safety and fundamental rights (Article 1 AI Act). To this end, the law defines AI practices that are prohibited outright as well as those likely to involve a “high risk” and therefore subject to special diligence requirements.

When does the AI Act enter into force?

The AI Act was published in the Official Journal of the European Union on 12 July 2024 and entered into force 20 days later, on 1 August 2024. According to Article 113 AI Act, the provisions on prohibited AI practices apply 6 months after entry into force (from 2 February 2025), while most other provisions apply after 12 or 24 months. The requirements and obligations for high-risk AI systems under Article 6(1) (AI systems that are products or safety components of products regulated under Annex I) apply after 36 months, from 2 August 2027.
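
As a purely illustrative aid (not part of the Regulation itself), the staggered timeline can be laid out programmatically. The milestone labels below are informal shorthand chosen for this sketch; the dates are those fixed in Article 113 AI Act.

```python
from datetime import date, timedelta

# Entry into force: 20 days after publication in the Official Journal
publication = date(2024, 7, 12)
entry_into_force = publication + timedelta(days=20)   # 2024-08-01

# Application dates as fixed by Article 113 AI Act (labels are informal shorthand)
application_dates = {
    "Chapters I and II, incl. prohibited AI practices": date(2025, 2, 2),
    "General-purpose AI, governance, penalties (with exceptions)": date(2025, 8, 2),
    "General date of application": date(2026, 8, 2),
    "Article 6(1) high-risk AI systems (Annex I products)": date(2027, 8, 2),
}

print(f"Entry into force: {entry_into_force.isoformat()}")
for provision, applies_from in sorted(application_dates.items(), key=lambda kv: kv[1]):
    print(f"{applies_from.isoformat()}  {provision}")
```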

What types of AI are covered in the AI Act?

The AI Act distinguishes between 3 types of AI based on the risk they pose to the safety and fundamental rights of individuals. These 3 types are:

  • Prohibited AI systems (Article 5):
    Some AI systems are incompatible with the values of the European Union and are therefore prohibited, for example systems that predict the future criminality of individuals.
  • High-risk AI systems (Article 6):
    High-risk AI systems must meet strict requirements before their use is permitted. These include, for example, AI in surgical robots.
  • AI systems with minimal or no risk:
    The AI Act does not regulate AI systems with minimal risk. This category includes, for example, spam filters or AI-enabled video games.

Who is affected by the AI Act?

The AI Act applies to private organizations and public authorities and, according to Article 2 AI Act, affects five groups:

  • providers,
  • deployers,
  • importers and distributors,
  • product manufacturers and
  • affected persons.

The AI Act expressly does not apply to private individuals who use AI systems for purely personal (as opposed to professional) purposes. There are further exemptions, for example for AI systems developed and used solely for scientific research and development and for systems used for military, defense or national security purposes.

Where does the AI Act apply?

The AI Act applies in the European Union. Whether and when it will also apply in the EEA countries is still unclear, as the Regulation must first be incorporated into the EEA Agreement with their consent. In addition, the AI Act reaches providers and deployers established outside the EU if they place AI systems on the market or put them into service in the EU, or if the output produced by the AI system is used in the EU.

What is high-risk AI?

According to Article 6 of the AI Act, a high-risk AI system is one whose intended use poses a significant risk of harm to the health or safety of individuals, or of an adverse impact on their fundamental rights.

On the one hand, this is the case where the AI system is itself a product, or a safety component of a product, that is already strictly regulated under EU law, such as lifts or medical devices. On the other hand, this is the case if the system is used in a certain way in one of the areas defined in Annex III of the Regulation. These include biometrics, critical infrastructure, education and vocational training, employment and workers management, access to and enjoyment of essential services and benefits, law enforcement, migration, asylum and border control management, and the administration of justice and democratic processes. The EU Commission can adapt this list in order to react to new developments. Furthermore, it must provide a comprehensive list of practical examples of high-risk and non-high-risk use cases of AI systems no later than 18 months after the entry into force of the Regulation.
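
To make the two routes described above easier to follow, here is a minimal, non-authoritative sketch of the classification logic. The function and field names are invented for illustration, the area labels are informal summaries rather than the legal wording, and the sketch deliberately omits nuances such as the derogation in Article 6(3); the authoritative criteria are Article 6 and Annex III AI Act.

```python
from dataclasses import dataclass

# Annex III areas as summarised above (informal labels, not the legal wording)
ANNEX_III_AREAS = {
    "biometrics",
    "critical infrastructure",
    "education and vocational training",
    "employment and workers management",
    "essential services and benefits",
    "law enforcement",
    "migration, asylum and border control management",
    "administration of justice and democratic processes",
}

@dataclass
class AISystemProfile:
    # Route 1: the system is a product, or a safety component of a product,
    # already strictly regulated under EU law listed in Annex I (e.g. lifts, medical devices)
    regulated_product_or_safety_component: bool
    # Route 2: the system is used in one of the areas listed in Annex III (or None)
    annex_iii_area: str | None = None

def is_high_risk(profile: AISystemProfile) -> bool:
    """Simplified reading of Article 6 AI Act: either route triggers high-risk status."""
    if profile.regulated_product_or_safety_component:
        return True
    return profile.annex_iii_area in ANNEX_III_AREAS

# Example: an AI system used for recruitment decisions falls under Annex III
print(is_high_risk(AISystemProfile(False, "employment and workers management")))  # True
```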

What is the definition of AI and what does it include?

AI is often understood as a combination of technical approaches and methods aimed at enabling machines to mimic human cognitive abilities such as logical reasoning, learning, planning and creativity. A universally accepted definition has not yet emerged and remains the subject of debate.

The AI Act also provides a definition. It defines an AI system as a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (Article 3 AI Act).

This definition of AI is very broad, as most of the criteria also apply to traditional software programs. The decisive factor in determining whether a system is considered AI is therefore usually whether it is designed for autonomous operation.

Why is AI dangerous? Is AI a threat?

AI can pose a threat to individuals by significantly influencing their behavior to their detriment. It can also cause discrimination or other unjustified infringements of their fundamental rights, resulting in harm through significant adverse effects on their physical or mental health or their financial interests. In Article 5 AI Act, the European legislator therefore lists practices that pose an unacceptable risk to individuals and are prohibited in the EU. The list covers the following uses of AI systems:

  • Cognitive behavioral manipulation
  • Exploitation of a person’s situation, weakness or vulnerability
  • Social credit systems
  • Predictive policing in relation to individual persons
  • Scraping of facial images to create a facial recognition database
  • Emotion recognition in the workplace or educational institutions
  • Biometric categorization based on sensitive data as defined in Article 9 GDPR
  • Use of real-time remote biometric identification systems (facial recognition) in publicly accessible spaces for law enforcement purposes, except where strictly necessary for the offenses specified in the law.

The EU Commission will publish guidelines in which the prohibited practices will be defined in more detail. Additionally, it has the task of reviewing the list. The AI Office is involved in this process.