Welcome to ai-act-law.eu. Here you will find the PDF of the Artificial Intelligence Act (AI Act), neatly arranged. The final text of the AI Act is available in both English and German. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence will apply from 2 August 2026, with some exceptions (cf. Art. 113 AI Act).
Quick Access
Table of contents
What is the AI Act?
The AI Act is an EU regulation governing artificial intelligence. Its purpose is to promote the uptake of human-centric AI in Europe while ensuring a high level of protection of health, safety and the fundamental rights of citizens (Article 1 AI Act). To this end, the law defines AI practices that are prohibited from the outset, as well as those that are likely to involve a “high risk” and therefore require special diligence.
When does the AI Act enter into force?
The AI Act was published in the Official Journal of the European Union on 12 July 2024 and entered into force 20 days later, on 1 August 2024. According to Article 113 of the AI Act, the provisions on prohibited AI practices apply 6 months after entry into force, while most other provisions apply after either 1 or 2 years. The requirements and obligations for certain high-risk AI systems (those falling under Article 6(1)) apply after 3 years.
What types of AI are covered in the AI Act?
The AI Act distinguishes between three types of AI based on the risk they pose to the safety and fundamental rights of individuals. These three types are:
- Prohibited AI systems (Article 5): Some AI systems are incompatible with the values of the European Union and are therefore prohibited. These include, for example, systems that predict the future criminality of individuals.
- High-risk AI systems (Article 6): High-risk AI systems must meet strict requirements before their use is permitted. This includes, for example, AI in surgical robots.
- AI systems with minimal or no risk: The AI Act does not regulate AI systems with minimal risk. This category includes, for example, spam filters or AI-enabled video games.
Who is affected by the AI Act?
The AI Act applies to private organizations and public authorities and, according to Article 2 AI Act, affects five groups:
- providers,
- deployers,
- importers and distributors,
- product manufacturers and
- affected persons.
The AI Act expressly does not apply to private individuals who use AI systems for purely private (as opposed to professional) purposes. There are further exemptions for systems used for scientific research and development, as well as for military, defence and national security purposes.
Where does the AI Act apply?
The AI Act applies in the European Union. Whether and when the AI Act will also apply to the EEA countries is still unclear, as they must agree to the adoption of EU regulations. In addition, the AI Act also applies in non-EU countries for providers or deployers of AI systems that place them on the market in the EU or if the output produced by the AI system is used in the EU.
What is high-risk AI?
According to Article 6 of the AI Act, high-risk AI is an AI system whose actual use poses a significant risk of harm to health or safety, or of an adverse effect on the fundamental rights of individuals.
On the one hand, this is the case where AI systems are used in products, or as safety components of products, that are already strictly regulated under EU law, such as safety components for lifts or medical devices. On the other hand, this is the case if they are used in a certain way in areas defined by Annex III of the regulation. These include biometrics; critical infrastructure; education and vocational training; employment and workers’ management; access to and enjoyment of essential services and benefits; law enforcement; migration, asylum and border control management; and the administration of justice and democratic processes. The EU Commission can adapt this list in order to react to new circumstances. Furthermore, it must provide a comprehensive list of practical examples of high-risk and non-high-risk use cases of AI systems no later than 18 months after the AI Act has entered into force.
What is the definition of AI and what does it include?
AI is often understood as a combination of different technical approaches and methods aimed at enabling machines to mimic human cognitive abilities such as logical reasoning, learning, planning and creativity. A universally accepted definition has not yet been established and remains subject to controversial discussions.
The AI Act also provides a definition. It defines an AI system as a machine-based system that is designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (Article 3 AI Act).
This definition of AI is very broad, as most of the criteria also apply to traditional software programs. The decisive factor in determining whether a system is considered AI is therefore usually whether it is designed for autonomous operation.
Why is AI dangerous? Is AI a threat?
AI can pose a threat to individuals by significantly influencing their behavior to their detriment. It can also be the cause of discrimination or other unjustified infringements on their fundamental rights. This can result in harm through significant adverse effects on their physical and mental health or financial interests. In Article 5 AI Act, the European legislator therefore describes practices that pose an unacceptable risk to individuals and are therefore prohibited in the EU. The list includes the following uses of AI systems:
- Cognitive behavioral manipulation
- Exploitation of a person’s situation, weakness or vulnerability
- Social credit systems
- Predictive policing in relation to individual persons
- Scraping of facial images to create a facial recognition database
- Emotion recognition in the workplace or educational institutions
- Biometric categorization based on sensitive data as defined in Article 9 GDPR
- Use of real-time remote biometric identification systems (facial recognition) in public for law enforcement purposes, except where strictly necessary for the offenses specified in the law.
The EU Commission will publish guidelines defining the prohibited practices in more detail. It is also tasked with regularly reviewing the list, a process in which the AI Office is involved.