The EU AI Act is a specific legal framework in Europe that applies governance
through a risk-based regulatory system.
Scope: Applies to providers and users (deployers) of AI systems, with the strictest obligations attaching to high-risk AI systems.
The EU Artificial Intelligence Act is the world’s first comprehensive legal framework for AI, adopted in 2024.
It follows a risk-based approach:
- Unacceptable Risk AI → Banned (e.g., social scoring by governments, manipulative AI).
- High-Risk AI → Strict rules (e.g., AI in healthcare, employment, education, policing). Requires risk assessment, transparency, human oversight, and safety checks.
- Limited Risk AI → Transparency obligations (e.g., chatbots must disclose they’re AI).
- Minimal Risk AI → Mostly unregulated (e.g., spam filters, video game AI).
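The four-tier triage above can be sketched as a simple lookup. This is an illustrative sketch only: the tier labels follow the Act, but the use-case mapping and the `classify` helper are assumptions for illustration, not an official taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strict requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

# Illustrative mapping of example use cases to tiers (assumed, not exhaustive).
USE_CASE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "cv screening for recruitment": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "video game ai": RiskTier.MINIMAL,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the risk tier for a known use case; default to MINIMAL."""
    return USE_CASE_TIERS.get(use_case.lower(), RiskTier.MINIMAL)

print(classify("CV screening for recruitment").value)  # strict requirements
```

In practice classification depends on context of use, not just the application name, so a real assessment cannot be reduced to a static table; the sketch only illustrates the tiered structure.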
Unacceptable risk
- Social scoring for public and private purposes.
- Exploitation of vulnerabilities of persons and the use of subliminal techniques.
- Real-time remote biometric identification in publicly accessible spaces by law enforcement, subject to narrow exceptions.
- Biometric categorisation of natural persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs or sexual orientation. Filtering of datasets based on biometric data in the area of law enforcement will still be possible.
- Individual predictive policing.
- Emotion recognition in the workplace and educational institutions, unless for medical or safety reasons (e.g. monitoring the fatigue levels of a pilot).
- Untargeted scraping of the internet or CCTV for facial images to build or expand databases.
High risk
- Essential private and public services (e.g. financial institutions using credit scoring models that could deny citizens the opportunity to obtain a loan).
- Employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures).
- Critical infrastructure (e.g. transport) that could put the life and health of citizens at risk.
- Educational or vocational training that may determine access to education and the professional course of someone’s life (e.g. the scoring of exams).
- Safety components of products (e.g. AI applications in robot-assisted surgery).
- Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence).
- Systems intended to make or substantially influence decisions on the eligibility of natural persons for health and life insurance.
- Migration, asylum and border control management (e.g. verification of the authenticity of travel documents).
- Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).
Limited risk
Compliance obligations are lighter, focusing on transparency: users must be informed that they are dealing with an AI system, unless this is obvious at face value.
Examples of uses are as follows:
- The user should be aware that they are interacting with a chatbot.
- Deep-fakes must be disclosed so that they can easily be identified as such.
Minimal risk
AI systems not falling into the three categories mentioned above are not subject to compliance obligations under the EU AI Act. The primary focus for technology providers will be the high-risk and limited-risk categories. All other AI systems can be developed and used subject to existing legislation without any additional legal obligations. An example of a minimal-risk system is:
- Use of AI within video games.
Other risks that must be considered include:
- Specific transparency risk: Specific transparency requirements are imposed for certain AI systems— for example, where there is a clear risk of manipulation (such as with the use of chatbots). Users should be aware that they are interacting with a chatbot.
- Systemic risks: Systemic risks that could arise from general-purpose AI models, including large GenAI models. These can be used for a variety of tasks and are becoming the basis for many AI systems in the EU.
Article 5 – Prohibited AI Practices (EU AI Act)
- Manipulative or Deceptive Techniques: AI systems that use subliminal or manipulative methods to distort behavior and prevent informed decisions, causing significant harm.
- Exploitation of Vulnerabilities: AI systems that exploit weaknesses (age, disability, social or economic situation) to manipulate behavior, causing significant harm.
- Social Scoring: AI systems that classify or evaluate people based on social behavior or personal traits, leading to unjustified unfavorable treatment.
- Criminal Risk Profiling: AI systems predicting the likelihood of someone committing a crime solely based on profiling or personality traits, without factual evidence.
- Untargeted Facial Recognition Data Scraping: AI systems creating facial recognition databases by scraping images from the internet or CCTV without consent.
- Emotion Recognition in Sensitive Contexts: AI systems detecting emotions in workplaces or schools, except for medical or safety purposes.
- Biometric Categorization: AI systems categorizing people based on biometric data to infer sensitive traits (race, political views, sexual orientation), except in certain law enforcement contexts.
- Real-Time Remote Biometric Identification by Law Enforcement: AI systems used in public spaces for real-time biometric identification, unless strictly necessary (e.g., finding missing persons or preventing threats).
Article 6 – High-Risk AI Systems
AI systems deemed high-risk are subject to strict requirements.
Annex I: Lists the Union harmonisation legislation for regulated products (e.g. machinery, medical devices); AI used as a safety component of such products is classified as high-risk.
Annex III: Lists standalone high-risk AI use cases (critical infrastructure, education, employment, law enforcement), which require conformity assessment before market deployment.
Requirements: risk management, data governance, transparency, human oversight, and cybersecurity.
Timeline
1 August 2024: Entry into Force: AI Act enters into force; transitional periods begin.
2 February 2025: Prohibited Practices Apply: Bans on certain AI practices and the requirement for AI literacy in organisations come into effect.
2 August 2025: General-Purpose AI (GPAI) Obligations: Transparency and documentation requirements for GPAI models like ChatGPT become mandatory.
2 August 2026: High-Risk AI Systems Compliance: Full compliance obligations for high-risk AI systems, including documentation, risk assessments, and oversight.
2 August 2027: Sector-Specific Rules Apply: Additional requirements for AI systems used in specific sectors,
such as healthcare and transport, come into force.
2 August 2028: Evaluation and Reporting: European Commission evaluates the AI Office, energy-efficient AI
standards, and voluntary codes of conduct.
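The phased timeline above can be expressed as a small date lookup that answers "which obligations are already in effect on a given date?". The milestone labels are condensed from the list above; the day-precision dates and the `obligations_in_effect` helper are assumptions for illustration (the text gives only month and year for the earliest milestones).

```python
from datetime import date

# Milestones from the AI Act's transitional timeline (day precision assumed).
MILESTONES = [
    (date(2024, 8, 1), "Act in force; transitional periods begin"),
    (date(2025, 2, 2), "Prohibited practices banned; AI literacy required"),
    (date(2025, 8, 2), "GPAI transparency and documentation obligations"),
    (date(2026, 8, 2), "Full high-risk AI system compliance"),
    (date(2027, 8, 2), "Sector-specific rules (e.g. healthcare, transport)"),
    (date(2028, 8, 2), "Commission evaluation and reporting"),
]

def obligations_in_effect(on: date) -> list[str]:
    """Return every milestone whose start date has passed on the given date."""
    return [label for start, label in MILESTONES if start <= on]

for label in obligations_in_effect(date(2026, 1, 1)):
    print(label)
```

For example, on 1 January 2026 the entry-into-force, prohibited-practices, and GPAI milestones have passed, while full high-risk compliance has not yet begun.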

