EU AI Act Risk Classification Framework
The EU AI Act Risk Classification Framework is a cornerstone of the European Union's regulatory approach to artificial intelligence, establishing a tiered system that categorizes AI systems based on the level of risk they pose to health, safety, and fundamental rights.

**1. Unacceptable Risk (Banned):** AI systems deemed to pose a clear threat to people's safety, livelihoods, or rights are prohibited entirely. Examples include social scoring systems, real-time remote biometric identification in public spaces (with limited exceptions), manipulative AI that exploits the vulnerabilities of specific groups, and systems that use subliminal techniques to distort behavior.

**2. High Risk:** These AI systems are permitted but subject to strict regulatory requirements before market placement. They include AI used in critical infrastructure, education, employment, essential services, law enforcement, migration management, and the administration of justice. High-risk systems must comply with requirements covering risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity. Conformity assessments and registration in an EU database are mandatory.

**3. Limited Risk:** These systems carry specific transparency obligations: users must be informed they are interacting with AI. This category includes chatbots, deepfake generators, and emotion recognition systems. The key requirement is ensuring people know AI is being used so they can make informed decisions.

**4. Minimal or No Risk:** The vast majority of AI systems fall here, such as AI-enabled video games, spam filters, and inventory management systems. These are largely unregulated under the Act, though voluntary codes of conduct are encouraged.

Additionally, the Act introduces specific provisions for **General-Purpose AI (GPAI) models**, requiring transparency obligations and additional requirements for models posing systemic risks. This risk-based framework enables proportionate regulation: imposing stricter controls where risks are greatest while fostering innovation where risks are minimal. Organizations must assess where their AI systems fall within this classification to ensure compliance and appropriate governance measures.
EU AI Act Risk Classification Framework: A Comprehensive Guide
Why Is the EU AI Act Risk Classification Framework Important?
The EU AI Act represents the world's first comprehensive legal framework specifically designed to regulate artificial intelligence. At its core lies the risk-based classification system, which determines how AI systems are regulated based on the level of risk they pose to health, safety, and fundamental rights. Understanding this framework is essential for AI governance professionals because:
• It sets a global precedent that other jurisdictions are likely to follow or reference.
• It directly impacts organizations developing, deploying, or using AI systems within the EU, as well as providers outside the EU whose systems or outputs are used on the EU market.
• It establishes differentiated obligations, meaning non-compliance at higher risk levels carries severe penalties (up to €35 million or 7% of global annual turnover, whichever is higher).
• It is a cornerstone topic in the AIGP (AI Governance Professional) certification exam.
What Is the EU AI Act Risk Classification Framework?
The EU AI Act classifies AI systems into four tiers of risk, each carrying different regulatory obligations. The philosophy is simple: the higher the risk, the stricter the requirements. This proportionate approach ensures innovation is not stifled for low-risk applications while providing robust protections where AI poses significant dangers.
The four risk categories are:
1. Unacceptable Risk (Prohibited AI Practices)
These AI systems are banned outright because they are considered a clear threat to fundamental rights and safety. Examples include:
• Social scoring — AI systems that evaluate or classify individuals or groups based on social behavior or personal characteristics, leading to detrimental or unfavorable treatment that is unjustified, disproportionate, or unrelated to the context in which the data was generated. Note that the final text of the Act covers social scoring by both public and private actors, not only governments.
• Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions for specific serious crimes, missing children, or imminent terrorist threats).
• Manipulation techniques — AI systems that deploy subliminal, manipulative, or deceptive techniques to materially distort behavior, causing significant harm.
• Exploitation of vulnerabilities — AI that exploits vulnerabilities of specific groups due to age, disability, or socio-economic circumstances.
• Biometric categorization systems that categorize individuals based on sensitive attributes (e.g., race, political opinions, sexual orientation) — with limited exceptions.
• Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.
• Emotion recognition in workplaces and educational institutions (with limited exceptions for safety or medical purposes).
• Predictive policing based solely on profiling or personality traits (without objective, verifiable facts).
2. High Risk
These AI systems are permitted but heavily regulated. They are subject to strict obligations before being placed on the market and throughout their lifecycle. High-risk AI systems fall into two main categories:
Category A: AI systems used as safety components of products (or that are themselves products) covered by EU harmonization legislation listed in Annex I — such as machinery, toys, medical devices, vehicles, aviation systems, elevators, and equipment used in potentially explosive atmospheres. These require third-party conformity assessment.
Category B: AI systems in specific areas listed in Annex III, including:
• Biometrics (remote biometric identification, biometric categorization based on sensitive attributes, and emotion recognition, to the extent these uses are not prohibited outright).
• Management and operation of critical infrastructure (e.g., road traffic, water, gas, heating, electricity supply, digital infrastructure).
• Education and vocational training (e.g., determining access to education, evaluating learning outcomes, assessing appropriate level of education, monitoring prohibited behavior during exams).
• Employment, worker management, and access to self-employment (e.g., recruitment, CV screening, promotion decisions, contract termination, task allocation based on behavior or traits, monitoring and evaluation).
• Access to and enjoyment of essential private and public services and benefits (e.g., creditworthiness assessment, risk assessment for life and health insurance, evaluation of eligibility for public benefits, emergency service dispatch).
• Law enforcement (e.g., risk assessment for potential victims, polygraphs, evidence reliability evaluation, crime prediction relating to individuals, profiling during criminal investigations).
• Migration, asylum, and border control management (e.g., risk assessment for irregular migration, examination of visa and asylum applications, polygraphs).
• Administration of justice and democratic processes (e.g., AI used to assist judicial authorities in researching and interpreting facts and law, AI used to influence election outcomes or voting behavior — excluding organizational tools).
Requirements for high-risk AI systems include:
• Establishment of a risk management system (ongoing, iterative process throughout the AI system lifecycle).
• Data governance — ensuring training, validation, and testing data sets are relevant, sufficiently representative, and, to the best extent possible, free of errors and complete.
• Technical documentation prepared before the system is placed on the market.
• Record-keeping — automatic logging of events (logs) to ensure traceability.
• Transparency and provision of information to deployers (users).
• Human oversight — design that enables effective oversight by natural persons.
• Accuracy, robustness, and cybersecurity — appropriate levels throughout the lifecycle.
• Quality management system — covering all of the above in a systematic manner.
• Conformity assessment — before placing on the market (self-assessment for most Annex III systems; third-party assessment for biometric identification and certain Annex I products).
• EU declaration of conformity and CE marking.
• Registration in the EU database for high-risk AI systems.
• Post-market monitoring system and serious incident reporting.
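Because these obligations function as a gating checklist before market placement, it can help to model them as data. Below is a minimal Python sketch, assuming hypothetical control names (none of these identifiers come from the Act itself), that reports which requirements remain outstanding:

```python
# Hypothetical pre-market checklist for a high-risk AI system; each label
# mirrors one requirement from the list above (illustrative, not statutory).
HIGH_RISK_CONTROLS = [
    "risk_management_system",
    "data_governance",
    "technical_documentation",
    "event_logging",
    "transparency_to_deployers",
    "human_oversight",
    "accuracy_robustness_cybersecurity",
    "quality_management_system",
    "conformity_assessment",
    "declaration_of_conformity_and_ce_marking",
    "eu_database_registration",
    "post_market_monitoring",
]

def compliance_gaps(completed: set[str]) -> list[str]:
    """Return the controls still outstanding before market placement."""
    return [c for c in HIGH_RISK_CONTROLS if c not in completed]

# Example: a provider that has only finished two controls so far.
print(compliance_gaps({"risk_management_system", "data_governance"}))
```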
3. Limited Risk (Transparency Obligations)
These AI systems pose risks that stem mainly from a lack of transparency, so they are subject to specific disclosure obligations. They include:
• AI systems that interact with natural persons (e.g., chatbots) — users must be informed they are interacting with an AI system.
• Emotion recognition systems and biometric categorization systems (where not prohibited or classified as high-risk) — individuals must be informed of the system's operation.
• AI systems that generate or manipulate content (deepfakes) — it must be disclosed that the content has been artificially generated or manipulated. This also applies to AI-generated text published to inform the public on matters of public interest, which must be disclosed as artificially generated.
• General-purpose AI (GPAI) models are treated separately (see below) but carry their own transparency requirements, including technical documentation, compliance with EU copyright law, and publishing a sufficiently detailed summary of training content.
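These disclosure duties amount to a simple lookup from system type to required notice. The sketch below uses hypothetical category labels that paraphrase, rather than quote, the Act's transparency obligations:

```python
# Hypothetical labels mapping limited-risk system types to the disclosure
# each one triggers (a paraphrase of the Act's transparency obligations).
TRANSPARENCY_DUTIES = {
    "chatbot": "Inform users they are interacting with an AI system.",
    "emotion_recognition": "Inform individuals the system is being used on them.",
    "biometric_categorization": "Inform individuals the system is being used on them.",
    "deepfake": "Disclose that the content is artificially generated or manipulated.",
    "public_interest_text": "Label the published text as AI-generated.",
}

def required_disclosure(system_type: str) -> str:
    """Look up the transparency duty, defaulting to none for other systems."""
    return TRANSPARENCY_DUTIES.get(system_type, "No specific transparency duty.")

print(required_disclosure("chatbot"))
# Inform users they are interacting with an AI system.
```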
4. Minimal or No Risk
The vast majority of AI systems fall into this category. They are freely usable with no specific regulatory obligations under the EU AI Act. Examples include AI-enabled video games, spam filters, and inventory management systems. The EU AI Act encourages (but does not mandate) voluntary adoption of codes of conduct for these systems.
How the Framework Works in Practice
The classification process works as follows:
Step 1: Determine if the AI system falls under a prohibited practice. If yes, the system cannot be developed or deployed in the EU.
Step 2: If not prohibited, determine if the system is high-risk. Check if it is a safety component of a product under Annex I legislation, or if it falls into one of the Annex III use cases. Note: Even if listed in Annex III, a system is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights — for example, if the AI performs a narrow procedural task, improves the result of a previously completed human activity, detects decision-making patterns without replacing or influencing human assessment, or performs a preparatory task to an assessment. However, this exception does not apply if the AI system performs profiling of natural persons.
Step 3: If not high-risk, determine if transparency obligations apply. If the system interacts with humans, generates deepfakes, performs emotion recognition, or is a GPAI model, specific transparency rules apply.
Step 4: If none of the above apply, the system is minimal risk and no mandatory requirements apply under the Act.
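To make the four-step triage concrete, here is a minimal Python sketch. The AISystem flags are hypothetical inputs a governance team would first establish through legal analysis; the code only captures the ordering logic of Steps 1 through 4, including the Annex III exception and its profiling carve-out:

```python
from dataclasses import dataclass
from enum import Enum, auto

class RiskTier(Enum):
    UNACCEPTABLE = auto()  # Step 1: prohibited practice
    HIGH = auto()          # Step 2: Annex I safety component or Annex III use case
    LIMITED = auto()       # Step 3: transparency obligations only
    MINIMAL = auto()       # Step 4: no mandatory obligations under the Act

@dataclass
class AISystem:
    # All flags are hypothetical inputs, established by prior legal analysis.
    prohibited_practice: bool = False         # e.g. social scoring, manipulation
    annex_i_safety_component: bool = False    # e.g. medical device, vehicle
    annex_iii_use_case: bool = False          # e.g. hiring, credit scoring
    narrow_or_preparatory_task: bool = False  # the Annex III exception
    profiles_natural_persons: bool = False    # exception never applies to profiling
    transparency_trigger: bool = False        # chatbot, deepfake, emotion recognition

def classify(s: AISystem) -> RiskTier:
    """Work through the tiers top-down, mirroring Steps 1-4 above."""
    if s.prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if s.annex_i_safety_component:
        return RiskTier.HIGH
    if s.annex_iii_use_case:
        # An Annex III system escapes the high-risk tier only if it performs a
        # narrow procedural/preparatory task AND does not profile natural persons.
        if s.profiles_natural_persons or not s.narrow_or_preparatory_task:
            return RiskTier.HIGH
    if s.transparency_trigger:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a CV-screening tool (Annex III, employment) that profiles candidates.
print(classify(AISystem(annex_iii_use_case=True, profiles_natural_persons=True)))
# RiskTier.HIGH
```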
Key Actors and Their Obligations:
• Providers (developers) — Bear the primary responsibility for conformity assessment, technical documentation, CE marking, post-market monitoring, and registration.
• Deployers (users of AI systems in a professional capacity) — Must use systems according to instructions, ensure human oversight, monitor operation, conduct fundamental rights impact assessments (for certain deployers of high-risk systems), and inform data subjects where required.
• Importers and distributors — Must verify conformity and documentation before placing systems on the market.
• Authorized representatives — Act on behalf of providers established outside the EU.
General-Purpose AI (GPAI) Models — A Special Category
GPAI models (e.g., foundation models, large language models) are regulated separately with a tiered approach:
• All GPAI model providers must comply with transparency requirements (technical documentation, copyright compliance, training data summaries).
• GPAI models with systemic risk (determined by cumulative training compute exceeding 10^25 FLOPs, or by Commission designation) face additional obligations including model evaluation, adversarial testing, cybersecurity protections, serious incident reporting, and energy consumption tracking.
• GPAI models released under free and open-source licenses benefit from some exemptions, unless they pose systemic risk.
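The systemic-risk trigger above reduces to a simple threshold test, sketched below (the function and parameter names are illustrative, not from the Act):

```python
SYSTEMIC_RISK_FLOPS = 1e25  # cumulative training-compute threshold in the Act

def has_systemic_risk(training_flops: float, designated: bool = False) -> bool:
    """A GPAI model is presumed to pose systemic risk if its cumulative
    training compute exceeds 10^25 FLOPs, or if the Commission designates it."""
    return designated or training_flops > SYSTEMIC_RISK_FLOPS

print(has_systemic_risk(5e25))  # True: above the threshold
print(has_systemic_risk(1e24))  # False, unless designated by the Commission
```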
Timeline for Implementation:
• February 2025: Prohibitions on unacceptable risk AI practices take effect.
• August 2025: GPAI model obligations and governance structures take effect.
• August 2026: Most high-risk AI system obligations (Annex III) take effect.
• August 2027: Obligations for high-risk AI systems embedded in regulated products (Annex I) take effect.
Penalties:
• Prohibited AI practices: Up to €35 million or 7% of global annual turnover.
• High-risk AI system non-compliance: Up to €15 million or 3% of global annual turnover.
• Supplying incorrect information to authorities: Up to €7.5 million or 1% of global annual turnover.
• Reduced caps apply for SMEs and startups (the lower of the fixed amount or the turnover percentage applies).
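Since questions often hinge on the "higher of the two" rule, a short sketch of the arithmetic may help. The caps below come from the list above; the SME rule (lower of the two) follows the Act's penalty provisions:

```python
# Maximum fines per infringement type: (fixed cap in EUR, share of total
# worldwide annual turnover). The HIGHER of the two applies; for SMEs and
# startups, the LOWER applies.
PENALTY_CAPS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(infringement: str, turnover_eur: float, is_sme: bool = False) -> float:
    """Compute the applicable maximum administrative fine."""
    fixed_cap, share = PENALTY_CAPS[infringement]
    turnover_cap = share * turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A provider with EUR 2 billion turnover using a prohibited practice:
# max(35M, 7% of 2bn = 140M) = EUR 140 million.
print(f"{max_fine('prohibited_practice', 2_000_000_000):,.0f}")  # 140,000,000
```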
Exam Tips: Answering Questions on EU AI Act Risk Classification Framework
1. Master the Four-Tier Structure: Be absolutely clear on the four risk levels — Unacceptable, High, Limited (Transparency), and Minimal. Many questions will test whether you can correctly classify a given scenario. Create a mental decision tree: Is it prohibited? → Is it high-risk? → Does it have transparency obligations? → If none, it's minimal risk.
2. Know the Prohibited Practices Cold: Memorize the specific prohibited practices. Exam questions often present a scenario and ask whether the AI system is banned. Key prohibited practices include social scoring, subliminal manipulation, exploitation of vulnerabilities, untargeted facial image scraping, workplace emotion recognition, and certain predictive policing uses. Remember that real-time remote biometric identification has narrow exceptions for law enforcement.
3. Understand High-Risk Categories Through Annex III Domains: You do not need to memorize every single use case, but you should know the eight domains of Annex III: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration/border control, and administration of justice. If a question mentions AI in hiring, credit scoring, or criminal risk assessment, think high-risk.
4. Remember the High-Risk Exception Clause: A system listed in Annex III is not high-risk if it performs a narrow procedural task, improves previously completed human activity, detects decision patterns without replacing human judgment, or performs a preparatory task — unless it involves profiling. This nuance is commonly tested.
5. Distinguish Between Providers and Deployers: Providers (developers) bear the heaviest obligations. Deployers must use systems according to instructions, ensure human oversight, and in some cases conduct fundamental rights impact assessments. Questions may test whether a particular obligation falls on the provider or deployer.
6. Know Transparency Obligations for Limited-Risk Systems: Chatbots must disclose they are AI. Deepfakes must be labeled. Emotion recognition systems must notify users. These are common exam scenarios — if you see a chatbot or synthetic content scenario, the answer likely involves a transparency obligation.
7. Understand GPAI Model Rules Separately: GPAI models are not classified under the four-tier system in the same way. They have their own chapter. Know the distinction between all GPAI models (transparency obligations) and GPAI models with systemic risk (additional obligations). The 10^25 FLOPs threshold is a key fact worth remembering.
8. Remember Key Penalty Amounts: Exam questions may test penalty structures: €35M/7% for prohibited practices, €15M/3% for high-risk non-compliance, and €7.5M/1% for providing incorrect information. Remember that the percentage is always of total worldwide annual turnover, and that the higher of the two figures applies.
9. Watch for Trick Questions on Scope: The EU AI Act applies to providers placing AI on the EU market regardless of where they are established, and to deployers located in the EU. It also applies when AI output is used in the EU, even if the provider and deployer are outside the EU. It does not apply to AI used exclusively for military or defense purposes, or for purely personal non-professional use.
10. Use Process of Elimination: If an exam question describes an AI system and asks you to classify it, systematically work through the tiers from top to bottom. Start by checking if it's prohibited. Then check Annex I and Annex III for high-risk. Then check transparency obligations. If none of these apply, select minimal risk. This methodical approach prevents errors under time pressure.
11. Link to Fundamental Rights: The EU AI Act is deeply rooted in fundamental rights protection. When uncertain about classification, ask yourself: does this AI system impact fundamental rights such as dignity, non-discrimination, privacy, or access to justice? If yes, it is likely high-risk or prohibited.
12. Understand Conformity Assessment Types: Most high-risk systems under Annex III require self-assessment by the provider. However, remote biometric identification systems and certain Annex I products require third-party conformity assessment by a notified body. This distinction is frequently tested.
13. Remember the Phased Timeline: Questions may ask when specific provisions take effect. Remember the sequence: prohibitions first (February 2025), then GPAI (August 2025), then Annex III high-risk (August 2026), then Annex I high-risk (August 2027). The pattern follows risk severity — most dangerous provisions apply first.
14. Practice with Scenarios: The most effective way to prepare is to practice classifying AI systems under the framework. For each scenario, identify: the risk level, the relevant obligations, who bears responsibility (provider vs. deployer), and what penalties could apply for non-compliance. This builds the analytical skill the exam tests.