Risk Classification System
Why classification is the critical step
Everything you do on EU AI Act compliance starts and ends with risk classification. Getting it wrong cuts both ways: unnecessary costs if you classify a system as high-risk when it is not, and legal exposure if you fail to classify a system as high-risk when it is.
In this lesson, we will go through each risk level in depth with concrete examples, and you will learn to classify your own AI systems systematically.
Unacceptable risk — prohibited practices
Article 5 of the EU AI Act explicitly prohibits the following AI practices. Penalties for violations reach up to EUR 35 million or 7% of global annual turnover, whichever is higher.
- Social scoring — evaluating people based on their social behavior with negative consequences unrelated to the context of data collection
- Manipulative AI — systems using subliminal techniques to manipulate behavior in ways that cause harm
- Exploiting vulnerabilities — AI that exploits vulnerabilities due to age, disability, or a specific social or economic situation to manipulate behavior in ways that cause harm
- Biometric categorization — inferring sensitive attributes (race, political views, sexual orientation) from biometric data
- Untargeted facial scraping — collecting facial images from the internet or CCTV for facial recognition databases
- Emotion recognition — in workplaces and educational institutions (with exceptions for safety and healthcare)
- Real-time biometric identification — in public spaces for law enforcement purposes (with narrow exceptions)
Prohibited practices have been in effect since February 2025. If you operate anything on this list, you have a problem now — not in August 2026.
High risk — strictest requirements
High-risk AI systems are the core of the regulation. They fall into two categories:
Category 1: Safety components of products
AI systems that are safety components of products subject to existing EU regulation — medical devices, automotive safety, aviation systems, toys, lifts. If your AI system is part of a CE-marked product, it is likely high-risk.
Category 2: Standalone systems in sensitive areas
- Biometric identification and categorization of persons
- Management and operation of critical infrastructure (energy, water, transport)
- Education and vocational training (access to education, student assessment)
- Employment and worker management (hiring, performance evaluation, promotion)
- Access to essential services (credit scoring, insurance, social benefits)
- Law enforcement (evidence assessment, recidivism prediction, profiling)
- Migration and asylum management (application assessment, risk detection)
- Administration of justice and democratic processes (legal research, legal interpretation)
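If you keep an AI system inventory, the Annex III areas can be captured as a simple checklist. The following is a minimal sketch; the enum names are illustrative shorthand for the areas listed above, not the Act's official wording.

```python
# Sketch: Annex III sensitive areas as a checklist for an AI system inventory.
# Enum names are illustrative shorthand, not the official wording of the Act.

from enum import Enum, auto

class AnnexIIIArea(Enum):
    BIOMETRICS = auto()                 # biometric identification and categorization
    CRITICAL_INFRASTRUCTURE = auto()    # energy, water, transport
    EDUCATION = auto()                  # access to education, student assessment
    EMPLOYMENT = auto()                 # hiring, performance evaluation, promotion
    ESSENTIAL_SERVICES = auto()         # credit scoring, insurance, social benefits
    LAW_ENFORCEMENT = auto()            # evidence assessment, recidivism prediction, profiling
    MIGRATION_ASYLUM = auto()           # application assessment, risk detection
    JUSTICE_DEMOCRACY = auto()          # legal research, legal interpretation

def touches_annex_iii(areas: set[AnnexIIIArea]) -> bool:
    """A non-empty set of areas means step 3 of the decision tree below applies."""
    return bool(areas)

# Example: a CV-screening tool touches the employment area
print(touches_annex_iii({AnnexIIIArea.EMPLOYMENT}))  # True
```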
Limited risk — transparency obligations
Some AI systems are not subject to requirements as strict as those for high-risk systems, but they must meet transparency obligations:
- Chatbots — users must be informed they are communicating with AI (not a human)
- Deepfakes — AI-generated or manipulated image/audio/video content must be labeled
- AI-generated text — text published to inform the public must be labeled as AI-generated
- Emotion recognition — users must be informed that the system analyzes their emotions (where permitted)
If you have a chatbot on your website, just add a clear notice: 'You are communicating with an AI assistant.' If you generate content with AI, label it. The technical implementation is simple — the legal obligation is clear.
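As an illustration of how light the implementation can be, here is a minimal sketch that prefixes an AI disclosure to a chatbot's first reply. The function name and disclosure text are assumptions for this example, not wording prescribed by the Act.

```python
# Minimal sketch: prepend an AI disclosure to a chatbot's first reply.
# The function name and message text are illustrative assumptions,
# not wording mandated by the EU AI Act.

AI_DISCLOSURE = "You are communicating with an AI assistant."

def first_reply_with_disclosure(bot_reply: str, already_disclosed: bool = False) -> str:
    """Return the bot's reply, prefixed with the AI disclosure if not yet shown."""
    if already_disclosed:
        return bot_reply
    return f"{AI_DISCLOSURE}\n\n{bot_reply}"

print(first_reply_with_disclosure("Hi! How can I help you today?"))
```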
Minimal risk — no specific obligations
Most AI systems fall into this category. Spam filters, recommendation systems, AI coding assistants, automatic translations, AI in games. The EU AI Act places no specific obligations on them, but recommends voluntary adoption of codes of conduct.
How to classify YOUR AI system
Use the following decision tree to classify each AI system in your portfolio:
1. Is the system on the list of prohibited practices (Art. 5)?
YES → Unacceptable risk → STOP OPERATION
NO → continue
2. Is the system a safety component of a CE-marked product?
YES → High risk
NO → continue
3. Does the system fall under Annex III (sensitive areas)?
YES → High risk (with possible exception*)
NO → continue
4. Does the system interact directly with users or generate synthetic content (chatbot, deepfake, AI-generated text)?
YES → Limited risk (transparency obligations)
NO → Minimal risk
* Exception: An AI system in a sensitive area is NOT high-risk if it does not perform profiling and its output is purely assistive (a human always makes the decision).
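The decision tree can be sketched as a small classification function. This is a minimal sketch under the assumptions above: the parameter names and enum values are illustrative, and answering each question for a real system remains a legal judgment, not a coding exercise.

```python
# Sketch of the classification decision tree above.
# All parameters are questions you must answer per system;
# the code only encodes the order of the checks, not legal advice.

from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"   # Article 5 prohibited practice
    HIGH = "high"                   # CE safety component or Annex III
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations

def classify_ai_system(
    is_prohibited_practice: bool,      # 1. on the Article 5 list?
    is_ce_safety_component: bool,      # 2. safety component of a CE-marked product?
    in_annex_iii_area: bool,           # 3. falls under Annex III (sensitive areas)?
    performs_profiling: bool,          # used for the Annex III exception
    output_is_purely_assistive: bool,  # a human always makes the decision
    interacts_with_users: bool,        # 4. chatbot, deepfake, AI-generated content?
) -> RiskCategory:
    if is_prohibited_practice:
        return RiskCategory.UNACCEPTABLE          # STOP OPERATION
    if is_ce_safety_component:
        return RiskCategory.HIGH
    if in_annex_iii_area:
        # Exception: no profiling and purely assistive output is not high-risk
        if performs_profiling or not output_is_purely_assistive:
            return RiskCategory.HIGH
    if interacts_with_users:
        return RiskCategory.LIMITED
    return RiskCategory.MINIMAL

# Example: a CV-screening tool that ranks candidates (Annex III: employment)
print(classify_ai_system(False, False, True, True, False, True))  # RiskCategory.HIGH
```

Note that the sketch applies the Annex III exception only when the system performs no profiling and its output is purely assistive, mirroring the footnote above.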
Gray areas — typical dilemmas for technical teams
Some systems are not clear-cut. A content recommendation system is minimal risk. But a recommendation system that determines what loan a customer gets? High risk. The difference is the impact on the person.
When you are not sure, ask yourself: 'Can the output of this AI system significantly affect someone's life, health, finances, or fundamental rights?' If yes, you are likely in high risk territory.
Take the list of AI systems from the previous lesson. For each system, go through the decision tree:
1. Run each system through the decision tree above
2. Document the classification rationale — why you chose that category
3. Identify systems in the 'gray zone' — where you are not certain
4. For gray zone systems, write arguments for and against a higher classification
Create a table: System Name | Category | Rationale | Confidence (high/medium/low)
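A minimal sketch of that table in machine-readable form might look like the following; the field names, example entries, and CSV output are illustrative assumptions, not a prescribed register format.

```python
# Sketch: document the classification rationale for each system in your inventory.
# Field names and example entries are illustrative assumptions.

import csv
from dataclasses import dataclass, asdict

@dataclass
class ClassificationRecord:
    system_name: str
    category: str        # unacceptable / high / limited / minimal
    rationale: str       # why you chose that category
    confidence: str      # high / medium / low

records = [
    ClassificationRecord("Support chatbot", "limited",
                         "Interacts directly with users; disclosure notice required", "high"),
    ClassificationRecord("CV screening assistant", "high",
                         "Annex III: employment; ranks candidates (profiling)", "medium"),
]

with open("ai_risk_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(records[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```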
Hint
When in doubt, classify higher. It is better to meet requirements for a higher category than to be out of compliance. For systems with medium and low confidence, consider consulting a lawyer.
- Risk classification is the foundation of compliance — all obligations flow from the risk category
- Prohibited practices have been in effect since February 2025 — not 2026
- High-risk = safety components of CE products + standalone systems in sensitive areas
- Limited risk = transparency obligations (chatbots, deepfakes, AI content)
- When in doubt, classify higher — better to be overly cautious than non-compliant
In the next lesson, we dive into Technical Requirements for High-Risk AI.