What the EU AI Act Requires
Why the EU AI Act changes the rules
The EU AI Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council) is the world's first comprehensive legal framework for artificial intelligence. Its provisions take effect in stages, and the key deadline, August 2, 2026, covers the majority of obligations that directly affect technical teams.
If your team develops, deploys, or uses AI systems in the EU, this regulation applies to you. It does not matter whether you are a five-person startup or a corporation with thousands of employees. What matters is the type of AI system you operate.
The EU AI Act does not only apply to companies headquartered in the EU. It applies to anyone who places AI systems on the EU market or uses the output of AI systems in the EU. If your application serves users in the EU, the regulation applies to you.
Timeline
The EU AI Act entered into force on August 1, 2024, but takes effect in stages. As a technical professional, you need to know the key milestones:
- February 2, 2025 — Ban on unacceptable AI practices (social scoring, manipulative AI, real-time remote biometric identification in public spaces, with narrow exceptions)
- August 2, 2025 — Rules for general-purpose AI models (GPT, Claude, Gemini); mainly affects model providers
- August 2, 2026 — Main deadline: requirements for high-risk AI systems, deployer obligations, transparency requirements
- August 2, 2027 — Requirements for high-risk AI systems embedded in products covered by existing EU product-safety legislation (medical devices, machinery, vehicles)
August 2, 2026 is your primary deadline. By then, your AI systems must be classified, documented, and technically compliant. Do not start in July 2026: implementation takes months.
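If it helps to keep these dates in front of the team, a few lines of Python can turn them into a countdown. This is just a planning aid; the `MILESTONES` mapping simply restates the phase-in dates listed above.

```python
# A minimal sketch for tracking the AI Act phase-in dates in your own planning.
from datetime import date

MILESTONES = {
    date(2025, 2, 2): "Prohibited AI practices ban applies",
    date(2025, 8, 2): "General-purpose AI model rules apply",
    date(2026, 8, 2): "Main deadline: high-risk, deployer, transparency rules",
    date(2027, 8, 2): "High-risk rules for AI embedded in regulated products",
}

today = date.today()
for deadline, description in sorted(MILESTONES.items()):
    days_left = (deadline - today).days
    status = f"{days_left} days left" if days_left > 0 else "already in effect"
    print(f"{deadline.isoformat()}  {description}  ({status})")
```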
Risk categories — the foundation of everything
The entire EU AI Act is built on one concept: risk categories. Every AI system falls into one of four risk levels, and all obligations flow from that classification.
Unacceptable risk (prohibited)
AI systems that are completely banned in the EU: social scoring, manipulative techniques, real-time remote biometric identification in public spaces. If you are doing any of these, stop. No compliance checklist will help you.
High risk
AI systems with significant impact on people — hiring decisions, credit scoring, medical diagnostics, safety systems. These systems have the strictest requirements: data governance, transparency, human oversight, accuracy, robustness.
Limited risk
AI systems with transparency obligations — chatbots, deepfakes, AI-generated content. Users must be informed that they are interacting with AI or that content was AI-generated.
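For limited-risk systems, the practical work is often just making the disclosure explicit in the product. A minimal sketch of one way to do that for a chatbot; the `ChatResponse` shape and the disclosure wording are illustrative, not prescribed by the Act:

```python
# Illustrative sketch of a transparency disclosure for a limited-risk chatbot.
# The Act requires that users are informed they are interacting with AI;
# the exact wording and placement are up to you.
from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

@dataclass
class ChatResponse:
    text: str
    disclosure: str = AI_DISCLOSURE  # surfaced in the UI, e.g. once per session

def reply(model_output: str) -> ChatResponse:
    return ChatResponse(text=model_output)
```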
Minimal risk
Most common AI applications — spam filters, content recommendation systems, AI coding assistants. No specific obligations under the AI Act, but general legal requirements still apply.
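Since every obligation flows from the classification, it can pay to make that classification explicit in your tooling. A hedged sketch: the four categories mirror the Act, but the example mapping only covers the use cases named above and is a planning aid, not a legal determination.

```python
# The four risk levels and the example use cases named above.
# Borderline systems need a proper assessment against the Act (Annex III).
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements: data governance, oversight, robustness"
    LIMITED = "transparency obligations"
    MINIMAL = "no AI Act-specific obligations"

EXAMPLES = {
    "social scoring": RiskCategory.UNACCEPTABLE,
    "hiring decisions": RiskCategory.HIGH,
    "credit scoring": RiskCategory.HIGH,
    "customer support chatbot": RiskCategory.LIMITED,
    "spam filter": RiskCategory.MINIMAL,
}
```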
Who it applies to — roles in the ecosystem
The EU AI Act defines three main roles with different levels of responsibility:
- Provider — develops the AI system or places it on the market. Highest level of responsibility. If you build your own ML model or AI product, you are a provider.
- Deployer — uses an AI system under their own authority. Medium level of responsibility. If you integrate the ChatGPT API into your application, you are a deployer.
- Importer/Distributor — imports or distributes AI systems in the EU. Relevant for resellers and system integrators.
Most technical teams are deployers — you use AI models (Claude, GPT, Gemini) in your applications. But if you have fine-tuned a model or built your own, the stricter provider role may apply to you.
What this means for your team today
No need to panic, but you do need to start. The first step is an audit — identify all AI systems your team develops or uses. For each one, determine the risk category and your role (provider vs. deployer). That is the foundation on which you will build your compliance strategy.
Create a list of all AI systems your team develops, deploys, or uses. For each system, write down:
1. Name and purpose of the system
2. What AI model it uses (custom, fine-tuned, third-party API)
3. Who is affected by the system's output (employees, customers, public)
4. Which risk category the system likely falls into
5. Your role (provider or deployer)
If you have more than 3 systems in the high-risk category, prioritize them by deployment date; a sketch of one way to capture this follows below.
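One way to keep this inventory machine-readable is a small record per system. Everything here is a sketch under assumptions: the `AISystem` fields mirror the checklist above, and the two example entries are invented for illustration.

```python
# A hedged sketch of an AI system inventory; field names and entries are illustrative.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"  # you built or fine-tuned the model
    DEPLOYER = "deployer"  # you use a third-party model under your own authority

@dataclass
class AISystem:
    name: str
    purpose: str
    model: str              # custom, fine-tuned, or third-party API
    affected: str           # employees, customers, public
    risk_category: str      # unacceptable / high / limited / minimal
    role: Role
    deployment_date: date

inventory = [
    AISystem("CV screener", "shortlist job applicants", "fine-tuned open model",
             "job applicants", "high", Role.PROVIDER, date(2026, 3, 1)),
    AISystem("Support bot", "answer customer questions", "third-party API",
             "customers", "limited", Role.DEPLOYER, date(2025, 11, 1)),
]

# With more than 3 high-risk systems, prioritize them by deployment date.
high_risk = sorted(
    (s for s in inventory if s.risk_category == "high"),
    key=lambda s: s.deployment_date,
)
for s in high_risk:
    print(s.name, s.deployment_date)
```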
Hint
Do not forget about 'background' AI systems — automatic recommendation algorithms, AI in internal HR processes, customer support chatbots. We often do not realize how many AI systems we already use.
Key takeaways
- The EU AI Act is the first comprehensive AI regulation; it applies to anyone offering or using AI in the EU
- The key deadline is August 2, 2026 for most obligations
- The risk category system (unacceptable, high, limited, minimal) determines your obligations
- Most technical teams are deployers — you use third-party AI models
- First step: audit all AI systems and classify them