High-Risk AI Systems (Annex III)
The EU AI Act defines eight categories of high-risk AI systems in Annex III. If your AI system falls into one of these categories, you have specific obligations as a deployer. Here's what you need to know.
Understanding High-Risk Classification
High-risk AI systems are those that pose significant risks to health, safety, or fundamental rights. The EU AI Act identifies these through two mechanisms:
Annex III Categories
Eight specific use-case areas defined in the Act: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice/democracy.
Product Safety Components
AI systems that are safety components of products already covered by EU harmonisation legislation (medical devices, machinery, vehicles, etc.).
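If you're building an AI inventory, these two routes reduce to a single first-pass check. Below is a minimal sketch in Python; `AISystem`, its field names, and the area labels are our own illustrative shorthand, not terms from the Act, and it ignores the Article 6(3) derogations that can exempt some Annex III systems.

```python
from dataclasses import dataclass

# The eight Annex III use-case areas covered in this article.
ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border",
    "justice_democracy",
}

@dataclass
class AISystem:
    name: str
    annex_iii_area: str | None       # one of ANNEX_III_AREAS, or None
    annex_i_safety_component: bool   # safety component of an Annex I product?

def is_high_risk(system: AISystem) -> bool:
    """First-pass filter only: Article 6(3) derogations and the
    provider's own classification can change the outcome."""
    if system.annex_iii_area in ANNEX_III_AREAS:
        return True
    return system.annex_i_safety_component

# e.g. a CV-screening tool falls under "employment":
# is_high_risk(AISystem("cv-screener", "employment", False)) -> True
```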
The 8 Annex III Categories
Biometrics
Remote biometric identification and biometric categorisation of natural persons
- Facial recognition for identification
- Biometric categorisation systems
- Emotion recognition (where not prohibited outright, e.g. outside workplaces and education)
Critical Infrastructure
AI as safety components in management/operation of critical infrastructure
- Energy grid management
- Water supply systems
- Transport infrastructure
- Digital infrastructure
Education & Training
AI used in education and vocational training contexts
- Student assessment and scoring
- Admissions decisions
- Exam proctoring
- Learning progress evaluation
Employment & Workers
AI in employment, worker management, and self-employment access
- CV screening and recruitment
- Interview analysis
- Performance evaluation
- Task allocation
- Workforce monitoring
Essential Services
AI affecting access to essential private and public services
- Credit scoring
- Life and health insurance pricing
- Emergency services dispatch
- Healthcare access
- Social benefits eligibility
Law Enforcement
AI used by or on behalf of law enforcement authorities
- Evidence assessment
- Offending/reoffending risk assessment
- Lie detection (polygraph)
- Criminal profiling
Migration & Border
AI in migration, asylum, and border control
- Visa application assessment
- Border patrol systems
- Asylum claim processing
- Security risk assessment
Justice & Democracy
AI in justice administration and democratic processes
- Researching and interpreting facts and law for judicial authorities
- Judicial decision support
- Systems intended to influence election or referendum outcomes or voting behaviour
Key Deployer Obligations
If you deploy high-risk AI, you must meet all of the following obligations (a minimal tracking sketch follows the list):
- Use the system according to provider instructions
- Assign human oversight to competent persons with authority to intervene
- Ensure input data is relevant and sufficiently representative (to the extent it's under your control)
- Monitor the system's operation and inform provider of risks
- Keep logs under your control for at least 6 months
- Inform workers before using AI that affects them
- Report serious incidents to providers and authorities
- Complete a FRIA if you're a public body, provide public services, or deploy certain credit or insurance systems
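To make these duties auditable, it can help to track them as a per-system checklist. A minimal sketch under the same caveats: `DeployerChecklist` and the duty labels are illustrative names of ours, and the six-month log-retention floor from the list above is approximated as 183 days.

```python
from dataclasses import dataclass, field

# Deployer duties from the list above, tracked per system.
DEPLOYER_DUTIES = [
    "use_per_provider_instructions",
    "human_oversight_assigned",
    "input_data_checked",
    "operation_monitored",
    "workers_informed",
    "serious_incident_reporting_in_place",
]

@dataclass
class DeployerChecklist:
    system_name: str
    log_retention_days: int = 0              # configured retention window
    completed: set[str] = field(default_factory=set)

    def open_duties(self) -> list[str]:
        """Duties not yet evidenced for this system."""
        return [d for d in DEPLOYER_DUTIES if d not in self.completed]

    def log_retention_ok(self) -> bool:
        """'At least 6 months' approximated here as 183 days."""
        return self.log_retention_days >= 183
```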
Frequently Asked Questions
What makes an AI system 'high-risk'?
An AI system is high-risk if it falls into one of the Annex III categories (biometrics, employment, credit, etc.) or is a safety component of a product covered by EU harmonisation legislation (e.g., medical devices, vehicles).
When do high-risk obligations apply?
Most high-risk AI obligations apply from 2 August 2026. However, for AI systems that are safety components of products already covered by EU law (Annex I), there's an extended transition until 2 August 2027.
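If you track systems programmatically, this deadline logic is a single branch. A sketch, assuming a hypothetical boolean flag for Annex I safety components:

```python
from datetime import date

def obligations_apply_from(annex_i_safety_component: bool) -> date:
    """Annex III systems: 2 August 2026.
    Safety components of Annex I products: 2 August 2027."""
    return date(2027, 8, 2) if annex_i_safety_component else date(2026, 8, 2)
```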
What are the main deployer obligations for high-risk AI?
Deployers must: use systems according to instructions, assign competent human oversight, ensure input data relevance, monitor operation, report risks and incidents, keep logs for 6+ months, and inform workers (where applicable).
Do I need a FRIA for high-risk AI?
A Fundamental Rights Impact Assessment (FRIA) is required for deployers that are public bodies or private entities providing public services, and for deployers of certain Annex III systems such as credit scoring and life/health insurance pricing. Use our FRIA tool to check your requirements.
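As a rough pre-screen before running a full assessment, the FRIA trigger can be written as a predicate. A sketch only; the parameter names are ours, and real scoping under Article 27 needs legal review.

```python
def fria_required(
    is_public_body: bool,
    provides_public_services: bool,
    deploys_credit_scoring_or_life_health_insurance: bool,
) -> bool:
    """Rough Article 27 trigger: any one condition suffices."""
    return (
        is_public_body
        or provides_public_services
        or deploys_credit_scoring_or_life_health_insurance
    )
```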
How do I know if my AI is high-risk?
Use our High-Risk Checker tool to assess your AI systems against Annex III categories. The tool asks targeted questions to determine if your use case triggers high-risk classification.
Manage High-Risk AI Compliance
Klarvo helps you classify, control, and evidence compliance for all your high-risk AI systems.