Annex III High-Risk Categories: A Practical Guide for SMEs
Deep dive into each Annex III category with real-world examples. Understand if your AI systems qualify as high-risk and what obligations apply.
Understanding whether your AI system is "high-risk" under the EU AI Act is crucial—it determines whether you face significant compliance obligations or relatively light-touch requirements.
The High-Risk Categories
Annex III of the EU AI Act lists specific use cases that automatically qualify as high-risk. Let's break them down with practical examples.
1. Biometrics
What's covered: AI systems intended for remote biometric identification, biometric categorization of natural persons based on sensitive attributes, and emotion recognition.
Examples:
- Facial recognition for access control
- Voice recognition for identity verification
- Emotion detection in customer service calls
Not covered: Simple photo tagging in consumer apps (though transparency obligations may apply).
2. Critical Infrastructure
What's covered: AI systems intended for use as safety components in the management and operation of critical infrastructure, such as critical digital infrastructure, road traffic, or the supply of water, gas, heating, or electricity.
Examples:
- AI managing power grid distribution
- Traffic control systems
- Water treatment optimization
3. Education and Vocational Training
What's covered: AI systems that determine access or admission to education, evaluate learning outcomes, or monitor and detect prohibited behavior during tests.
Examples:
- Automated exam grading
- Admission decision support systems
- Plagiarism detection that feeds into grades or disciplinary decisions
- Proctoring software
4. Employment, Workers Management, and Access to Self-Employment
What's covered: AI systems for recruitment, work-related decisions, monitoring, and performance evaluation.
Examples:
- CV screening tools
- Interview analysis software
- Performance prediction systems
- Automated scheduling based on productivity metrics
5. Access to Essential Services
What's covered: AI for credit scoring, risk assessment and pricing in life and health insurance, emergency call triage and dispatch, and eligibility decisions for public benefits.
Examples:
- Credit decisioning algorithms
- Insurance risk assessment
- Emergency call prioritization
6. Law Enforcement
What's covered: AI used as polygraphs or similar tools, for evaluating the reliability of evidence, for assessing the risk of offending or re-offending, and for profiling in criminal investigations.
Examples:
- Predictive policing tools
- Evidence analysis systems
7. Migration, Asylum, and Border Control
What's covered: Risk assessment, document verification, and application processing.
8. Administration of Justice and Democratic Processes
What's covered: AI systems assisting judicial authorities in researching and interpreting facts and the law, or intended to influence the outcome of elections or referendums.
What This Means for You
If your AI system falls into any of these categories, you'll need to:
- Register the system in the EU database (in some cases)
- Implement risk and quality management systems
- Conduct a conformity assessment before placing the system on the market
- Ensure effective human oversight
- Maintain detailed technical documentation and logs
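As a first step, many teams simply inventory their AI use cases and flag any that touch an Annex III category for closer legal review. The sketch below shows one way to structure that screening, assuming hypothetical category names and keyword lists that paraphrase the categories above; a keyword match only flags a system for review and is not legal advice.

```python
# Illustrative first-pass Annex III screening helper.
# The category keys and keyword lists below are simplified assumptions
# based on the categories discussed in this guide, not the legal text.

ANNEX_III_KEYWORDS = {
    "biometrics": ["facial recognition", "voice identification", "emotion recognition"],
    "critical_infrastructure": ["power grid", "traffic control", "water treatment"],
    "education": ["exam grading", "admissions", "proctoring"],
    "employment": ["cv screening", "interview analysis", "performance prediction"],
    "essential_services": ["credit scoring", "insurance pricing", "emergency dispatch"],
    "law_enforcement": ["predictive policing", "evidence analysis"],
    "migration": ["border risk assessment", "document verification"],
    "justice": ["judicial decision support"],
}

def screen_use_case(description: str) -> list[str]:
    """Return the Annex III categories whose keywords appear in a use-case
    description. A hit means "review this system with counsel", not that
    the system is definitively high-risk."""
    text = description.lower()
    return [
        category
        for category, keywords in ANNEX_III_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]
```

For example, screening "Automated CV screening for recruitment" would flag the employment category, while a weather-forecasting tool would return no flags. Real classification still requires checking the system against the actual Annex III wording and the Act's exemptions.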
Need Help Classifying?
Use Klarvo's High-Risk Checker to quickly assess your AI systems against Annex III categories.
Get More Insights
Subscribe to receive the latest EU AI Act updates and compliance tips.