High-Risk Categories (Annex III)
Annex III of the EU AI Act lists specific use cases where AI systems are considered "high-risk" and subject to extensive compliance requirements.
Category 1: Biometrics
Scope: AI systems used for remote biometric identification, biometric categorisation, or emotion recognition.
Covered uses:
Remote biometric identification (real-time and post)
Biometric categorisation based on sensitive or protected attributes (where not prohibited)
Emotion recognition (outside the prohibited workplace/education contexts)
Note: biometric verification whose sole purpose is to confirm that a person is who they claim to be is excluded from this category.
Examples:
Airport facial recognition gates
Age verification systems
Access control biometrics
Customer identification in banking
Category 2: Critical Infrastructure
Scope: AI used as a safety component in the management and operation of critical infrastructure.
Covered uses:
Energy grid management
Water supply control
Digital infrastructure
Road traffic management
Examples:
Smart grid load balancing
SCADA system AI components
Network traffic optimization
Traffic light AI control
Category 3: Education & Vocational Training
Scope: AI affecting educational access, assessment, or opportunities.
Covered uses:
Admissions decisions
Student assessment/grading
Learning outcome evaluation
Proctoring and cheating detection
Educational resource access
Examples:
University admission scoring
Automated essay grading
Exam proctoring systems
Learning management AI
Category 4: Employment & Worker Management
Scope: AI systems affecting employment decisions and worker management.
Covered uses:
Recruitment and screening
Job advertising targeting
CV/application filtering
Interview assessment
Performance evaluation
Task allocation
Promotion/termination decisions
Workplace monitoring
Examples:
ATS candidate ranking
Video interview analysis
Performance management AI
Shift scheduling systems
Productivity monitoring
Category 5: Essential Private & Public Services
Scope: AI affecting access to essential services and benefits.
Covered uses:
Creditworthiness assessment
Credit scoring
Risk assessment for insurance (life/health)
Emergency service dispatch
Public benefit eligibility
Social service allocation
Healthcare access prioritization
Examples:
Bank loan decisioning
Insurance premium calculation
Emergency 911/112 triage
Benefit fraud detection
Hospital bed allocation
Category 6: Law Enforcement
Scope: AI supporting law enforcement activities.
Covered uses:
Evidence reliability assessment
Risk assessment of offending, re-offending, or becoming a crime victim
Profiling during detection, investigation, or prosecution
Polygraphs and similar lie-detection tools
Crime pattern prediction
Deep fake detection in evidence
Examples:
Forensic evidence analysis
Risk assessment tools for courts
Investigative AI assistants
Predictive analytics (location-based)
Category 7: Migration, Asylum & Border Control
Scope: AI in immigration and border management contexts.
Covered uses:
Polygraph/similar tools
Security and health risk assessment of individuals
Visa/asylum application processing
Document verification
Petition/complaint examination
Irregular migration risk assessment
Examples:
Visa decision support
Asylum case prioritization
Border crossing risk assessment
Document fraud detection
Category 8: Administration of Justice & Democratic Processes
Scope: AI assisting judicial and democratic institutions.
Covered uses:
Researching and interpreting facts and the law (judicial use)
Applying the law to a concrete set of facts
Alternative dispute resolution
Influencing the outcome of an election or referendum
Influencing voting behaviour (tools whose output voters are not directly exposed to, such as campaign logistics, are excluded)
Examples:
Legal research AI
Sentencing recommendation systems
Contract analysis tools
Voter registration assistance
Category 9: Safety Components of Regulated Products (Article 6(1) / Annex I)
Scope: AI used as a safety component of a product (or that is itself a product) covered by the EU harmonisation legislation listed in Annex I and subject to third-party conformity assessment. Strictly, this route to high-risk classification comes from Article 6(1) rather than Annex III, but it triggers the same obligations.
Covered sectors:
Medical devices
Motor vehicles
Aviation
Marine equipment
Toys
Machinery
Lifts
Personal protective equipment
Examples:
ADAS/autonomous driving
Medical diagnostic AI
Industrial robot safety
Drone flight control
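Taken together, these nine areas can be encoded as a simple lookup structure for an internal pre-screening questionnaire. The sketch below is a hypothetical Python illustration: the category labels mirror this section, while the keyword hints and the function name screen_use_case are assumptions of the sketch, not anything defined by the Act, and keyword matching is no substitute for legal review.

```python
# Hypothetical, simplified encoding of the high-risk areas above for an
# internal pre-screening questionnaire. Labels and keyword hints are
# illustrative assumptions, not an official taxonomy.
ANNEX_III_CATEGORIES = {
    1: "Biometrics",
    2: "Critical infrastructure",
    3: "Education & vocational training",
    4: "Employment & worker management",
    5: "Essential private & public services",
    6: "Law enforcement",
    7: "Migration, asylum & border control",
    8: "Administration of justice & democratic processes",
    9: "Safety components of regulated products (Art. 6(1)/Annex I route)",
}

# Illustrative keyword hints for a few categories; real screening would rely
# on legal review, not string matching.
KEYWORD_HINTS = {
    1: ["biometric", "facial recognition", "emotion recognition"],
    4: ["recruitment", "cv screening", "performance evaluation"],
    5: ["credit scoring", "insurance risk", "benefit eligibility"],
}


def screen_use_case(description: str) -> list[str]:
    """Return the category labels whose keyword hints appear in the description."""
    text = description.lower()
    return [
        ANNEX_III_CATEGORIES[cat_id]
        for cat_id, keywords in KEYWORD_HINTS.items()
        if any(kw in text for kw in keywords)
    ]


if __name__ == "__main__":
    print(screen_use_case("CV screening and candidate ranking for recruitment"))
    # -> ['Employment & worker management']
```

In practice any hit, and any borderline case, would be routed to legal review rather than decided by string matching.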
Implications of High-Risk Classification
If your AI system matches any Annex III category, you become subject to:
For Deployers (Article 26):
Use according to provider instructions
Assign competent human oversight
Ensure input data quality
Monitor operation
Keep logs ≥ 6 months
Report incidents
Conduct a fundamental rights impact assessment (FRIA) where applicable (Article 27)
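Of these deployer duties, the log-retention requirement is the most mechanical to check. The Python sketch below assumes a six-month window approximated as 183 days and a hypothetical helper retention_compliant; it is an illustration of the idea, not official tooling.

```python
from datetime import datetime, timedelta

# Minimal sketch of the log-retention duty: logs kept for at least six months
# (unless other EU or national law sets a different period). Approximating six
# months as 183 days is an assumption of this sketch.
MIN_RETENTION = timedelta(days=183)


def retention_compliant(oldest_log_kept: datetime, purge_date: datetime) -> bool:
    """True if the retained log window spans at least the minimum period."""
    return (purge_date - oldest_log_kept) >= MIN_RETENTION


if __name__ == "__main__":
    kept_from = datetime(2025, 1, 1)
    purged_at = datetime(2025, 8, 1)
    print(retention_compliant(kept_from, purged_at))  # True: 212 days >= 183
```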
For Providers (Chapter III, Section 2):
Risk management system
Data governance
Technical documentation
Logging capability
Transparency & information for deployers
Human oversight design
Accuracy, robustness & cybersecurity
Conformity assessment
Registration in EU database
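As a rough way to track progress against these provider requirements ahead of a conformity assessment, they can be held as a plain checklist. The Python sketch below mirrors the list above; the PROVIDER_REQUIREMENTS constant and the outstanding helper are names introduced for this illustration, not terms from the Act.

```python
# Hypothetical readiness tracker mirroring the provider requirements listed
# above. PROVIDER_REQUIREMENTS and outstanding() are illustrative names.
PROVIDER_REQUIREMENTS = [
    "Risk management system",
    "Data governance",
    "Technical documentation",
    "Logging capability",
    "Transparency & information for deployers",
    "Human oversight design",
    "Accuracy, robustness & cybersecurity",
    "Conformity assessment",
    "Registration in EU database",
]


def outstanding(completed: set[str]) -> list[str]:
    """Requirements not yet marked complete, in the order listed above."""
    return [req for req in PROVIDER_REQUIREMENTS if req not in completed]


if __name__ == "__main__":
    done = {"Risk management system", "Data governance", "Logging capability"}
    for item in outstanding(done):
        print("TODO:", item)
```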