Prohibited AI Practices (Article 5)
The EU AI Act bans 8 categories of AI practices outright. These rules have applied since 2 February 2025 and carry the Act's highest penalties. Know what's banned.
These Rules Apply Now
The prohibited practice rules have applied since 2 February 2025. If you're using AI systems that fall into any of these categories, you must stop immediately. Penalties can reach €35 million or 7% of annual worldwide turnover, whichever is higher.
The 8 Prohibited Practices
Harmful Manipulation/Deception
AI systems that use subliminal, manipulative, or deceptive techniques to materially distort a person's behavior, causing or likely to cause significant harm.
Prohibited examples:
- Dark patterns designed to manipulate purchasing decisions causing financial harm
- Systems exploiting cognitive biases to influence political views harmfully
Not covered (likely OK):
- Standard marketing personalization
- Transparent recommendation systems
Exploitation of Vulnerabilities
AI systems that exploit vulnerabilities arising from age, disability, or a specific social or economic situation, causing or likely to cause significant harm.
Prohibited examples:
- Predatory lending AI targeting financially distressed individuals
- Gambling AI targeting addiction-prone users
Not covered (likely OK):
- Accessibility features for disabled users
- Age-appropriate content filtering
Social Scoring
Evaluating or classifying people based on social behavior or personal characteristics, where the resulting score leads to detrimental treatment in unrelated contexts or treatment disproportionate to the behavior.
Prohibited examples:
- Denying insurance based on social media activity
- Employment decisions based on personal lifestyle choices
Not covered (likely OK):
- Credit scoring based on financial history
- Background checks for relevant roles
Criminal Risk Profiling
Predicting the risk of a person committing a criminal offense based solely on profiling or personality traits, rather than objective facts linked to actual criminal behavior.
Prohibited examples:
- Pre-crime prediction based on demographics
- Risk scoring based purely on personality assessments
Not covered (likely OK):
- Recidivism assessment based on criminal history
- Evidence-based risk assessment in justice
Facial Recognition Database Scraping
Creating or expanding facial recognition databases through untargeted scraping of images from the internet or CCTV footage.
Prohibited examples:
- Scraping social media for facial recognition training
- Mass CCTV capture for identity databases
Not covered (likely OK):
- User-uploaded photos with consent
- Targeted law enforcement with warrants
Workplace/Education Emotion Inference
Inferring the emotions of individuals in workplace or educational settings (with limited exceptions for medical or safety reasons).
Prohibited examples:
- Monitoring employee emotions during meetings
- Assessing student engagement via emotional analysis
Not covered (likely OK):
- Medical emotion detection for safety
- Driver fatigue detection for transport safety
Sensitive Biometric Categorisation
Categorising people based on their biometric data to infer sensitive characteristics such as race, political opinions, religious beliefs, or sexual orientation.
Prohibited examples:
- Determining race or religion from facial features
- Inferring sexual orientation from biometric analysis
Not covered (likely OK):
- Age verification for content access
- Gender-neutral health diagnostics
Real-time Remote Biometric ID
Using real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow, strictly regulated exceptions).
Prohibited examples:
- Mass surveillance facial recognition in city centers
- Real-time identification at public events
Not covered (likely OK):
- Post-event investigation with authorization
- Border control with appropriate safeguards
Frequently Asked Questions
When did the prohibited practice rules take effect?
They have applied since 2 February 2025, one of the earliest compliance deadlines under the EU AI Act. Organizations should already have ceased any prohibited uses.
What are the penalties for prohibited practices?
Violations of prohibited practices face the highest penalties under the AI Act: up to €35 million or 7% of annual worldwide turnover, whichever is higher.
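The "whichever is higher" rule reduces to a one-line calculation. A minimal sketch (the figures come from the Act; the function name and interface are our own, and this is illustrative only, not legal advice):

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Maximum possible fine for a prohibited-practice violation:
    EUR 35 million or 7% of annual worldwide turnover, whichever is higher.
    Illustrative only -- the actual fine is set by the enforcing authority."""
    return max(35_000_000, 0.07 * annual_worldwide_turnover_eur)
```

For a company with €1 billion in worldwide turnover, the 7% prong (€70 million) exceeds the fixed €35 million cap, so the higher figure applies; below €500 million in turnover, the €35 million figure dominates.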
How do I know if my AI system is prohibited?
Use our prohibited practices screening tool to check your systems. If the screening flags any indicators, stop using the system immediately and seek legal advice. When in doubt, err on the side of caution.
Are there any exceptions to prohibited practices?
Some narrow exceptions exist, particularly for law enforcement uses under strict conditions. However, these are very limited and typically require judicial authorization and specific circumstances.
Screen Your AI Systems Now
Use our free screening tool to check for prohibited practices and document your compliance.
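As a rough illustration of what such a screening pass involves, the eight categories above can be treated as a self-assessment checklist: answer yes/no for each system, and any "yes" is a stop signal. A minimal sketch (category shorthand names are our own; this is not the screening tool itself and not legal advice):

```python
# Shorthand labels for the 8 prohibited categories of Article 5 (our naming).
PROHIBITED_CATEGORIES = [
    "harmful_manipulation",
    "exploitation_of_vulnerabilities",
    "social_scoring",
    "criminal_risk_profiling",
    "facial_recognition_scraping",
    "workplace_emotion_inference",
    "sensitive_biometric_categorisation",
    "realtime_remote_biometric_id",
]

def screen(answers: dict) -> list:
    """Return the prohibited categories flagged True for a system.
    Any non-empty result means: stop use and seek legal advice."""
    return [c for c in PROHIBITED_CATEGORIES if answers.get(c, False)]
```

A real screening involves legal judgment about each category's scope and exceptions; the checklist structure only documents which questions were asked and how they were answered.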