# Prohibited Practices Screening
Article 5 of the EU AI Act prohibits certain AI practices that pose unacceptable risks to fundamental rights. Klarvo screens every AI system against these eight prohibitions.
## The Eight Prohibited Practices
#### 1. Harmful Manipulation or Deception
Prohibition: AI systems that deploy subliminal, manipulative, or deceptive techniques to materially distort behaviour and cause, or be reasonably likely to cause, significant harm.
Examples:
- Subliminal audio or visual cues embedded in content to influence decisions without the user's awareness
- Deceptive conversational interfaces that steer users into choices that cause them significant harm
Screening question: Does the system use subliminal, manipulative, or deceptive techniques likely to distort behaviour and cause significant harm?
#### 2. Exploitation of Vulnerabilities
Prohibition: AI that exploits vulnerabilities of specific groups (age, disability, socio-economic situation) in ways likely to cause significant harm.
Examples:
- A voice-activated toy that encourages dangerous behaviour in children
- Targeting people in financial distress with predatory loan or gambling offers
Screening question: Does it exploit vulnerabilities (age, disability, socio-economic situation) in a way likely to cause significant harm?
#### 3. Social Scoring
Prohibition: AI systems evaluating or classifying people based on social behaviour or personality characteristics, where the resulting score leads to detrimental treatment in contexts unrelated to where the data was collected.
Examples:
- Scoring citizens' "trustworthiness" from everyday social behaviour and using it to deny access to public services
- Denying someone housing or employment based on social media conduct unrelated to that decision
Screening question: Does it perform "social scoring" of individuals for decisions in unrelated contexts?
#### 4. Criminal Risk Prediction via Profiling
Prohibition: AI predicting the risk of a person committing a criminal offence based solely on profiling or personality traits, without additional objective, verifiable evidence.
Examples:
- Predictive policing tools that rank individuals by likelihood of future offending based purely on personality profiling
Screening question: Does it assess/predict risk of committing criminal offences based solely on profiling/personality traits?
#### 5. Untargeted Facial Recognition Scraping
Prohibition: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
Examples:
- Crawling social media or websites to harvest face images for a recognition database
- Bulk-extracting faces from CCTV footage to expand an identification database
Screening question: Does it create/expand facial recognition databases via untargeted scraping (internet/CCTV)?
#### 6. Workplace/Education Emotion Inference
Prohibition: Inferring emotions of people in workplace or educational settings, except where the system is used for medical or safety reasons.
Examples:
- Webcam monitoring that scores employees' mood or frustration
- Classroom systems that gauge student attention or engagement
Screening question: Does it infer emotions of people in a workplace or educational institution (outside medical or safety use)?
#### 7. Biometric Categorisation Revealing Protected Characteristics
Prohibition: Biometric categorisation systems that infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation from biometric data.
Examples:
- Inferring sexual orientation or political opinions from facial images
- Categorising individuals by presumed religious belief from biometric data
Screening question: Does it perform biometric categorisation that could reveal sensitive/protected characteristics?
#### 8. Real-time Remote Biometric Identification (Law Enforcement)
Prohibition: Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow, strictly conditioned exceptions.
Examples:
- Live facial recognition across public CCTV networks to identify passers-by for policing
Screening question: Does it use real-time remote biometric identification in publicly accessible spaces for law enforcement purposes?
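The eight screening questions above can be modelled as a simple checklist. The sketch below is illustrative only: the `Answer` enum and `flagged` helper are hypothetical names, not Klarvo's actual API, and the question texts paraphrase the screening questions above.

```python
from enum import Enum

class Answer(Enum):
    YES = "yes"
    NO = "no"
    UNSURE = "unsure"

# Paraphrased from the eight Article 5 screening questions above.
QUESTIONS = {
    1: "Subliminal, manipulative, or deceptive techniques causing significant harm?",
    2: "Exploits vulnerabilities (age, disability, socio-economic situation)?",
    3: "Social scoring of individuals for decisions in unrelated contexts?",
    4: "Criminal-risk prediction based solely on profiling or personality traits?",
    5: "Untargeted scraping (internet/CCTV) to build facial recognition databases?",
    6: "Emotion inference in workplace or educational settings?",
    7: "Biometric categorisation revealing protected characteristics?",
    8: "Real-time remote biometric ID in public spaces for law enforcement?",
}

def flagged(answers: dict[int, Answer]) -> list[int]:
    """Prohibition numbers needing escalation: anything answered Yes or Unsure."""
    return [n for n, answer in sorted(answers.items()) if answer is not Answer.NO]
```

For example, a system answered "No" everywhere except an "Unsure" on emotion inference escalates only prohibition 6.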
## If You Flag a Prohibition

If any screening question is answered "Yes" or "Unsure":
1. Pause: do not deploy (or continue operating) the system until the flag is resolved.
2. Document the specific functionality that triggered the flag and its context of use.
3. Escalate to legal review to determine whether an exception applies.
4. If the practice is confirmed as prohibited, the system cannot be placed on the EU market and must be withdrawn or redesigned.
## False Positives

Not all flags indicate actual prohibitions. Context matters:
- Emotion inference is permitted for medical or safety purposes (e.g. driver-fatigue detection).
- Real-time remote biometric identification has narrow, strictly conditioned law-enforcement exceptions.
- Criminal-risk prediction is only prohibited when based solely on profiling or personality traits; systems supporting human assessment grounded in objective evidence fall outside the ban.

Always document the context and obtain legal sign-off for edge cases.
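One way to capture the "document context and get legal sign-off" rule is a review record that refuses to clear a flag until both conditions are met. This is a sketch; the `FlagReview` name and its fields are assumptions, not Klarvo's data model.

```python
from dataclasses import dataclass

@dataclass
class FlagReview:
    prohibition: int        # 1-8: which Article 5 prohibition was flagged
    context: str = ""       # documented context, e.g. "emotion inference for driver-safety monitoring"
    legal_signoff: bool = False

    def cleared(self) -> bool:
        # A flag may only be dismissed as a false positive once the context
        # is documented AND legal has signed off on the edge case.
        return bool(self.context.strip()) and self.legal_signoff
```

A review with no documented context, or with context but no sign-off, stays open; only both together clear the flag.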