
# Prohibited Practices Screening

Understanding the eight prohibited AI practices under Article 5 and how Klarvo screens for them.

    Article 5 of the EU AI Act prohibits certain AI practices that pose unacceptable risks to fundamental rights. Klarvo screens every AI system against these eight prohibitions.

## The Eight Prohibited Practices

    #### 1. Harmful Manipulation or Deception

    Prohibition: AI systems that deploy subliminal, manipulative, or deceptive techniques to distort behavior and cause significant harm.

Examples:

- Dark patterns exploiting cognitive biases
- Subliminal advertising manipulation
- Deceptive chatbots designed to extract sensitive information

Screening question: Does the system use subliminal, manipulative, or deceptive techniques likely to distort behaviour and cause significant harm?


    #### 2. Exploitation of Vulnerabilities

    Prohibition: AI that exploits vulnerabilities of specific groups (age, disability, socio-economic situation) in ways likely to cause significant harm.

    Examples:

  5. Predatory lending targeting cognitive impairments
  6. Gambling systems exploiting addiction vulnerabilities
  7. Scams targeting elderly users
  8. Screening question: Does it exploit vulnerabilities (age, disability, socio-economic situation) in a way likely to cause significant harm?


    #### 3. Social Scoring

    Prohibition: AI systems evaluating or classifying people based on social behavior or personality characteristics for unrelated decisions, leading to detrimental treatment.

Examples:

- Citizen scoring affecting access to services
- General trustworthiness ratings
- Social media behavior affecting credit decisions

Screening question: Does it perform "social scoring" of individuals that leads to detrimental treatment in unrelated contexts?


    #### 4. Criminal Risk Prediction via Profiling

    Prohibition: AI predicting criminal offence risk based solely on profiling or personality traits (without additional objective evidence).

Examples:

- Predictive policing based on demographics
- Pre-crime assessment via personality analysis
- Risk scoring without behavioral indicators

Screening question: Does it assess or predict the risk of committing criminal offences based solely on profiling or personality traits?


    #### 5. Untargeted Facial Recognition Scraping

    Prohibition: Creating or expanding facial recognition databases via untargeted scraping from internet or CCTV.

Examples:

- Clearview AI-style mass scraping
- Building facial databases without consent
- Harvesting social media photos for facial recognition

Screening question: Does it create or expand facial recognition databases via untargeted scraping (internet/CCTV)?


    #### 6. Workplace/Education Emotion Inference

    Prohibition: Inferring emotions in workplace or educational settings (with specific exceptions for medical/safety purposes).

Examples:

- Employee sentiment monitoring
- Student engagement emotion detection
- Interview emotion analysis

Screening question: Does it infer the emotions of people in a workplace or educational institution?


    #### 7. Biometric Categorisation Revealing Protected Characteristics

    Prohibition: Biometric categorisation systems inferring race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation.

Examples:

- Race inference from facial analysis
- Political affiliation prediction
- Sexual orientation classification

Screening question: Does it perform biometric categorisation that could reveal sensitive/protected characteristics?


    #### 8. Real-time Remote Biometric Identification (Law Enforcement)

    Prohibition: Real-time biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions).

Examples:

- Live facial recognition surveillance
- Crowd scanning for suspects
- Public space biometric monitoring

Screening question: Does it use real-time remote biometric identification in publicly accessible spaces for law enforcement purposes?
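
Taken together, the eight questions form a simple checklist. As a minimal illustration, here is one way such a screen could be represented in code; the `ScreeningQuestion` structure and field names are assumptions for this sketch, not Klarvo's actual data model, though the Article 5(1) subpoints do follow the order above.

```python
# Hypothetical checklist representation of the Article 5 screen.
# Question texts are condensed from this article; the structure itself
# is illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class ScreeningQuestion:
    point: str      # Article 5(1) subpoint, handy for audit trails
    question: str


ARTICLE_5_SCREEN = [
    ScreeningQuestion("5(1)(a)", "Uses subliminal, manipulative, or deceptive "
                                 "techniques likely to distort behaviour and cause significant harm?"),
    ScreeningQuestion("5(1)(b)", "Exploits vulnerabilities (age, disability, "
                                 "socio-economic situation) likely to cause significant harm?"),
    ScreeningQuestion("5(1)(c)", "Performs social scoring leading to detrimental "
                                 "treatment in unrelated contexts?"),
    ScreeningQuestion("5(1)(d)", "Predicts criminal-offence risk based solely on "
                                 "profiling or personality traits?"),
    ScreeningQuestion("5(1)(e)", "Creates or expands facial recognition databases "
                                 "via untargeted scraping (internet/CCTV)?"),
    ScreeningQuestion("5(1)(f)", "Infers emotions in a workplace or educational institution?"),
    ScreeningQuestion("5(1)(g)", "Performs biometric categorisation revealing "
                                 "sensitive/protected characteristics?"),
    ScreeningQuestion("5(1)(h)", "Uses real-time remote biometric identification "
                                 "in public spaces for law enforcement?"),
]
```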


## If You Flag a Prohibition

If any screening question is answered "Yes" or "Unsure", Klarvo applies the escalation rule sketched below:

- Immediate escalation: the system is marked "Blocked"
- Task created: "Legal review — prohibited practices"
- No further classification: the flag must be cleared first
- Documentation required: an explanation of the context and safeguards
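
As a minimal sketch of that rule, assuming answers arrive as a simple mapping of question to answer (the function and record shape below are illustrative, not Klarvo's API; the status and task name mirror this article):

```python
# Escalation rule: any "yes" or "unsure" answer blocks classification.
def escalate(answers: dict[str, str]) -> dict:
    """answers maps each screening question to 'yes', 'no', or 'unsure'."""
    flagged = [q for q, a in answers.items() if a in ("yes", "unsure")]
    if not flagged:
        return {"status": "cleared", "tasks": []}
    return {
        "status": "Blocked",                               # immediate escalation
        "tasks": ["Legal review — prohibited practices"],  # task created
        "classification_halted": True,                     # must be cleared first
        "documentation_required": "context + safeguards",  # explain the flag
        "flagged_questions": flagged,                      # what triggered the block
    }
```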
## False Positives

    Not all flags indicate actual prohibitions. Context matters:

- Emotion AI for medical purposes may be exempt
- Biometric categorisation for specific security contexts may be allowed
- Law enforcement exceptions exist for serious crimes

Always document context and get legal sign-off for edge cases.
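
For those edge cases, the sign-off documentation might look something like the record below; every field name here is a hypothetical illustration of what "context + safeguards" could capture, not Klarvo's schema.

```python
# Illustrative documentation for a flagged system awaiting legal review.
exemption_review = {
    "flag": "Workplace/education emotion inference (prohibition 6)",
    "context": "Fatigue detection for commercial drivers",
    "claimed_exception": "Medical/safety purpose",
    "safeguards": ["No disciplinary use", "Workers informed", "Data minimised"],
    "legal_sign_off": None,  # the system stays Blocked until counsel signs off
}
```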