    Classification & Risk Assessment
    8 min read · Updated 2025-01-28

    The Classification Engine Explained

    Deep dive into how Klarvo's classification engine determines risk levels and applicable obligations.

    The Classification Engine

    Klarvo's Classification Engine is the core logic that determines your AI system's risk level and applicable EU AI Act obligations.

    How Classification Works

    The engine processes your wizard answers through four sequential stages:

    Stage 1: AI System Definition → Is this even an AI system?

    Stage 2: Prohibited Screening → Any red flags under Article 5?

    Stage 3: High-Risk Screening → Annex III category match?

    Stage 4: Transparency Check → Article 50 obligations?

    Final Classification + Obligation Mapping
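The staged flow above can be sketched as a short early-exit function. A minimal sketch, assuming boolean wizard answers; the function name and answer keys are illustrative placeholders, not Klarvo's actual schema or API:

```python
def classify(answers: dict) -> str:
    """Run the four stages in order, stopping at the first decisive result.

    The answer keys below are illustrative, not Klarvo's real schema.
    """
    if not answers.get("infers_outputs"):      # Stage 1: AI system definition
        return "Out of scope"
    if answers.get("prohibited_indicators"):   # Stage 2: Article 5 screening
        return "Blocked"
    if answers.get("annex_iii_match"):         # Stage 3: high-risk screening
        return "High-Risk Candidate"
    if answers.get("article_50_triggers"):     # Stage 4: transparency check
        return "Limited Risk"
    return "Minimal Risk"
```

The ordering matters: a prohibited indicator blocks the system before any high-risk or transparency logic runs.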

    Stage 1: AI System Definition

    First, we determine if your system meets the EU AI Act definition of an "AI system":

    Key criteria:

  1. Infers outputs from inputs to achieve objectives
  2. Produces predictions, recommendations, decisions, or content
  3. Operates with some degree of autonomy
  4. Uses ML, statistical, or logic-based approaches

    Possible outcomes:

  - ✅ Likely AI System → Continue to Stage 2
  - ❌ Likely Not AI System → Out of scope (still tracked)
  - ⚠️ Needs Review → Flag for legal/compliance review
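The three outcomes can be derived from the four criteria above. A sketch, assuming each criterion is answered True, False, or None for "unsure", and assuming all four criteria must hold; the keys are hypothetical:

```python
def stage1_outcome(criteria: dict) -> str:
    """Resolve Stage 1 from the four AI-system definition criteria.

    Values are True, False, or None ("unsure"); missing keys count as
    unsure. Keys are illustrative, not Klarvo's actual schema.
    """
    keys = ("infers_outputs", "produces_outputs",
            "operates_autonomously", "ml_statistical_or_logic_based")
    values = [criteria.get(k) for k in keys]
    if any(v is None for v in values):
        return "Needs Review"           # flag for legal/compliance review
    if all(values):
        return "Likely AI System"       # continue to Stage 2
    return "Likely Not AI System"       # out of scope, still tracked
```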

    Stage 2: Prohibited Practices (Article 5)

    Critical safety check against eight prohibited AI practices:

  1. Harmful manipulation/deception
  2. Exploitation of vulnerabilities
  3. Social scoring for unrelated decisions
  4. Criminal risk prediction via profiling alone
  5. Untargeted facial recognition database scraping
  6. Workplace/education emotion inference
  7. Biometric categorisation revealing protected characteristics
  8. Real-time remote biometric ID in public spaces (law enforcement)

    Possible outcomes:

  - ✅ No indicators → Continue to Stage 3
  - ⚠️ Potential prohibited → STOP - Legal review required
  - ❓ Unsure on any → Flag for compliance review

    If any prohibited indicator is flagged, the system is classified as "Blocked" until cleared by legal review.
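A sketch of the Stage 2 screen, assuming each of the eight practices is answered True, False, or None for "unsure"; the practice keys are hypothetical identifiers, not Klarvo's schema:

```python
# Hypothetical keys for the eight Article 5 prohibited practices.
PROHIBITED = (
    "harmful_manipulation", "vulnerability_exploitation",
    "unrelated_social_scoring", "profiling_only_crime_prediction",
    "untargeted_face_scraping", "workplace_emotion_inference",
    "protected_biometric_categorisation", "realtime_remote_biometric_id",
)

def stage2_outcome(flags: dict) -> str:
    """True is a prohibited indicator; None means the answer was 'unsure'."""
    answers = [flags.get(key, False) for key in PROHIBITED]
    if any(a is True for a in answers):
        return "Blocked"                     # STOP: legal review required
    if any(a is None for a in answers):
        return "Flag for compliance review"
    return "Continue to Stage 3"
```

Note that a confirmed indicator outranks an unsure answer: one True anywhere blocks the system regardless of other responses.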

    Stage 3: High-Risk Screening (Annex III)

    Check against nine high-risk use case categories:

    Category                 | Examples
    -------------------------|------------------------------------------
    Biometrics               | Facial recognition, fingerprint matching
    Critical Infrastructure  | Energy grid control, water systems
    Education                | Exam proctoring, admissions scoring
    Employment               | CV screening, performance evaluation
    Essential Services       | Credit scoring, insurance underwriting
    Law Enforcement          | Evidence analysis, risk assessment
    Migration                | Visa processing, asylum decisions
    Justice                  | Sentencing recommendations, case triage
    Safety Components        | Medical device AI, vehicle ADAS

    Possible outcomes:

  - ✅ No matches → Continue to Stage 4
  - ⚠️ High-Risk Candidate → Deployer obligations apply
  - 🏭 Safety Component → Provider obligations may apply
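The category match can be sketched as a set intersection against the nine Annex III categories; the identifiers below are illustrative, and the rule that a safety-component match takes precedence is an assumption for the sketch:

```python
# Hypothetical identifiers for the nine Annex III categories.
ANNEX_III = frozenset({
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
    "safety_components",
})

def stage3_outcome(use_cases: set) -> str:
    """Match declared use cases against Annex III categories."""
    matches = use_cases & ANNEX_III
    if "safety_components" in matches:
        return "Safety Component"       # provider obligations may apply
    if matches:
        return "High-Risk Candidate"    # deployer obligations apply
    return "Continue to Stage 4"
```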

    Stage 4: Transparency Check (Article 50)

    Evaluate transparency disclosure requirements:

  - Direct AI interaction → Must inform unless obvious
  - Synthetic content generation → Machine-readable marking
  - Emotion recognition → Must inform affected persons
  - Deepfake generation → Must disclose artificial nature
  - Public-interest text → Disclosure unless editorial control
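Each trigger maps to a disclosure duty, so the check reduces to a feature-to-duty lookup. A sketch with hypothetical feature keys and paraphrased duty text:

```python
# Hypothetical mapping from declared system features to Article 50 duties.
ARTICLE_50 = {
    "direct_interaction": "Inform users they are interacting with AI, unless obvious",
    "synthetic_content": "Apply machine-readable marking to generated content",
    "emotion_recognition": "Inform affected persons",
    "deepfake": "Disclose the artificial nature of the content",
    "public_interest_text": "Disclose AI generation unless under editorial control",
}

def stage4_duties(features: set) -> list:
    """Return the disclosure duties triggered by the declared features."""
    return [duty for key, duty in ARTICLE_50.items() if key in features]
```

An empty result means no Article 50 obligations apply and the system can fall through to Minimal Risk.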

    Final Classification Output

    Based on all stages, systems are classified:

    Level                | Meaning                             | Obligations
    ---------------------|-------------------------------------|---------------------------
    Minimal Risk         | No specific EU AI Act requirements  | Best practices only
    Limited Risk         | Transparency obligations apply      | Article 50 disclosures
    High-Risk Candidate  | Full deployer duties                | Article 26 + controls
    Blocked              | Potential prohibited practice       | Legal review + escalation
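In code, this table is a plain lookup from level to obligation summary; the mapping below is transcribed from the table, and the function name is illustrative:

```python
# Obligation summaries per classification level, as in the table above.
OBLIGATIONS = {
    "Minimal Risk": "Best practices only",
    "Limited Risk": "Article 50 disclosures",
    "High-Risk Candidate": "Article 26 + controls",
    "Blocked": "Legal review + escalation",
}

def obligations_for(level: str) -> str:
    """Look up the obligation summary, failing loudly on unknown levels."""
    try:
        return OBLIGATIONS[level]
    except KeyError:
        raise ValueError(f"Unknown classification level: {level!r}")
```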

    Classification Confidence

    Each classification includes a confidence level:

  - High Confidence: Clear answers, unambiguous category
  - Medium Confidence: Some uncertainty, recommend review
  - Low Confidence: Multiple unsure answers, requires expert input
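One way to derive these levels is from the count of unsure answers plus a category-ambiguity flag. The thresholds below are illustrative only; Klarvo's actual cutoffs are not documented here:

```python
def confidence(unsure_count: int, ambiguous_category: bool) -> str:
    """Map uncertainty signals to a confidence level (illustrative thresholds)."""
    if unsure_count >= 2:
        return "Low"         # multiple unsure answers: requires expert input
    if unsure_count == 1 or ambiguous_category:
        return "Medium"      # some uncertainty: recommend review
    return "High"            # clear answers, unambiguous category
```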

    Audit Trail

    Every classification decision is logged with:

  - All questions asked and answers given
  - Decision path taken
  - Classification rationale
  - Reviewer (if any) and approval date
  - Version history for re-classifications
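A record holding these fields might look like the following; the class and field names are illustrative, not Klarvo's data model:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ClassificationRecord:
    """One audit-trail entry for a classification decision (illustrative)."""
    answers: dict                          # all questions asked and answers given
    decision_path: list                    # stages traversed, in order
    rationale: str                         # classification rationale
    reviewer: Optional[str] = None         # set once a human review happens
    approved_at: Optional[datetime] = None # approval date, if reviewed
    version: int = 1                       # bumped on each re-classification
```

Keeping the full answer set alongside the decision path means a later re-classification can be diffed against the original run.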