
    Transparency Obligations: When and How to Disclose AI Use

    From chatbots to deepfakes to emotion recognition—understand your transparency obligations under Article 50.

    James Robertson · January 3, 2025 · 11 min read

    Article 50 of the EU AI Act establishes transparency obligations that apply regardless of risk classification. If your AI interacts with people, generates content, or makes inferences about emotions, you likely have disclosure requirements.

    The Four Transparency Scenarios

    1. AI Interaction Disclosure

    When it applies: AI systems designed to interact directly with natural persons.

    What to do: Inform persons that they are interacting with an AI system, unless this is obvious from the circumstances.

    Example notices:

    • "You're chatting with our AI assistant"
    • "This response was generated by artificial intelligence"

    Exception: No notice is needed where it would be obvious to a reasonably well-informed person that they are interacting with AI (e.g., a clearly labeled chatbot interface).
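
    If you build the chat experience yourself, the notice can be attached at the point where the reply is returned. Below is a minimal sketch, assuming a hypothetical ChatResponse shape and a flag telling you whether the channel is already clearly labeled as AI; none of these names come from any particular framework.

```typescript
// Hypothetical sketch: attach an AI-interaction notice to chat replies.
// ChatResponse, clearlyLabelledAsAi, and the notice text are assumptions.

interface ChatResponse {
  message: string;
  aiDisclosure?: string; // human-readable notice shown alongside the reply
}

const AI_NOTICE = "You're chatting with our AI assistant.";

function withAiDisclosure(
  message: string,
  channel: { clearlyLabelledAsAi: boolean } // e.g. a branded "AI assistant" widget
): ChatResponse {
  // The notice may be omitted only where it is obvious from the context
  // that the user is talking to an AI system.
  if (channel.clearlyLabelledAsAi) {
    return { message };
  }
  return { message, aiDisclosure: AI_NOTICE };
}

// Usage: an unbranded support channel gets the explicit notice.
console.log(withAiDisclosure("How can I help?", { clearlyLabelledAsAi: false }));
```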

    2. Synthetic Content Marking

    When it applies: AI systems that generate synthetic audio, image, video, or text content.

    What to do:

    • Providers must ensure outputs are marked as artificially generated (machine-readable)
    • Deployers must disclose that content is AI-generated

    Example: AI-generated marketing images should be labeled "Created with AI"
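
    One way to cover both duties is to attach the machine-readable flag and the human-readable label at generation time. The sketch below is illustrative only: the field names are assumptions, and in practice providers often rely on provenance standards such as C2PA or IPTC metadata rather than ad-hoc keys.

```typescript
// Hypothetical sketch: pair machine-readable marking (provider duty)
// with a human-readable label (deployer duty). Field names are illustrative.

interface GeneratedAsset {
  url: string;
  metadata: Record<string, string>; // machine-readable marking
  caption?: string;                 // human-readable disclosure
}

function markAsAiGenerated(url: string, model: string): GeneratedAsset {
  return {
    url,
    metadata: {
      "ai-generated": "true",                  // machine-readable flag
      generator: model,
      "generated-at": new Date().toISOString(),
    },
    caption: "Created with AI",                // label shown next to the image
  };
}

console.log(markAsAiGenerated("https://example.com/banner.png", "image-model-v1"));
```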

    3. Emotion Recognition / Biometric Categorization

    When it applies: AI systems that perform emotion recognition or biometric categorization.

    What to do: Inform exposed persons that the system is in operation.

    Example: "This call may be analyzed to detect customer sentiment"

    4. Deepfake Disclosure

    When it applies: AI systems that generate or manipulate image, audio, or video content constituting a deepfake.

    What to do: Disclose that the content has been artificially generated or manipulated.

    Exceptions and limitations:

    • Artistic, satirical, or fictional works: disclosure is still required, but it may be made in a way that does not hamper the display or enjoyment of the work
    • Law enforcement purposes in specific circumstances
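
    In code, the limitation for artistic or satirical works can be modeled as a change in placement rather than an exemption from labeling. The sketch below is a simplified illustration with made-up type and label names.

```typescript
// Hypothetical sketch: pick the disclosure wording and placement for a deepfake.

interface DeepfakeDisclosure {
  required: boolean;
  label: string;
  placement: "prominent-overlay" | "credits-or-description";
}

function deepfakeDisclosure(isArtisticOrSatirical: boolean): DeepfakeDisclosure {
  if (isArtisticOrSatirical) {
    return {
      required: true, // still disclosed, but unobtrusively (e.g. in end credits)
      label: "Contains AI-generated or manipulated content",
      placement: "credits-or-description",
    };
  }
  return {
    required: true,
    label: "This content has been artificially generated or manipulated",
    placement: "prominent-overlay",
  };
}

console.log(deepfakeDisclosure(false));
```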

    Implementation Checklist

    For each AI system:

    • [ ] Does it interact directly with people?
    • [ ] Does it generate synthetic content?
    • [ ] Does it recognize emotions or categorize biometrics?
    • [ ] Does it create deepfakes?

    For each "yes":

    • [ ] Design appropriate disclosure
    • [ ] Implement disclosure mechanism
    • [ ] Document evidence of disclosure
    • [ ] Review accessibility of notices
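
    If you track many systems, this checklist can be captured as a small data structure so that outstanding disclosures are derived rather than remembered. The sketch below uses illustrative field and disclosure names only; adapt them to your own inventory.

```typescript
// Hypothetical sketch: a per-system transparency assessment record and a helper
// that lists which disclosures still need to be implemented.

interface TransparencyAssessment {
  systemName: string;
  interactsWithPeople: boolean;
  generatesSyntheticContent: boolean;
  emotionOrBiometricCategorization: boolean;
  createsDeepfakes: boolean;
  disclosuresImplemented: string[]; // e.g. ["chat-notice", "content-label"]
}

function outstandingDisclosures(a: TransparencyAssessment): string[] {
  const required: string[] = [];
  if (a.interactsWithPeople) required.push("chat-notice");
  if (a.generatesSyntheticContent) required.push("content-label");
  if (a.emotionOrBiometricCategorization) required.push("operation-notice");
  if (a.createsDeepfakes) required.push("deepfake-label");
  return required.filter((d) => !a.disclosuresImplemented.includes(d));
}

// Usage: a support chatbot that also drafts marketing copy.
console.log(
  outstandingDisclosures({
    systemName: "support-bot",
    interactsWithPeople: true,
    generatesSyntheticContent: true,
    emotionOrBiometricCategorization: false,
    createsDeepfakes: false,
    disclosuresImplemented: ["chat-notice"],
  })
); // -> ["content-label"]
```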

    Sample Transparency Notices

    Get our Transparency Notice Pack with ready-to-use disclosure templates for each scenario.
