Implementation Guide

From awareness to enforcement-ready.

A 5-phase, 90-day roadmap to Article 50 compliance. Designed for GPAI System Providers and Deployers who integrate third-party AI into user-facing products.

Phase 1 · Days 1–14

Inventory every AI touchpoint.

You cannot comply with what you cannot see. Most companies under-count their AI surface by 2–3×.

Where to look

  • Marketing stack: content generation tools, image creators, copy assistants
  • Customer support: chatbots, auto-reply systems, ticket routing, sentiment analysis
  • Product backend: recommendation engines, scoring models, personalization
  • Sales ops: lead scoring, email generators, voice cloning for outreach
  • HR: CV screening, interview assistants, employee monitoring (watch Art. 5 prohibitions)
  • Media production: voiceovers, video dubbing, thumbnail generators

What to record per system

Field                Example
System name          Intercom Fin AI Bot
Vendor / model       OpenAI GPT-4 via Intercom API
Use case             Customer support tier-1 automation
Data processed       Customer name, email, conversation history
User-facing?         Yes — chat widget on all product pages
Content generated?   Yes — text replies to customers
Role classification  Deployer + GPAI System Provider
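A registry row like the one above can be modeled as a simple record. A minimal Python sketch — field names are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the Phase 1 AI systems inventory."""
    system_name: str        # "Intercom Fin AI Bot"
    vendor_model: str       # "OpenAI GPT-4 via Intercom API"
    use_case: str
    data_processed: list[str]
    user_facing: bool
    generates_content: bool
    roles: list[str]        # e.g. ["Deployer", "GPAI System Provider"]

registry = [
    AISystemRecord(
        system_name="Intercom Fin AI Bot",
        vendor_model="OpenAI GPT-4 via Intercom API",
        use_case="Customer support tier-1 automation",
        data_processed=["customer name", "email", "conversation history"],
        user_facing=True,
        generates_content=True,
        roles=["Deployer", "GPAI System Provider"],
    ),
]
```

Keeping the inventory in a structured format from day one makes the Phase 4 registry export trivial.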
Phase 2 · Days 15–28

Classify role and risk.

The same organization can be a Provider for one system and a Deployer for another. Each classification carries distinct obligations.

Role classification

  • Model Provider — you trained the foundation model (rare for SMEs)
  • GPAI System Provider — you integrate third-party models via API into user-facing product
  • Deployer — you use AI professionally to produce or distribute content or decisions
  • Importer / Distributor — you bring third-party AI systems into the EU market

Risk classification

  • Prohibited (Art. 5) — social scoring, manipulative techniques, emotion recognition in the workplace or schools. Fines up to €35M or 7% of worldwide annual turnover
  • High-risk (Annex III) — HR decisions, credit scoring, education, critical infrastructure
  • Limited-risk (Art. 50) — chatbots, generative AI, deepfakes. Transparency obligations only. Fines up to €15M or 3%
  • Minimal-risk — spam filters, recommendation ordering. No obligations.
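A first-pass triage of the four tiers above can be expressed as a lookup. This is an illustrative sketch only — real classification requires legal review of the concrete use case against Art. 5, Annex III, and Art. 50, and the category strings here are invented for the example:

```python
# Simplified triage of the four risk tiers (illustrative, not legal advice).
PROHIBITED = {"social scoring", "workplace emotion recognition", "subliminal manipulation"}
HIGH_RISK = {"hr decisions", "credit scoring", "education", "critical infrastructure"}
LIMITED_RISK = {"chatbot", "generative ai", "deepfake"}

def triage_risk(use_case: str) -> str:
    """Map a declared use-case category to its AI Act risk tier."""
    uc = use_case.lower()
    if uc in PROHIBITED:
        return "prohibited (Art. 5)"
    if uc in HIGH_RISK:
        return "high-risk (Annex III)"
    if uc in LIMITED_RISK:
        return "limited-risk (Art. 50)"
    return "minimal-risk"
```

Anything not matched falls through to minimal-risk, which is why the Phase 1 inventory must be exhaustive: an unlisted system is silently triaged to "no obligations".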
Phase 3 · Days 29–60

Implement the four disclosures.

3.1 Chatbot disclosure — Art. 50(1)

The chat widget or voice assistant must inform the user they are interacting with AI — at first interaction, not in a ToS deep-link.

  • Opening greeting identifies AI
  • Persistent label in widget header
  • Accessible via aria-label
  • Voice clones: spoken disclosure before content
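The persistent header label and accessible name can be generated from one template. A sketch with hypothetical markup and class names — adapt the wording and styling to your widget:

```python
def chat_widget_header(bot_name: str) -> str:
    """Persistent AI label for a chat widget header, exposed to
    assistive technology via aria-label (illustrative markup)."""
    return (
        f'<div class="chat-header" aria-label="You are chatting with '
        f'{bot_name}, an AI assistant, not a human">'
        f'<span class="ai-badge">AI</span> {bot_name} (AI assistant)'
        f"</div>"
    )

# Opening greeting that identifies the AI at first interaction.
OPENING_GREETING = (
    "Hi! I'm an AI assistant. I can answer most questions instantly; "
    "type 'agent' at any time to reach a human."
)
```

The greeting and the header label are independent requirements: the greeting covers first interaction, the label keeps the disclosure visible for the rest of the session.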

3.2 Content marking — Art. 50(2)

Any output generated by AI must be marked in a machine-readable format as AI-generated. No single technique suffices; layer several:

  • Metadata: C2PA Content Credentials signed with your key
  • Watermark: Invisible pixel/audio signal (SynthID, StableSignature)
  • Fingerprint: Cryptographic hash in provenance database
  • HTML signals: Structured data, ai.json, alt attributes
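The fingerprint layer is the simplest to sketch with the standard library. A minimal example — the record fields are assumptions, and this covers only the hash-in-a-provenance-database technique, not C2PA signing or invisible watermarking, which need dedicated tooling and keys:

```python
import hashlib
from datetime import datetime, timezone

def fingerprint_asset(content: bytes, generator: str) -> dict:
    """Cryptographic fingerprint for a provenance-database entry."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "ai_generated": True,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

record = fingerprint_asset(b"<generated article body>", "gpt-4")
```

Store the record at publication time; verifying an asset later is a matter of re-hashing it and looking up the digest.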

3.3 Biometric notice — Art. 50(3)

If you use camera-based emotion detection, face recognition, or behavioral categorisation — inform users before exposure.

  • Pre-flight modal before camera activation
  • GDPR lawful basis documented in privacy policy
  • Check Art. 5 prohibition on workplace/school emotion recognition first
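The order of checks above matters and can be encoded as a gate in front of camera activation. A hedged sketch — function and context names are illustrative:

```python
# Art. 5 bans emotion recognition in these contexts outright;
# no notice or consent can cure it.
PROHIBITED_CONTEXTS = {"workplace", "school"}

def may_activate_biometrics(purpose: str, context: str, notice_shown: bool) -> bool:
    """Gate camera-based biometric features: Art. 5 prohibition first,
    then the Art. 50(3) notice-before-exposure requirement."""
    if purpose == "emotion recognition" and context in PROHIBITED_CONTEXTS:
        return False
    return notice_shown
```

The prohibition check comes first by design: a pre-flight modal is irrelevant for a use that is banned outright.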

3.4 Deepfake & public-interest label — Art. 50(4)

This is a deployer obligation; it cannot be delegated to the model provider.

  • Video: persistent visual indicator + opening disclaimer
  • Audio: spoken disclaimer at start; repeated for long-form
  • Images: visible badge ("cr" Content Credentials icon)
  • Text on public-interest matters: disclosure required unless editorial human review is documented
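The modality-specific methods above can be kept as a small dispatch table so publishing pipelines fail loudly on an unhandled modality. A hypothetical helper restating the list in code:

```python
# Modality-specific labeling methods for Art. 50(4), per the list above.
DEEPFAKE_LABELS = {
    "video": ["persistent visual indicator", "opening disclaimer"],
    "audio": ["spoken disclaimer at start", "repeated for long-form"],
    "image": ["visible 'cr' Content Credentials badge"],
    "text":  ["disclosure, unless documented editorial human review"],
}

def required_labels(modality: str) -> list[str]:
    """Return the labeling methods for a modality; reject unknown ones."""
    if modality not in DEEPFAKE_LABELS:
        raise ValueError(f"unknown modality: {modality}")
    return DEEPFAKE_LABELS[modality]
```

Raising on unknown modalities is the point: a new output type added to the product should break the pipeline until someone decides how it gets labeled.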
Phase 4 · Days 61–75

Document everything.

Under Art. 99(5), supplying incorrect or misleading information to authorities triggers its own penalty of up to €7.5M or 1% of turnover. Documentation is protection.

Required records

  • AI systems registry (equivalent to GDPR RoPA)
  • Technical specifications of marking implementations (logs, test reports)
  • Vendor contracts with marking / anti-tamper clauses
  • AI literacy training records (Art. 4 — applicable since 2 Feb 2025)
  • Internal SOP for identifying deepfakes in user-generated content
  • Incident response plan for disclosure failures
Phase 5 · Days 76–90 & ongoing

Monitor continuously.

  • Run this site's free audit monthly
  • Cross-check with AI Act Service Desk
  • Verify C2PA integrity on published assets quarterly
  • Refresh staff AI literacy training annually
  • Track Code of Practice updates (final version expected June 2026)
Quick Checklist

Pre-D-Day readiness check.

  1. Every chatbot has AI disclosure at first interaction
  2. Every AI-generated image/video/audio has C2PA + visible label
  3. No emotion recognition in workplace or schools
  4. Every deepfake is labeled with modality-specific method
  5. Every public-interest AI article has editorial review flag or disclosure
  6. AI systems registry complete and reviewed
  7. Staff completed AI literacy training
  8. Vendor contracts updated with marking clauses
  9. Privacy policy references GDPR and AI Act
  10. Incident response plan tested
Beyond the Free Tool

What a deep audit covers.

Our paid compliance engagement goes beyond client-side signals:

  • Backend AI pipeline review — every API call, every model, every output path
  • Vendor contract audit — marking clauses, anti-tamper, liability flow
  • Workflow mapping — from prompt to published output
  • Technical implementation of C2PA + invisible watermark + visible label stack
  • Integration with your existing CMS, DAM, or social publishing tools
  • Staff training delivery + completion tracking
  • Ongoing monitoring dashboard
Request Scoping Call