From awareness to enforcement-ready.
A 5-phase, 90-day roadmap to Article 50 compliance. Designed for GPAI System Providers and Deployers who integrate third-party AI into user-facing products.
Inventory every AI touchpoint.
You cannot comply with what you cannot see. Most companies under-count their AI surface by 2–3×.
Where to look
- Marketing stack: content generation tools, image creators, copy assistants
- Customer support: chatbots, auto-reply systems, ticket routing, sentiment analysis
- Product backend: recommendation engines, scoring models, personalization
- Sales ops: lead scoring, email generators, voice cloning for outreach
- HR: CV screening, interview assistants, employee monitoring (watch Art. 5 prohibitions)
- Media production: voiceovers, video dubbing, thumbnail generators
What to record per system
| Field | Example |
|---|---|
| System name | Intercom Fin AI Bot |
| Vendor / model | OpenAI GPT-4 via Intercom API |
| Use case | Customer support tier-1 automation |
| Data processed | Customer name, email, conversation history |
| User-facing? | Yes — chat widget on all product pages |
| Content generated? | Yes — text replies to customers |
| Role classification | Deployer + GPAI System Provider |
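The fields above can be captured as one structured record per system; a minimal sketch in Python, where the field names mirror the table but are illustrative rather than mandated by the Act:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemRecord:
    """One row of the AI systems inventory (field names are illustrative)."""
    system_name: str
    vendor_model: str
    use_case: str
    data_processed: list[str]
    user_facing: bool
    generates_content: bool
    role_classification: list[str]

record = AISystemRecord(
    system_name="Intercom Fin AI Bot",
    vendor_model="OpenAI GPT-4 via Intercom API",
    use_case="Customer support tier-1 automation",
    data_processed=["customer name", "email", "conversation history"],
    user_facing=True,
    generates_content=True,
    role_classification=["Deployer", "GPAI System Provider"],
)
print(json.dumps(asdict(record), indent=2))  # export for the registry
```

Keeping the registry as structured data (rather than a wiki page) makes the monthly re-audit in Phase 5 scriptable.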
Classify role and risk.
The same organization can be a Provider for one system and a Deployer for another; each classification carries distinct obligations.
Role classification
- Model Provider — you trained the foundation model (rare for SMEs)
- GPAI System Provider — you integrate third-party models via API into a user-facing product
- Deployer — you use AI professionally to produce or distribute content or decisions
- Importer / Distributor — you bring third-party AI systems into the EU market
Risk classification
- Prohibited (Art. 5) — social scoring, manipulation, emotion recognition at workplace/school. €35M / 7%
- High-risk (Annex III) — HR decisions, credit scoring, education, critical infrastructure
- Limited-risk (Art. 50) — chatbots, generative AI, deepfakes. Transparency only. €15M / 3%
- Minimal-risk — spam filters, recommendation ordering. No obligations.
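During triage it helps to encode the four tiers as a lookup. A sketch using only the articles and penalty figures listed above; the key names and the helper are assumptions, and the high-risk penalty is left unset because it is not covered by this guide:

```python
# Risk tiers from the checklist above; None = not specified in this guide.
RISK_TIERS = {
    "prohibited": {"article": "Art. 5",    "max_fine_eur": 35_000_000, "max_fine_pct": 7.0},
    "high":       {"article": "Annex III", "max_fine_eur": None,       "max_fine_pct": None},
    "limited":    {"article": "Art. 50",   "max_fine_eur": 15_000_000, "max_fine_pct": 3.0},
    "minimal":    {"article": None,        "max_fine_eur": None,       "max_fine_pct": None},
}

def transparency_only(tier: str) -> bool:
    """Limited-risk (Art. 50) systems carry transparency obligations only."""
    return tier == "limited"
```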
Implement the four disclosures.
3.1 Chatbot disclosure — Art. 50(1)
The chat widget or voice assistant must inform the user they are interacting with AI — at first interaction, not in a ToS deep-link.
- Opening greeting identifies AI
- Persistent label in widget header
- Accessible via aria-label
- Voice clones: spoken disclosure before content
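A minimal sketch of the persistent, accessible label from the checklist above, rendered as widget markup from Python. The class names, wording, and helper are assumptions, not text prescribed by Article 50:

```python
def chat_widget_header(assistant_name: str) -> str:
    """Persistent AI label in the widget header, exposed to screen readers
    via aria-label (names and wording are illustrative)."""
    return (
        f'<header class="chat-header" aria-label="{assistant_name}, an AI assistant">'
        f'<span class="ai-badge">AI</span> {assistant_name} (AI assistant)'
        f"</header>"
    )

# Opening greeting that identifies the AI at first interaction.
OPENING_GREETING = (
    "Hi! I'm an AI assistant. I can answer most questions; "
    "ask for a human agent at any time."
)

html = chat_widget_header("Fin")
```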
3.2 Content marking — Art. 50(2)
Any AI-generated output must be identifiable as such in a machine-readable format. No single technique suffices, so layer several.
- Metadata: C2PA Content Credentials signed with your key
- Watermark: Invisible pixel/audio signal (SynthID, StableSignature)
- Fingerprint: Cryptographic hash in provenance database
- HTML signals: Structured data, ai.json, alt attributes
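The fingerprint technique above reduces to hashing each published asset and storing the digest in a provenance database. A stdlib-only sketch; the log schema is an assumption:

```python
import hashlib
import datetime

def fingerprint(data: bytes) -> str:
    """SHA-256 digest used as the asset's provenance fingerprint."""
    return hashlib.sha256(data).hexdigest()

def provenance_entry(asset_id: str, data: bytes) -> dict:
    """One record for the provenance database (schema is illustrative)."""
    return {
        "asset_id": asset_id,
        "sha256": fingerprint(data),
        "generated_by_ai": True,
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

entry = provenance_entry("blog/hero-image.png", b"<image bytes>")
```

Note that a bare hash only proves an asset is unchanged since marking; pair it with C2PA metadata and an invisible watermark for robustness against re-encoding.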
3.3 Biometric notice — Art. 50(3)
If you use camera-based emotion detection, face recognition, or biometric categorisation, inform users before they are exposed to the system.
- Pre-flight modal before camera activation
- GDPR lawful basis documented in privacy policy
- Check Art. 5 prohibition on workplace/school emotion recognition first
3.4 Deepfake & public-interest label — Art. 50(4)
This is a Deployer obligation; it cannot be delegated to the model provider.
- Video: persistent visual indicator + opening disclaimer
- Audio: spoken disclaimer at start; repeated for long-form
- Images: visible badge ("cr" Content Credentials icon)
- Text on public-interest matters: disclosure unless editorial human review documented
Document everything.
Under Art. 99(5), incorrect or misleading information to authorities triggers its own €7.5M / 1% penalty. Documentation is protection.
Required records
- AI systems registry (equivalent to GDPR RoPA)
- Technical specifications of marking implementations (logs, test reports)
- Vendor contracts with marking / anti-tamper clauses
- AI literacy training records (Art. 4 — applicable since 2 Feb 2025)
- Internal SOP for identifying deepfakes in user-generated content
- Incident response plan for disclosure failures
Monitor continuously.
- Run this site's free audit monthly
- Cross-check with AI Act Service Desk
- Verify C2PA integrity on published assets quarterly
- Refresh staff AI literacy training annually
- Track Code of Practice updates (final version expected June 2026)
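The quarterly integrity check can be automated by recomputing each published asset's hash and comparing it against the provenance database, assuming fingerprints are stored as SHA-256 digests as in the content-marking checklist. Any re-encode or edit after marking breaks the match and flags the asset for re-marking:

```python
import hashlib

def verify_asset(data: bytes, recorded_sha256: str) -> bool:
    """True if the published asset still matches its provenance record."""
    return hashlib.sha256(data).hexdigest() == recorded_sha256

original = b"published asset bytes"
recorded = hashlib.sha256(original).hexdigest()

ok = verify_asset(original, recorded)                  # unchanged asset
drifted = verify_asset(original + b" edited", recorded)  # post-marking edit
```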
Pre-D-Day readiness check.
- Every chatbot has AI disclosure at first interaction
- Every AI-generated image/video/audio has C2PA + visible label
- No emotion recognition in workplace or schools
- Every deepfake is labeled with modality-specific method
- Every public-interest AI article has editorial review flag or disclosure
- AI systems registry complete and reviewed
- Staff completed AI literacy training
- Vendor contracts updated with marking clauses
- Privacy policy references GDPR and AI Act
- Incident response plan tested
What a deep audit covers.
Our paid compliance engagement goes beyond client-side signals:
- Backend AI pipeline review — every API call, every model, every output path
- Vendor contract audit — marking clauses, anti-tamper, liability flow
- Workflow mapping — from prompt to published output
- Technical implementation of C2PA + invisible watermark + visible label stack
- Integration with your existing CMS, DAM, or social publishing tools
- Staff training delivery + completion tracking
- Ongoing monitoring dashboard