Art. 50 — clarified.
Sixteen questions that come up in every compliance engagement. Direct answers with source-grade references.
When does Article 50 become enforceable?
Article 50 of Regulation (EU) 2024/1689 becomes directly applicable in all EU member states on 2 August 2026. From that date, national market surveillance authorities can initiate enforcement actions, including administrative fines.
What is the maximum fine for Article 50 non-compliance?
Up to €15 million or 3% of global annual turnover, whichever is higher (Art. 99(4)(g)). This is Tier 2 of the AI Act's penalty structure. The €35M / 7% figure often cited in marketing material applies to Tier 1 (prohibited practices under Art. 5), not Article 50.
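For quick exposure estimates, the "whichever is higher" rule reduces to a simple maximum. A minimal Python sketch, using the Art. 99(4) figures; the function name and example turnover are ours, not from the Act:

```python
def max_article_50_fine(global_annual_turnover_eur: float) -> float:
    """Illustrative only: Art. 99(4) caps fines at EUR 15 million or
    3% of worldwide annual turnover, whichever is higher."""
    return max(15_000_000, 0.03 * global_annual_turnover_eur)

# Example: a company with EUR 2 billion turnover faces a cap of EUR 60 million.
print(max_article_50_fine(2_000_000_000))  # 60000000.0
```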
Does the AI Act apply to companies outside the EU?
Yes. The AI Act applies to any organization that places AI systems on the EU market or whose AI outputs are used in the EU, regardless of where the organization is headquartered. The extraterritorial reach is similar to the GDPR's.
Am I a Provider or a Deployer?
You can be both. If you integrate third-party AI models via API into a user-facing product, you are a GPAI System Provider — liable for marking outputs under Art. 50(2). If you also publish the resulting content, you are additionally a Deployer — liable for disclosure under Art. 50(1), (3), or (4).
Do I need to label every AI-generated image?
Technically yes, under Art. 50(2). The draft Code of Practice specifies multi-layer marking: machine-readable metadata (e.g., C2PA), invisible watermark, and — for deepfakes or public-interest content — a perceptible visible label.
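As a rough illustration of the machine-readable layer only (not the watermark or the visible label), here is a Pillow sketch that embeds a provenance text chunk in a PNG. The key names and file paths are placeholders we chose for the example; a production pipeline would attach a signed C2PA manifest rather than plain text chunks.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a simple machine-readable 'AI-generated' marker in a PNG."""
    info = PngInfo()
    # IPTC digital source type for fully AI-generated media (assumption:
    # your downstream tooling reads these keys from PNG text chunks).
    info.add_text(
        "DigitalSourceType",
        "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    )
    info.add_text("AIGenerated", "true")
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=info)

mark_ai_generated("hero_raw.png", "hero_marked.png")
```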
What counts as a deepfake?
The AI Act defines deepfakes as AI-generated or manipulated image, audio, or video content that resembles existing persons, objects, places, entities, or events and would falsely appear authentic to a person. An obviously cartoonish AI avatar is not a deepfake; a photorealistic face swap is.
Is the Code of Practice mandatory?
No, the Code of Practice on Transparency of AI-Generated Content is voluntary. But organizations not aligning with it will need to demonstrate — with independently verified benchmarks — that their solutions achieve equivalent performance on the four Art. 50(2) criteria. In practice, alignment is the path of least resistance.
Who enforces the AI Act in Romania?
On 12 March 2026, ANCOM was designated as Romania's market surveillance authority and single point of contact for the AI Act. ANCOM coordinates with CNA for audiovisual content, DNSC for cybersecurity, ANSPDCP for data protection, and ADR for digitalisation strategy.
If my site is just a brochure, am I in scope?
If your brochure was generated with AI and you're publishing it to EU audiences, yes — Art. 50(2) marking applies. If your brochure site has a chatbot, Art. 50(1) applies. If it hosts a deepfake testimonial, Art. 50(4) applies. Very few sites escape entirely.
What's the difference between Art. 50 and high-risk obligations?
Art. 50 is about transparency — users know when AI is involved. High-risk (Art. 6–49) is about safety and fundamental rights — requiring conformity assessments, quality management systems, human oversight, and EU database registration. The two regimes can overlap for the same system.
Can I rely on OpenAI / Anthropic / Google to comply for me?
No. Those are Model Providers with their own Art. 50(2) obligations at model level. You remain fully liable as a GPAI System Provider for the outputs of your system, and as a Deployer for how those outputs are presented. Contractual flow-down helps, but does not transfer legal liability.
Do the rules apply to internal tools only used by employees?
Art. 50 is oriented toward user-facing systems and published content. Purely internal tools — where only employees interact — have reduced exposure. But watch the Art. 5 prohibition on emotion recognition at the workplace, which does apply to internal use. Also, AI literacy training (Art. 4) applies to all staff using AI regardless of context.
When is a chatbot disclosure obligation triggered?
At the first point of interaction. Not in a ToS deep link. Not after three messages. The user must know from message one that they're interacting with AI, unless that is obvious to a reasonably well-informed, observant person. Voice clones and hyper-humanized agents lose the 'obvious' exception.
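A minimal sketch of what "first point of interaction" means in code, assuming a simple session dict and a placeholder model call (names are ours): the notice is prepended to the very first reply, before any substantive content.

```python
DISCLOSURE = "You are chatting with an AI assistant, not a human."

def generate_answer(user_message: str) -> str:
    # Placeholder for the actual model call.
    return f"(model reply to: {user_message})"

def reply(session: dict, user_message: str) -> str:
    answer = generate_answer(user_message)
    if not session.get("disclosed"):
        session["disclosed"] = True
        # Message one carries the disclosure, ahead of the answer.
        return f"{DISCLOSURE}\n\n{answer}"
    return answer

session: dict = {}
print(reply(session, "What are your opening hours?"))  # disclosed here
print(reply(session, "And on weekends?"))              # normal reply
```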
Does editorial review exempt AI-written text?
For Art. 50(4) public-interest texts, yes — if a natural or legal person takes editorial responsibility and has reviewed the content. You must document the review. For Art. 50(2) marking, editorial review does not exempt the machine-readable marking obligation for the generated output.
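One way to document the review is a timestamped record per published piece. The field names below are illustrative, not prescribed by the Act; the point is that the review is attributable to the person holding editorial responsibility.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class EditorialReviewRecord:
    content_id: str
    reviewer: str        # natural or legal person holding editorial responsibility
    reviewed_at: str
    changes_made: bool
    notes: str

record = EditorialReviewRecord(
    content_id="blog-2026-07-analysis",
    reviewer="Jane Editor, Acme Media SRL",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
    changes_made=True,
    notes="Checked statistics against primary sources; rewrote intro.",
)
print(json.dumps(asdict(record), indent=2))
```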
What about artistic or satirical AI content?
Art. 50(4) provides an exemption: where deepfake content forms part of an evidently artistic, creative, satirical, fictional, or analogous work, the transparency obligation is limited to disclosure of existence in a manner that does not hamper enjoyment. 'Evidently' is load-bearing — hidden satirical intent doesn't qualify.
Will the AI Act change before August 2026?
The Digital Omnibus initiative is under consideration and may adjust certain aspects. The Commission has proposed extending the transition for some high-risk categories due to delays in harmonised standards. Article 50's core obligations remain on track for 2 August 2026.
Question not answered?
Send us a specific scenario. If it's a useful addition, we'll add it to the FAQ with credit.
Ask a Question