EU AI Act Article 50: what developers need to know about content provenance
The EU AI Act's transparency obligations for AI-generated content are enforceable starting August 2026. Here's a plain-language guide to what Article 50 actually says, what it means for your codebase, and how to implement it.
This post is written for engineering teams, not lawyers. It's not legal advice. For compliance decisions on actual products, consult counsel familiar with EU AI Act implementation.
What Article 50 actually says
Regulation (EU) 2024/1689 — commonly called the EU AI Act — creates a tiered framework for AI systems based on risk level. Article 50 sits in the transparency section and imposes three main obligations on providers of AI systems:
- Article 50(1): AI systems that interact with natural persons must be designed so that users know they're interacting with AI (chatbot disclosure).
- Article 50(2): Providers of AI systems that generate synthetic content (text, images, audio, video) must mark outputs as artificially generated in a machine-readable format and ensure they're detectable as such.
- Article 50(4): Deployers using AI to generate or manipulate "deepfake" content (imagery, audio, video depicting real people) must disclose that the content is artificially generated or manipulated.
For developers building products that use AI to generate content, Article 50(2) is the one that matters most. You're the provider of an AI system whose outputs fall under this clause.
When enforcement starts
The AI Act was published in the EU Official Journal on July 12, 2024 and entered into force August 1, 2024. Different provisions phase in on different timelines:
- February 2025: Prohibitions on certain AI practices (social scoring, untargeted scraping of facial images to build recognition databases, etc.).
- August 2025: General-purpose AI model obligations (transparency, documentation).
- August 2026: Article 50 transparency obligations for AI-generated content. This is the one developers need to prepare for.
- August 2027: Remaining high-risk AI system obligations — specifically systems that are safety components of products covered by existing EU product legislation (Annex I). Most other high-risk obligations (Annex III systems) apply from August 2026.
You have until August 2026 to get your product compliant on content provenance. That's less than 4 months from today.
What "machine-readable marking" actually means
Article 50(2) requires outputs to be "marked in a machine-readable format and detectable as artificially generated or manipulated," and requires the technical solutions used to be effective, interoperable, robust and reliable as far as technically feasible. The regulation doesn't mandate a specific standard, but a visible banner or a metadata tag that only your own systems can read falls short on its own — the marking must be detectable, meaning independently verifiable.
In practice, this means one of:
- C2PA manifests (for images and video) — cryptographically signed content credentials embedded in the asset. Adobe, Microsoft, Sony, Nikon support this natively.
- JWS or similar cryptographic signatures (for text) — signed metadata that identifies the output as AI-generated, with an independent verifier.
- Proprietary watermarking with public verification APIs — some providers (OpenAI, Google) have announced this, but coverage is limited to their own outputs.
What doesn't count: database flags in your own system, visible "AI generated" banners on their own, or metadata that only you can verify. Detectability implies that third parties can check the marking without your cooperation.
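To make the distinction concrete, here's a minimal sketch of the kind of machine-readable manifest a marking scheme starts from. The field names are illustrative, not drawn from any standard. On its own this manifest is just metadata; it only approaches Article 50-grade marking once it's cryptographically signed and verifiable by third parties.

```python
# Sketch of a machine-readable provenance manifest. Field names are
# illustrative assumptions, not a standard schema.
import hashlib
import json
from datetime import datetime, timezone

def build_manifest(content: bytes, provider: str, model: str) -> dict:
    """Bind AI-generated content to a provenance claim via its SHA-256 hash."""
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": {"provider": provider, "model": model},
        "ai_generated": True,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

manifest = build_manifest(b"AI-generated article text...", "anthropic", "example-model")
print(json.dumps(manifest, indent=2))
```

The content hash is what ties the claim to a specific output: change one byte of the content and the manifest no longer matches.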
Does this apply to my product?
Article 50 applies to providers and deployers of AI systems, where the output reaches the EU market or EU users, regardless of where the provider is based. US companies aren't exempt just because they're US-based. The question is whether your product serves EU users.
If you're building:
- A SaaS product that uses Claude, GPT, or any AI model to generate content for customers — you're a deployer, possibly a provider depending on how you wrap the model.
- A content platform where users can generate AI content — you're a deployer of the integrated AI system.
- A consumer app with AI features — same analysis.
- An API that others build on top of — you might be a provider of a general-purpose AI model, which has separate obligations beyond Article 50.
If your user base includes any EU users or EU businesses, Article 50 likely applies. The safe assumption: if you're shipping AI-generated content and you serve a global audience, comply.
What happens if you don't comply
Article 99 sets penalties for non-compliance with transparency obligations at up to €15 million or 3% of global annual turnover, whichever is higher. These fines are the EU's second-tier enforcement (prohibited practices are first-tier at €35M or 7%). Enforcement authority sits with designated national authorities in each member state, coordinated through the European AI Board.
For most startups, the actual risk isn't a €15M fine on day one. It's:
- Market access — EU customers increasingly asking for compliance proof as a purchasing requirement.
- Partner/platform obligations — if you integrate with Shopify, Google Cloud, or similar marketplaces, they're passing their compliance obligations down to you.
- Reputation — the first wave of publicized non-compliance findings will make news.
Implementation options
Three paths to compliance for content-generating products:
Option 1: Use your AI provider's native tools (if they exist)
OpenAI has started embedding C2PA credentials in DALL-E 3 outputs. Google is working on SynthID for images and audio. Anthropic has talked about provenance but hasn't shipped a public API yet. Coverage is partial, varies by provider, and locks you to one vendor's approach.
Option 2: Build your own signing infrastructure
JWS signing is standardized. Free RFC 3161 timestamp authorities exist (FreeTSA, for example). C2PA reference implementations are available. You can build this yourself in a few engineering weeks. But you're taking on maintenance of a cryptographic system — key rotation, timestamp authority monitoring, C2PA spec updates, compliance attestation.
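As a sketch of the signing step, the compact JWS flow looks like this. HS256 (a shared-secret HMAC) is used below purely so the example runs on the Python standard library; a real deployment would use an asymmetric algorithm such as ES256 through a maintained library (PyJWT or jwcrypto, say) so verifiers need only your public key.

```python
# Compact JWS (header.payload.signature) sketch using only the stdlib.
# HS256 here is a stand-in: production provenance should use ES256 so
# verification needs only a public key.
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_compact_jws(payload: dict, key: bytes) -> str:
    """Sign a provenance claim, producing a compact JWS string."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload, separators=(",", ":")).encode())
    sig = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def verify_compact_jws(token: str, key: bytes) -> dict:
    """Recompute the signature and return the payload only if it matches."""
    header, body, sig = token.split(".")
    expected = hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        raise ValueError("signature mismatch")
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))

key = b"rotate-me-regularly"
token = sign_compact_jws({"ai_generated": True, "model": "example-model"}, key)
assert verify_compact_jws(token, key)["ai_generated"] is True
```

Even this small sketch hints at the maintenance surface: the key must be rotated and distributed, and the timestamp and C2PA layers aren't shown at all.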
Option 3: Use a provenance API (CertNode or similar)
Third-party services handle the signing + timestamp + verification infrastructure. Your code stays simple (one SDK call per AI generation). You get three-layer timestamps (CertNode + RFC 3161 + Bitcoin anchor) and public verification without building any of it.
CertNode's pitch is specifically this: provenance as developer infrastructure. 5-line integration, platform-agnostic (works with Claude, OpenAI, Mistral, any provider), pay-as-you-go, 100/mo free to evaluate.
What to do now
- Audit your product: where does AI-generated content flow into user-facing outputs? Every one of those is a potential compliance point.
- Assess EU exposure: how many of your users are in the EU? Even a small percentage triggers obligations.
- Pick an implementation path — native tools, roll your own, or third-party. For most teams, third-party is faster and cheaper than building.
- Integrate + test well before August 2026. The enforcement phase-in is months, not years, for most providers.
- Document your compliance process. If you get audited, you need to show that provenance is built into every content generation, not bolted on as an afterthought.
How CertNode helps
CertNode generates cryptographic receipts for every AI output you sign through our API. Every receipt includes:
- Machine-readable metadata: provider, model, timestamp, content hash
- Independently verifiable signature (ES256 JWS, public JWKS)
- RFC 3161 timestamp from a third-party Time Stamping Authority
- Optional Bitcoin anchor for long-term verification integrity
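In sketch form, the cheapest check a verifier runs is that a receipt's content hash actually binds it to the content in hand. The field name below is an illustrative assumption, and the signature and timestamp checks (ES256 against a published JWKS, RFC 3161 validation) are omitted; in practice those would use a JOSE library rather than hand-rolled code.

```python
# Verifier-side sketch: does this receipt cover this exact content?
# Field name "content_sha256" is illustrative, not a documented schema.
import hashlib

def receipt_matches_content(receipt: dict, content: bytes) -> bool:
    """A receipt is only meaningful if its hash binds it to this content."""
    return receipt.get("content_sha256") == hashlib.sha256(content).hexdigest()

receipt = {
    "content_sha256": hashlib.sha256(b"generated output").hexdigest(),
    "provider": "anthropic",
    "model": "example-model",
}
assert receipt_matches_content(receipt, b"generated output")
assert not receipt_matches_content(receipt, b"tampered output")
```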
Verifiers (regulators, customers, auditors) can independently verify at certnode.io/verify/[receiptId] — no account required. This is designed to meet the "detectable as artificially generated" requirement of Article 50(2).
100 signings/month free. Then $0.01/signing with volume discounts. Get an API key →
This post reflects the author's reading of the EU AI Act as published. Regulatory guidance is still evolving — the European AI Board and national authorities will issue implementation details through 2026. Consult qualified counsel for compliance decisions on actual products.
Published April 24, 2026.