
AI detection is broken. AI provenance is the answer.

Detection tries to guess whether content was AI-generated after the fact. Provenance proves it at creation time with cryptographic signatures. One of these approaches is losing. The other is winning.

The detection problem

OpenAI retired their AI classifier in 2023. Their own tool flagged AI-generated text as human-written too often for the product to be useful. Turnitin's AI detector has flagged published pre-ChatGPT academic writing as AI-generated. GPTZero's accuracy on modern models hovers near a coin flip.

The fundamental issue: detection is inference. It looks at writing patterns (perplexity, burstiness, sentence structure) and makes a probabilistic guess. Every new model release — Claude 4.6, 4.7, GPT-5, Mistral Large 3 — produces writing that sits closer to the human distribution, and every generation makes the detectors worse.
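The perplexity signal that detectors lean on can be sketched in a few lines: perplexity is the exponential of the average negative log-probability per token, and unusually low values (the text is very predictable to a language model) are taken as evidence of AI authorship. A toy illustration, assuming we already have per-token probabilities from some scoring model — the probability values below are made up for the demo:

```python
import math

def perplexity(token_probs: list[float]) -> float:
    # Perplexity = exp of the average negative log-probability per token.
    # Detectors treat unusually LOW perplexity as a sign of AI authorship.
    return math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))

# Hypothetical per-token probabilities under some scoring model:
predictable = [0.9, 0.8, 0.85, 0.9]   # smooth, model-like text
surprising  = [0.1, 0.4, 0.05, 0.2]   # bursty, human-like text

print(perplexity(predictable) < perplexity(surprising))  # True
```

The weakness the article describes lives in that comparison: as model output drifts toward the human distribution, the two perplexity scores converge and the threshold separating them stops working.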

This is a losing race. The tools get worse every month. Institutions that rely on detection (universities, publishers, hiring platforms) face a choice: accept false positives flagging legitimate human writing as AI, accept false negatives missing actual AI content, or stop using detection entirely.

What provenance does differently

Provenance is the opposite approach. Instead of trying to guess whether content was AI-generated after the fact, sign it at creation time with a cryptographic receipt. Three layers:

  1. Digital signature (JWS / ES256) over the content's hash plus metadata (model, provider, timestamp).
  2. RFC 3161 timestamp from an independent Time Stamping Authority — the same standard used for legally-signed documents since 2001.
  3. Bitcoin anchor via OpenTimestamps — the hash is committed to a Bitcoin block within an hour or two, providing a third independent witness.

The result is a receipt that says: "this content existed at this exact time, generated by this model, and hasn't been modified since." Anyone can verify the receipt. Three independent timestamp authorities would all have to be compromised simultaneously to fake it.
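Layer 1 can be sketched with Python's standard library. The receipt below is a JWS-style compact token (header.payload.signature) over the content's SHA-256 hash plus metadata. HMAC-SHA256 stands in for ES256 so the sketch runs without a crypto dependency — a real receipt uses ECDSA P-256 via a JOSE library — and the RFC 3161 and Bitcoin layers are out of scope here. All names and values are illustrative:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_receipt(content: bytes, model: str, provider: str, key: bytes) -> str:
    """Layer 1: a signature over the content's hash plus metadata."""
    header = {"alg": "HS256"}  # a production receipt would use "ES256"
    payload = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "model": model,
        "provider": provider,
        "iat": int(time.time()),  # creation timestamp
    }
    signing_input = (b64url(json.dumps(header).encode())
                     + "." + b64url(json.dumps(payload).encode()))
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_receipt(content: bytes, receipt: str, key: bytes) -> bool:
    """Recheck the signature, then the content hash it covers."""
    signing_input, _, sig = receipt.rpartition(".")
    expected = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig)):
        return False  # forged, or signed with a different key
    payload = json.loads(b64url_decode(signing_input.split(".")[1]))
    return payload["content_sha256"] == hashlib.sha256(content).hexdigest()

key = b"demo-key"
receipt = sign_receipt(b"Generated paragraph.", "example-model", "example-provider", key)
print(verify_receipt(b"Generated paragraph.", receipt, key))  # True
print(verify_receipt(b"Edited paragraph.", receipt, key))     # False: content modified
```

Any single-bit edit to the content changes its hash and invalidates the receipt, which is exactly the "hasn't been modified since" guarantee above; the independent RFC 3161 and Bitcoin timestamps then pin down *when* the signed hash existed.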

Why provenance beats detection for real use cases

Litigation and compliance: Federal Rule of Evidence 902(13)/(14) accepts self-authenticating digital records with certified processes. RFC 3161 timestamps qualify. Statistical inference from a detection tool does not.

EU AI Act compliance: Article 50 (enforceable August 2026) requires providers of generative AI systems to label their outputs as AI-generated. The regulation doesn't say "use a detection tool" — it says use provenance metadata.

Customer disputes: when a customer claims your AI-generated content was plagiarized, or was actually written by a human, or didn't exist at the time you claimed — you need proof, not probability. A verifiable signature is proof. "Our detection tool said 87% likely AI" is not.

The limits of provenance

Provenance isn't magic. It only works for content that was signed at creation. That means:

  • It doesn't help you detect unsigned AI content circulating in the wild. That's still detection's problem (and still broken).
  • It only certifies what the signer says — you sign what you generated. You can lie about which model you used, or doctor the output before signing.
  • It requires adoption by content generators. In the short term, less than 1% of AI content will carry provenance signatures. Adoption compounds over time, especially under regulatory pressure.

But provenance covers the cases that actually matter: content you generate, where you need to prove its origin later. Detection was never going to solve that problem. Provenance does.

Why we built CertNode for this

Most content authenticity tools in the market are enterprise-sales gated. Adobe's Content Authenticity Initiative requires membership. Truepic sells to insurance companies. Numbers Protocol is crypto-native and creator-focused.

None of them serve the developer building a product that generates AI content and wants to add cryptographic provenance in an afternoon. That's the CertNode pitch: a 5-line SDK integration, transparent PAYG pricing, MCP-native for Claude users, and platform-agnostic support for OpenAI, Mistral, or anything else.

100 signings per month free. Then $0.01 per signing, with volume discounts that drop automatically as you grow. Get started →

Published April 24, 2026. Updated as the provenance standard evolves.