
AI Act Article 50: four transparency obligations that apply regardless of high-risk classification, in force from 2 August 2026.

Article 50 of Regulation (EU) 2024/1689 introduces transparency obligations that apply to specific categories of AI systems — regardless of whether the system is also classified as high-risk under Annex III. The obligations sit in Chapter IV and apply from 2 August 2026 under the general application date in Art. 113. It covers four distinct cases: (1) AI systems interacting directly with natural persons must disclose their AI nature; (2) providers of generative AI must mark synthetic content in a machine-readable format; (3) deployers of emotion-recognition or biometric-categorisation systems must inform exposed persons; (4) deployers of deep-fake systems must disclose the artificial nature of the content.

Generate AI Act dossier — €249 · Free: check your AI system risk

€249 one-time payment · 12 PDF documents in ZIP · 45 minutes · 100% in your browser

Regulation (EU) 2024/1689 · Article 11 + Annex IV · 12 documents · 100% browser-side — your data never leaves your machine

The numbers

4 cases
Art. 50(1) chatbots · Art. 50(2) synthetic content · Art. 50(3) emotion/biometric categorisation · Art. 50(4) deep fakes.
2 Aug 2026
Application date. Chapter IV applies from the general application date in Art. 113, second paragraph.
€15M / 3%
Art. 99(4)(g). Fines for breach of Art. 50 transparency obligations. Lower of two amounts for SMEs (Art. 99(6)).

The four Article 50 cases, with provider/deployer allocation

Each Article 50 paragraph has its own trigger, its own responsible party (provider or deployer) and its own exemption. Read carefully — "obvious from context" exemptions are narrow and don't cover most commercial use cases.

1
Art. 50(1) — AI systems interacting with natural persons (provider)
Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that natural persons are informed they are interacting with an AI system. Exemption: where it is obvious from the point of view of a reasonably well-informed, observant and circumspect natural person, taking into account circumstances and context of use. Does not apply to systems authorised by law for criminal-offence detection (with safeguards) unless the system is available for the public to report a criminal offence.
2
Art. 50(2) — Synthetic content (provider)
Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Technical solutions must be effective, interoperable, robust and reliable. Exemptions: AI systems performing an assistive function for standard editing or not substantially altering the input or its semantics; or where authorised by law for criminal-offence detection.
3
Art. 50(3) — Emotion recognition or biometric categorisation (deployer)
Deployers of an emotion-recognition system or a biometric-categorisation system shall inform the natural persons exposed to it of the operation of the system, and shall process personal data in accordance with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 as applicable. Exemption: systems permitted by law to detect, prevent or investigate criminal offences (with safeguards). Remember: emotion recognition in workplace and education is prohibited under Art. 5(1)(f) except for medical or safety reasons.
4
Art. 50(4) — Deep fakes (deployer)
Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake shall disclose that the content has been artificially generated or manipulated. Where the content is part of an evidently artistic, creative, satirical, fictional or analogous work or programme, transparency is limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the work. Second subparagraph: deployers of AI generating or manipulating text published to inform the public on matters of public interest shall disclose that the text has been artificially generated or manipulated, unless authorised by law or where human review with editorial responsibility applies.
Art. 50(5) — Timing and accessibility
The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to applicable accessibility requirements.
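The Art. 50(1) design duty combined with the Art. 50(5) timing rule ("at the latest at the time of the first interaction") can be sketched in code. This is a minimal illustrative sketch, not a compliance implementation: the class, method names and disclosure wording are our own assumptions.

```python
# Hypothetical sketch: Art. 50(1) disclosure delivered no later than the
# first reply, per the Art. 50(5) timing rule. All names are illustrative.

class DisclosingChatSession:
    """Wraps any chat backend so the AI-nature notice always precedes
    the first reply ('at the latest at the time of the first interaction')."""

    DISCLOSURE = "You are chatting with an AI system, not a human."

    def __init__(self, backend):
        self._backend = backend      # any callable: prompt -> reply
        self._disclosed = False

    def send(self, user_message: str) -> str:
        reply = self._backend(user_message)
        if not self._disclosed:
            self._disclosed = True
            return f"{self.DISCLOSURE}\n\n{reply}"
        return reply


session = DisclosingChatSession(lambda msg: f"Echo: {msg}")
first = session.send("Hello")     # disclosure prepended to the first reply
second = session.send("Hi again") # subsequent replies unchanged
```

The point of the wrapper pattern is that the notice cannot be skipped by any code path that reaches the backend: disclosure is a property of the session, not of the individual prompt handler.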

Three common mistakes

COMMON MISTAKE

"If we are not in Annex III, Art. 50 doesn't apply to us"

Article 50 is in Chapter IV and is independent of Annex III high-risk classification. A chatbot that is not high-risk under Annex III still triggers Art. 50(1). A consumer image-generation tool that is not high-risk still triggers Art. 50(2) synthetic-content marking. A wellbeing app using emotion recognition outside workplace/education still triggers Art. 50(3) deployer-side notice.

COMMON MISTAKE

"Deep fake means manipulated video only"

Art. 3(60) defines deep fake as "AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful". The definition covers audio (voice cloning), image (face swap, photo manipulation) and video. Art. 50(4) second subparagraph extends a similar disclosure obligation to AI-generated text published on matters of public interest, with carve-outs for editorial review and law-enforcement authorisation.

COMMON MISTAKE

"Our chatbot is obviously a chatbot, so Art. 50(1) doesn't apply"

The exemption applies where the AI nature is obvious to a "reasonably well-informed, observant and circumspect" natural person, taking into account the circumstances and context of use. The standard for the exemption is narrow: a website visitor encountering a voice or messaging assistant that sounds human, even if labelled with a name, is not necessarily on notice. The conservative reading is to disclose at the first interaction (Art. 50(5)).

Does the AI Act apply to your system?

Answer these four questions to determine your obligations.

Does your system use machine learning, logic-based, or statistical approaches?
Art. 3(1) — definition of "AI system"
Is the system placed on the EU market or does its output affect persons in the EU?
Art. 2(1) — territorial scope (extraterritorial via 2(1)(c))
Is your system used in any Annex III domain? (employment, credit, education, law enforcement, migration, justice, critical infrastructure, biometrics)
Art. 6(2) + Annex III — high-risk classification
Are you the provider (developer) or the deployer (user) of the system?
Art. 3(3) provider · Art. 3(4) deployer — different obligations
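The four questions above form a simple decision flow. Here is a hedged sketch of that flow in code — the dataclass fields, function name and return strings are illustrative, not an official classification algorithm:

```python
# Hypothetical sketch of the four-question scoping flow above.
# Field and function names are our own; article references mirror the list.

from dataclasses import dataclass

@dataclass
class SystemProfile:
    uses_ai_techniques: bool   # Q1 — Art. 3(1): ML, logic-based or statistical
    eu_market_or_output: bool  # Q2 — Art. 2(1): EU market / output used in EU
    annex_iii_domain: bool     # Q3 — Art. 6(2) + Annex III domain
    is_provider: bool          # Q4 — Art. 3(3) provider vs Art. 3(4) deployer

def classify(p: SystemProfile) -> str:
    if not p.uses_ai_techniques:
        return "out of scope: not an AI system under Art. 3(1)"
    if not p.eu_market_or_output:
        return "out of scope: outside Art. 2(1) territorial scope"
    role = "provider" if p.is_provider else "deployer"
    if p.annex_iii_domain:
        return f"high-risk candidate (Art. 6(2) + Annex III), as {role}"
    return f"check Art. 50 transparency obligations, as {role}"

result = classify(SystemProfile(True, True, False, True))
# → "check Art. 50 transparency obligations, as provider"
```

Note the ordering: scope questions (Q1, Q2) gate everything else, and the provider/deployer answer (Q4) changes which obligations attach rather than whether the Act applies.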

Take the full AI Act risk classification test →

What the ZIP contains

12 PDF documents generated from your inputs. Each cites the article of Regulation (EU) 2024/1689 it fulfils.

1

Risk Classification Report

Identifies whether your system is prohibited (Art. 5), high-risk (Art. 6 + Annex III) or subject to transparency obligations (Art. 50).

2

Technical Documentation

The 9 blocks of Annex IV in full: system description, training data, validation, performance metrics, risk management, human oversight. Art. 11 + Annex IV.

3

EU Declaration of Conformity

Signable document conforming to Art. 47 and Annex V.

4

Compliance Calendar

Key application dates: 2 Feb 2025, 2 Aug 2025, 2 Aug 2026, 2 Aug 2027. Art. 113.

5

Conformity Sheet

Executive summary of compliance status for authorities or commercial buyers. Art. 43 procedure.

6

Quality Management System (QMS)

QMS structure covering the 13 aspects required by Art. 17.

7

Deployer Instructions

Document for the entity deploying your system, conforming to Art. 13.

8

Evidence Checklist

Verifiable evidence list, cross-referenced to every Annex IV block.

9

Incident Report Template

Notification protocol conforming to Art. 73 (15 days general / 10 days death / 2 days widespread).

10

AI Literacy Programme

Training plan conforming to Art. 4, in force since 2 February 2025.

11

Post-Market Monitoring Plan

Plan structure required by Art. 72 and integrated into the technical documentation under Annex IV(9).

12

Fundamental Rights Impact Assessment (FRIA)

Template under Art. 27 for public bodies, private entities providing public services, and Annex III 5(b)(c) deployers.

See before you buy — Download a sample dossier (PDF, fictional company) — Real structure, real articles, real format. Fictional data.

Generated from your inputs, in your browser. No data leaves your machine.

What you pay

🧾 AI ACT COMPLIANCE CONSULTANCY
€5,000–€15,000
3–6 months. They explain the obligations to you.
✓ AICHECK
€249
12 documents. 45 minutes. Delivers the documentation.

Technical documentation and conformity assessment: two layers

● LAYER 1

Technical documentation — Annex IV

12 documents. 45 minutes. €249. The documentation your system needs before being placed on the market.

∅ LAYER 2

Conformity assessment by notified body

If your system falls under Art. 43(1) (Annex III point 1 biometrics with notified-body route, or Annex I products), you will need third-party conformity assessment. That is a separate process. AICheck does not replace it.

We do not sell audits. We do not sell consultancy. We sell the tool that structures your documentation under Annex IV.

Penalty regime

Article 99 of Regulation (EU) 2024/1689. Chapter XII (Penalties) applies from 2 August 2025.

🇪🇺
Non-compliance with prohibited practices (Art. 5)
€35M / 7%

Art. 99(3). Up to €35 million or 7% of total worldwide annual turnover, whichever is higher. For SMEs and start-ups: whichever is lower (Art. 99(6)).

🇪🇺
Non-compliance with operator obligations (high-risk, transparency, deployer)
€15M / 3%

Art. 99(4). Includes failure to draw up technical documentation under Art. 11 + Annex IV. Covers obligations of providers (Art. 16), deployers (Art. 26), authorised representatives (Art. 22), importers (Art. 23), distributors (Art. 24), notified bodies (Art. 31, 33, 34) and transparency under Art. 50.

🇪🇺
Supply of incorrect, incomplete or misleading information
€7.5M / 1%

Art. 99(5). Applies when information provided to notified bodies or national competent authorities is wrong or misleading.

Documenting 5 or more AI systems?

If you operate multiple AI systems and need to document them all under Annex IV, contact us for volume pricing at hello@solidwaretools.com.

Request volume pricing
Reply within one business day

What AICheck guarantees, and what it does not

AICheck produces a document structured under Article 11 and Annex IV of Regulation (EU) 2024/1689 from the information you provide. The accuracy, truthfulness and completeness of that information are your responsibility as provider of the AI system.

We guarantee that the document structure follows Article 11 and Annex IV of Regulation (EU) 2024/1689 and that the legal references cited are correct as of the last verification date. We do not guarantee that a specific document will be accepted by a market surveillance authority in a given case, nor by a commercial buyer in a procurement process.

AICheck is not legal advice. For specific situations, consult a lawyer or specialised regulatory consultancy.

Frequently asked questions

What counts as "synthetic content" under Art. 50(2)?
Audio, image, video or text content generated by an AI system, including a general-purpose AI system. The provider — not the deployer — must ensure outputs are marked in a machine-readable format. The technical solutions must be effective, interoperable, robust and reliable, taking into account the state of the art. Exemptions: AI performing an assistive function for standard editing or not substantially altering input/semantics.
Who has the Art. 50 obligation, provider or deployer?
Paragraphs 1 and 2 are provider obligations: design the system so it informs users (chatbot disclosure) and ensure outputs are machine-readable (synthetic marking). Paragraphs 3 and 4 are deployer obligations: inform persons exposed to emotion recognition or biometric categorisation (Art. 50(3)), and disclose deep fakes (Art. 50(4)). Article 50 thus splits transparency responsibility between providers and deployers.
What does "machine-readable format" mean for synthetic content?
Art. 50(2) does not specify a single technology but requires the marking to be machine-readable and detectable as artificially generated or manipulated. Under Art. 50(7), the Commission may approve codes of practice via implementing acts (following the Art. 56(6) procedure) or, if it deems them inadequate, specify common rules under the Art. 98(2) examination procedure. Examples in industry practice: cryptographic content provenance (e.g., C2PA), watermarking schemes, signed metadata. The chosen technical solution must be effective, interoperable, robust and reliable for the type of content.
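One way to make the "signed metadata" option concrete: attach an HMAC-signed JSON provenance record to each output. This is a hedged illustration only — Art. 50(2) mandates no specific technology, real deployments typically use standards such as C2PA, and the key handling and field names here are our own assumptions.

```python
# Hypothetical sketch: a machine-readable, verifiable "AI-generated" marker
# as an HMAC-signed JSON record bound to the content hash. Illustrative only;
# production systems would use a provenance standard (e.g. C2PA) and real
# key management, not a hard-coded secret.

import hmac, hashlib, json

SECRET = b"provider-signing-key"  # illustrative; never hard-code in practice

def mark_synthetic(content: bytes, generator: str) -> dict:
    record = {
        "ai_generated": True,
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify_mark(content: bytes, record: dict) -> bool:
    claim = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    ok_sig = hmac.compare_digest(expected, record["signature"])
    ok_hash = claim["content_sha256"] == hashlib.sha256(content).hexdigest()
    return ok_sig and ok_hash

meta = mark_synthetic(b"generated image bytes", "example-model-v1")
```

Binding the signature to the content hash means any downstream edit invalidates the mark, which is one interpretation of the "robust and reliable" requirement.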
Is this a subscription?
No. One-time payment. The licence includes 30 days of editing and 10 regenerations. The PDF you download is yours to keep.
Can I request a refund?
Pursuant to Article 16(m) of Directive 2011/83/EU on consumer rights, by activating the licence you give express consent to the immediate generation of digital content, waiving the 14-day withdrawal right. Refunds are only accepted in the case of a reproducible technical failure.
What if the regulation changes?
If the regulation changes while your licence is active, you can regenerate the document with the updated version of the generator at no additional cost.
⚠️ Important notice: AICheck is a documentary self-assessment tool, not legal advice nor a third-party audit. The document under Article 11 and Annex IV of Regulation (EU) 2024/1689 is generated from the data you input. The accuracy of that data is your responsibility. AICheck does not replace a qualified professional assessment.

Don't wait for the consultancy. Generate the Annex IV documentation for your AI system in your browser in 45 minutes.

Twelve documents. Annex IV fully structured. Regulation (EU) 2024/1689. Your data does not leave your machine. The ZIP you download is yours to keep.

€249 one-time payment
12 professional documents · 45 minutes · No subscription · 100% in your browser
Generate dossier — €249
✓ Last regulatory verification: 11 May 2026 · No substantive changes detected · View history