Reg (EU) 2024/1689 · Generate dossier — €249
LIVE — Fines tracker · Obligations calendar · Transposition status — Updated weekly from EUR-Lex, Safety Gate, OEIL and 12 official sources. View regulatory intelligence →

AI Act chatbots and virtual assistants: the Article 50 disclosure obligations and what changes when the assistant operates in an Annex III high-risk context.

Chatbots and virtual assistants sit at the intersection of two AI Act regimes. Article 50 — applicable to every chatbot that interacts with a natural person, regardless of risk class — requires the provider to design the system so users are informed they are interacting with AI, and the provider of a generative chatbot to mark synthetic outputs in a machine-readable format. Where the chatbot is also used in an Annex III domain — for example, an HR assistant filtering candidates (4(a)), a banking assistant making credit-related decisions (5(b)), an education assistant assessing learning (3(b)) — the system additionally becomes a high-risk AI system, triggering the full Chapter III, Section 2 requirements. Both layers apply from 2 August 2026 under Art. 113.

Generate AI Act dossier — €249 · Free: check your AI system risk

€249 one-time payment · 12 PDF documents in ZIP · 45 minutes · 100% in your browser

Regulation (EU) 2024/1689 · Article 11 + Annex IV · 12 documents · 100% browser-side — your data never leaves your machine

The numbers

Art. 50(1)
Provider designs the system so users know it's AI. Exemption: "obvious from context" to a reasonably observant person.
Art. 50(2)
Provider of a generative chatbot marks synthetic content (audio/image/video/text) in a machine-readable format.
Annex III
Annex III chatbot use cases (HR 4(a), credit 5(b), education 3(b)) trigger high-risk obligations on top.

When chatbot obligations apply, and how the layers stack

Three distinct regimes can apply to the same chatbot. Map them in this order: the disclosure obligation always applies; synthetic content marking applies when the chatbot generates content; high-risk obligations apply when the use case sits in Annex III.

1
Layer 1 — Art. 50(1) disclosure (provider)
Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that natural persons are informed they are interacting with an AI system. Exemption: where it is obvious from the point of view of a reasonably well-informed, observant and circumspect natural person, taking into account circumstances and context of use. Does not apply to systems authorised by law for criminal-offence detection unless available to the public to report a criminal offence.
2
Layer 2 — Art. 50(2) synthetic content (provider)
If your chatbot generates synthetic audio, image, video or text content — including via a general-purpose AI system — the provider shall ensure the outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. Exemptions: assistive function for standard editing or output not substantially altering input/semantics. The marking technology must be effective, interoperable, robust and reliable.
3
Layer 3 — Annex III high-risk classification
Chatbots used in Annex III domains are high-risk. Common cases: 3(a)/(b) education admission and learning evaluation chatbots; 4(a) recruitment screening assistants; 5(a) public-benefit eligibility evaluation; 5(b) credit chatbots that materially influence credit scoring; 5(d) emergency triage assistants. The Art. 6(3) derogation may apply if the chatbot performs a narrow procedural task and does not perform profiling — but profiling is common in customer-facing assistants and removes the derogation.
High-risk chatbots — full Chapter III, Section 2 load
If the chatbot is high-risk: Art. 9 risk management; Art. 10 data governance; Art. 11 + Annex IV technical documentation; Art. 12 logs; Art. 13 transparency to deployers; Art. 14 human oversight; Art. 15 accuracy/robustness/cybersecurity; Art. 16 provider obligations; Art. 17 QMS; Art. 18 10-year documentation retention; Art. 26 deployer obligations including Art. 26(11) inform decision subjects; Art. 43 conformity assessment under Annex VI internal control; Art. 47 EU DoC; Art. 49 registration; Art. 50 transparency on top.
Art. 26(11) — deployer notice for Annex III decisions
Without prejudice to Art. 50, deployers of high-risk AI systems referred to in Annex III that make decisions or assist in making decisions related to natural persons shall inform the natural persons that they are subject to the use of the high-risk AI system. For law-enforcement chatbots, Art. 13 of Directive (EU) 2016/680 applies.

Three common mistakes

COMMON MISTAKE

"Our customer-service chatbot is obviously AI, so Art. 50(1) doesn't apply"

The Art. 50(1) exemption applies where the AI nature is "obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use". A modern customer-service assistant — branded with a human-sounding name, capable of natural conversation, integrated into a help interface — does not necessarily satisfy that standard. The conservative reading is to disclose at the first interaction, since Art. 50(5) requires the information to be provided in a clear and distinguishable manner at the latest at the time of the first interaction or exposure.

COMMON MISTAKE

"Voice assistants don't generate 'content', so Art. 50(2) doesn't apply"

Art. 50(2) covers AI systems generating synthetic audio. A voice assistant that synthesises speech generates synthetic audio output, so the provider must ensure the output is marked in a machine-readable format and detectable as artificially generated. The Art. 50(2) exemption for assistive functions covers standard editing or outputs that do not substantially alter the input — it does not cover voice synthesis itself.
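What "machine-readable marking" can look like in practice is deliberately left open by the Act. As a minimal sketch, the snippet below attaches a JSON provenance record to a generated audio blob; this is purely illustrative, since a production system would use an established provenance or watermarking standard (for example C2PA manifests) rather than a sidecar JSON record, and the field names here are our own.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(audio_bytes: bytes, model_id: str) -> str:
    """Build a minimal machine-readable marker for synthetic audio.

    Illustrative only: real deployments should rely on recognised
    provenance standards or robust watermarking, not a sidecar record.
    """
    return json.dumps({
        "ai_generated": True,  # the Art. 50(2) flag
        "generator": model_id,  # e.g. the TTS model identifier
        "sha256": hashlib.sha256(audio_bytes).hexdigest(),
        "created": datetime.now(timezone.utc).isoformat(),
    })
```

The hash binds the record to a specific output, so a downstream consumer can detect whether the marked audio was altered after generation.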

COMMON MISTAKE

"An HR chatbot only triggers Art. 50 — it is not high-risk"

Annex III 4(a) lists "AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates". An HR chatbot that filters or evaluates candidates falls under 4(a) and is high-risk under Art. 6(2) — triggering Art. 11 + Annex IV documentation, Art. 17 QMS, Art. 43 conformity assessment, Art. 26 deployer obligations, plus Art. 50 transparency on top.

Does the AI Act apply to your system?

Answer these four questions to determine your obligations.

Does your system infer from the input it receives how to generate outputs (predictions, content, recommendations or decisions) with some degree of autonomy?
Art. 3(1) — definition of "AI system"
Is the system placed on the EU market or does its output affect persons in the EU?
Art. 2(1) — territorial scope (extraterritorial via 2(1)(c))
Is your system used in any Annex III domain? (employment, credit, education, law enforcement, migration, justice, critical infrastructure, biometrics)
Art. 6(2) + Annex III — high-risk classification
Are you the provider (developer) or the deployer (user) of the system?
Art. 3(3) provider · Art. 3(4) deployer — different obligations

Take the full AI Act risk classification test →

What the ZIP contains

12 PDF documents generated from your inputs. Each cites the article of Regulation (EU) 2024/1689 it fulfils.

1

Risk Classification Report

Identifies whether your system is prohibited (Art. 5), high-risk (Art. 6 + Annex III) or subject to transparency obligations (Art. 50).

2

Technical Documentation

The 9 blocks of Annex IV in full: system description, training data, validation, performance metrics, risk management, human oversight. Art. 11 + Annex IV.

3

EU Declaration of Conformity

Signable document conforming to Art. 47 and Annex V.

4

Compliance Calendar

Key application dates: 2 Feb 2025, 2 Aug 2025, 2 Aug 2026, 2 Aug 2027. Art. 113.

5

Conformity Sheet

Executive summary of compliance status for authorities or commercial buyers. Art. 43 procedure.

6

Quality Management System (QMS)

QMS structure covering the 13 aspects required by Art. 17.

7

Deployer Instructions

Document for the entity deploying your system, conforming to Art. 13.

8

Evidence Checklist

Verifiable evidence list, cross-referenced to every Annex IV block.

9

Incident Report Template

Notification protocol conforming to Art. 73 (15 days general / 10 days death / 2 days widespread).

10

AI Literacy Programme

Training plan conforming to Art. 4, in force since 2 February 2025.

11

Post-Market Monitoring Plan

Plan structure required by Art. 72 and integrated into the technical documentation under Annex IV(9).

12

Fundamental Rights Impact Assessment (FRIA)

Template under Art. 27 for public bodies, private entities providing public services, and Annex III 5(b)(c) deployers.
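The Art. 73 notification windows behind document 9 can be computed mechanically once the clock starts. A minimal sketch, assuming the clock runs from the day the provider becomes aware of the incident; the precise triggers, interim reports and extensions in Art. 73 are more nuanced than this table.

```python
from datetime import date, timedelta

# Art. 73 notification windows, in days after awareness (our own labels).
DEADLINES = {
    "serious_incident": 15,
    "death": 10,
    "widespread_infringement": 2,
}

def notification_deadline(incident_type: str, aware_on: date) -> date:
    """Latest date to notify the market surveillance authority."""
    return aware_on + timedelta(days=DEADLINES[incident_type])
```

For instance, awareness of a death on 1 August puts the notification deadline ten days later, on 11 August.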

See before you buy — Download a sample dossier (PDF, fictional company) — Real structure, real articles, real format. Fictional data.

Generated from your inputs, in your browser. No data leaves your machine.

What you pay

🧾 AI ACT COMPLIANCE CONSULTANCY
€5,000–€15,000
3–6 months. They explain the obligations to you.
✓ AICHECK
€249
12 documents. 45 minutes. Solves the documentation.

Technical documentation and conformity assessment: two layers

● LAYER 1

Technical documentation — Annex IV

12 documents. 45 minutes. €249. The documentation your system needs before being placed on the market.

∅ LAYER 2

Conformity assessment by notified body

If your system falls under Art. 43(1) (Annex III point 1 biometrics with notified-body route, or Annex I products), you will need third-party conformity assessment. That is a separate process. AICheck does not replace it.

We do not sell audits. We do not sell consultancy. We sell the tool that structures your documentation under Annex IV.

Penalty regime

Article 99 of Regulation (EU) 2024/1689. Chapter XII (Penalties) applies from 2 August 2025.

🇪🇺
Non-compliance with prohibited practices (Art. 5)
€35M / 7%

Art. 99(3). Up to €35 million or 7% of total worldwide annual turnover, whichever is higher. For SMEs and start-ups: whichever is lower (Art. 99(6)).

🇪🇺
Non-compliance with operator obligations (high-risk, transparency, deployer)
€15M / 3%

Art. 99(4). Includes failure to draw up technical documentation under Art. 11 + Annex IV. Covers obligations of providers (Art. 16), deployers (Art. 26), authorised representatives (Art. 22), importers (Art. 23), distributors (Art. 24), notified bodies (Art. 31, 33, 34) and transparency under Art. 50.

🇪🇺
Supply of incorrect, incomplete or misleading information
€7.5M / 1%

Art. 99(5). Applies when information provided to notified bodies or national competent authorities is wrong or misleading.

Documenting 5 or more AI systems?

If you operate multiple AI systems and need to document them all under Annex IV, contact us for volume pricing at hello@solidwaretools.com.

Request volume pricing
Reply within one business day

What AICheck guarantees, and what it does not

AICheck produces a document structured under Article 11 and Annex IV of Regulation (EU) 2024/1689 from the information you provide. The accuracy, truthfulness and completeness of that information is your responsibility as provider of the AI system.

We guarantee that the document structure follows Article 11 and Annex IV of Regulation (EU) 2024/1689 and that the legal references cited are correct as of the last verification date. We do not guarantee that a specific document will be accepted by a market surveillance authority in a given case, nor by a commercial buyer in a procurement process.

AICheck is not legal advice. For specific situations, consult a lawyer or specialised regulatory consultancy.

Frequently asked questions

Does the AI Act apply to a basic customer-service chatbot?
Yes. Even a low-risk customer-service chatbot triggers: (i) Art. 4 AI literacy for staff and operators (in force since 2 February 2025); (ii) Art. 50(1) disclosure to the user that they are interacting with AI; (iii) Art. 50(2) machine-readable marking if the chatbot generates synthetic content (e.g., voice synthesis). It does not by itself trigger Annex III high-risk obligations — only the specific use case does.
What about voice assistants — speech recognition and synthesis?
Speech recognition / synthesis is in scope of the AI system definition (Art. 3(1)). Art. 50(1) applies if the assistant interacts directly with natural persons. Art. 50(2) applies to synthetic audio output (voice synthesis) — outputs must be machine-readable as AI-generated. Speech-to-text is unlikely to fall under Annex III 1 biometrics unless used for biometric identification; emotion recognition in workplace/education context is prohibited under Art. 5(1)(f).
Do we have to keep transcripts of chatbot interactions?
The AI Act itself does not impose a transcript retention duty for low-risk chatbots. For high-risk chatbots, Art. 12 (record-keeping by automatic event logging) and Art. 26(6) (deployer log retention of at least 6 months) apply. Separately, the GDPR may require retention or deletion under principles of purpose limitation, data minimisation and storage limitation — depending on what personal data is processed. The two regimes apply in parallel.
Is this a subscription?
No. One-time payment. The licence includes 30 days of editing and 10 regenerations. The PDF you download is yours to keep.
Can I request a refund?
Pursuant to Article 16(m) of Directive 2011/83/EU on consumer rights, by activating the licence you give express consent to the immediate generation of the digital content and acknowledge that you thereby lose the 14-day right of withdrawal. Refunds are accepted only in the case of a reproducible technical failure.
What if the regulation changes?
If the regulation changes while your licence is active, you can regenerate the document with the updated version of the generator at no additional cost.
⚠️ Important notice: AICheck is a documentary self-assessment tool, not legal advice nor a third-party audit. The document under Article 11 and Annex IV of Regulation (EU) 2024/1689 is generated from the data you input. The accuracy of that data is your responsibility. AICheck does not replace a qualified professional assessment.

Don't wait for the consultancy. Generate the Annex IV documentation for your AI system in your browser in 45 minutes.

Twelve documents. Annex IV fully structured. Regulation (EU) 2024/1689. Your data does not leave your machine. The ZIP you download is yours to keep.

€249 one-time payment
12 professional documents · 45 minutes · No subscription · 100% in your browser
Generate dossier — €249
✓ Last regulatory verification: 11 May 2026 · No substantive changes detected · View history