Reg (EU) 2024/1689 · Generate dossier — €249
LIVE — Fines tracker · Obligations calendar · Transposition status — Updated weekly from EUR-Lex, Safety Gate, OEIL and 12 official sources · View regulatory intelligence →

AI Act Article 4: the AI literacy requirement for every provider and deployer, in force across the EU since 2 February 2025.

Article 4 of Regulation (EU) 2024/1689 imposes a direct obligation — separate from the high-risk regime — on every provider and every deployer of an AI system: ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf. The obligation is in Chapter I and has been in force since 2 February 2025 under Art. 113(a). It applies regardless of risk classification, regardless of company size, regardless of whether personal data is processed. The standard is contextual: it must take into account technical knowledge, experience, education and training, the context in which the AI systems are to be used, and the persons or groups of persons on whom the AI systems are to be used.

Generate AI Act dossier — €249 · Free: check your AI system risk

€249 one-time payment · 12 PDF documents in ZIP · 45 minutes · 100% in your browser

Regulation (EU) 2024/1689 · Article 11 + Annex IV · 12 documents · 100% browser-side — your data never leaves your machine

The numbers

Art. 4
Chapter I — applies to every provider and deployer of any AI system, regardless of risk classification.
2 Feb 2025
Date of application under Art. 113(a). The Art. 4 obligation has been in force since this date.
All staff
Including 'other persons dealing with the operation and use of AI systems on the provider's or deployer's behalf'.

What 'sufficient AI literacy' means and how to evidence it

Article 4 is short — a single paragraph — but its scope is broad. The Regulation defines AI literacy in Art. 3(56) and Article 4 operationalises it as an obligation on providers and deployers.

The Article 4 text
"Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used."
Definition of AI literacy (Art. 3(56))
Skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations under this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
Who must be trained
"Staff and other persons dealing with the operation and use of AI systems on the provider's or deployer's behalf". This covers employees who interact with the AI in operations, decision-makers who act on AI outputs, and contractors or external operators acting on behalf of the entity. Persons on whom the AI is used (data subjects) are within the definition of AI literacy in Art. 3(56) but the Art. 4 obligation primarily concerns operators, not data subjects.
Standard — proportionate, contextual
The level of literacy is "sufficient" — not absolute — and is proportionate to: (i) the technical knowledge, experience, education and training of the persons; (ii) the context the AI systems are to be used in; (iii) the persons or groups of persons on whom the AI systems are to be used. A compliance officer overseeing a high-risk credit-scoring AI needs deeper literacy than a marketing assistant using a low-risk text-generation tool.
Typical evidence
A documented AI literacy programme tailored to roles; attendance records and training logs; assessment of knowledge before granting access to AI tools; periodic refresh of training when systems or use cases change; integration into onboarding for new staff and contractors; explicit coverage of Art. 5 prohibitions, Art. 50 transparency duties for chatbots and synthetic content, and (where applicable) Art. 26 deployer obligations.
Penalty regime
Article 4 is not in the list of articles enumerated under Art. 99(4) (€15M / 3%). Penalties for Art. 4 breach fall under the general Member State penalty regime under Art. 99(1) — Member States shall lay down rules on penalties that are effective, proportionate and dissuasive, taking into account guidelines issued by the Commission. The applicable penalty therefore varies by Member State.

Three common mistakes

COMMON MISTAKE

"AI literacy only applies to the compliance team"

Art. 4 applies to staff and other persons dealing with the operation and use of AI systems on the provider's or deployer's behalf. That includes the marketing team using a generative tool, the HR team using a screening AI, the customer service team using a chatbot, the finance team using a forecasting model. Each role needs literacy proportionate to its use — but every role using AI is in scope.

COMMON MISTAKE

"Generic IT or data-protection training is enough"

Art. 3(56) defines AI literacy as skills, knowledge and understanding that allow informed deployment of AI systems and awareness of opportunities, risks and possible harm — taking into account rights and obligations under the Regulation. Generic IT or GDPR training does not cover the AI-specific elements: Art. 5 prohibitions, Annex III high-risk categories, Art. 50 transparency, deepfake risks, model limitations and bias, human-oversight expectations under Art. 14.

COMMON MISTAKE

"Art. 4 has no penalty so it is not enforceable"

Art. 99(1) — Member States shall lay down rules on penalties applicable to infringements of the Regulation. While Art. 4 is not in the Art. 99(4) €15M / 3% tier, breaches are sanctionable under the national penalty regime each Member State sets. In addition, failure to train staff is a contributing factor that aggravates other breaches — for example, a serious incident under Art. 73 caused by an untrained operator, or a transparency breach under Art. 50 caused by staff who did not know the obligation existed.

Does the AI Act apply to your system?

Answer these four questions to determine your obligations.

Is your system a machine-based system that infers, from the inputs it receives, how to generate outputs such as predictions, content, recommendations or decisions?
Art. 3(1) — definition of "AI system"
Is the system placed on the EU market or does its output affect persons in the EU?
Art. 2(1) — territorial scope (extraterritorial via 2(1)(c))
Is your system used in any Annex III domain? (employment, credit, education, law enforcement, migration, justice, critical infrastructure, biometrics)
Art. 6(2) + Annex III — high-risk classification
Are you the provider (developer) or the deployer (user) of the system?
Art. 3(3) provider · Art. 3(4) deployer — different obligations
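The four questions above combine into a simple screening sequence. As an illustration only — not legal logic, and with hypothetical names of our own choosing — the flow can be sketched like this:

```python
# Hypothetical sketch of the four-question screening above.
# Illustrative only: it mirrors the questionnaire, not the full Regulation.
from dataclasses import dataclass

@dataclass
class Screening:
    is_ai_system: bool      # Q1 — Art. 3(1) definition met?
    in_eu_scope: bool       # Q2 — Art. 2(1) territorial scope?
    annex_iii_domain: bool  # Q3 — used in an Annex III domain?
    role: str               # Q4 — "provider" or "deployer"

def classify(s: Screening) -> str:
    # Questions 1 and 2 are gating: fail either and the Act does not apply.
    if not s.is_ai_system or not s.in_eu_scope:
        return "outside AI Act scope"
    # Question 3 drives the high-risk presumption under Art. 6(2) + Annex III.
    risk = ("high-risk (Art. 6(2) + Annex III)" if s.annex_iii_domain
            else "not high-risk via Annex III")
    # Question 4 determines which obligation set applies; Art. 4 applies to both roles.
    return f"{s.role}: {risk}; Art. 4 AI literacy applies either way"

print(classify(Screening(True, True, False, "deployer")))
```

The sketch deliberately omits the Art. 6(3) derogation and the Annex I product route — real classification needs the full test linked below.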

Take the full AI Act risk classification test →

What the ZIP contains

12 PDF documents generated from your inputs. Each cites the article of Regulation (EU) 2024/1689 it fulfils.

1

Risk Classification Report

Identifies whether your system is prohibited (Art. 5), high-risk (Art. 6 + Annex III) or subject to transparency obligations (Art. 50).

2

Technical Documentation

The 9 blocks of Annex IV in full: system description, training data, validation, performance metrics, risk management, human oversight. Art. 11 + Annex IV.

3

EU Declaration of Conformity

Signable document conforming to Art. 47 and Annex V.

4

Compliance Calendar

Key application dates: 2 Feb 2025, 2 Aug 2025, 2 Aug 2026, 2 Aug 2027. Art. 113.

5

Conformity Sheet

Executive summary of compliance status for authorities or commercial buyers. Art. 43 procedure.

6

Quality Management System (QMS)

QMS structure covering the 13 aspects required by Art. 17.

7

Deployer Instructions

Document for the entity deploying your system, conforming to Art. 13.

8

Evidence Checklist

Verifiable evidence list, cross-referenced to every Annex IV block.

9

Incident Report Template

Notification protocol conforming to Art. 73 (15 days general / 10 days death / 2 days widespread).

10

AI Literacy Programme

Training plan conforming to Art. 4, in force since 2 February 2025.

11

Post-Market Monitoring Plan

Plan structure required by Art. 72 and integrated into the technical documentation under Annex IV(9).

12

Fundamental Rights Impact Assessment (FRIA)

Template under Art. 27 for public bodies, private entities providing public services, and deployers of Annex III points 5(b) and 5(c) systems.

See before you buy — Download a sample dossier (PDF, fictional company) — Real structure, real articles, real format. Fictional data.

Generated from your inputs, in your browser. No data leaves your machine.

What you pay

🧾 AI ACT COMPLIANCE CONSULTANCY
€5,000–€15,000
3–6 months. They explain the obligations to you.
✓ AICHECK
€249
12 documents. 45 minutes. Produces the documentation.

Technical documentation and conformity assessment: two layers

● LAYER 1

Technical documentation — Annex IV

12 documents. 45 minutes. €249. The documentation your system needs before being placed on the market.

∅ LAYER 2

Conformity assessment by notified body

If your system falls under Art. 43(1) (Annex III point 1 biometrics with the notified-body route) or Art. 43(3) (high-risk systems covered by Annex I Union harmonisation legislation), you will need third-party conformity assessment. That is a separate process. AICheck does not replace it.

We do not sell audits. We do not sell consultancy. We sell the tool that structures your documentation under Annex IV.

Penalty regime

Article 99 of Regulation (EU) 2024/1689. Chapter XII (Penalties) applies from 2 August 2025.

🇪🇺
Non-compliance with prohibited practices (Art. 5)
€35M / 7%

Art. 99(3). Up to €35 million or 7% of total worldwide annual turnover, whichever is higher. For SMEs and start-ups: whichever is lower (Art. 99(6)).

🇪🇺
Non-compliance with operator obligations (high-risk, transparency, deployer)
€15M / 3%

Art. 99(4). Includes failure to draw up technical documentation under Art. 11 + Annex IV. Covers obligations of providers (Art. 16), deployers (Art. 26), authorised representatives (Art. 22), importers (Art. 23), distributors (Art. 24), notified bodies (Art. 31, 33, 34) and transparency under Art. 50.

🇪🇺
Supply of incorrect, incomplete or misleading information
€7.5M / 1%

Art. 99(5). Applies when information provided to notified bodies or national competent authorities is wrong or misleading.

Documenting 5 or more AI systems?

If you operate multiple AI systems and need to document them all under Annex IV, contact us for volume pricing at hello@solidwaretools.com.

Request volume pricing
Reply within one business day

What AICheck guarantees, and what it does not

AICheck produces a document structured under Article 11 and Annex IV of Regulation (EU) 2024/1689 from the information you provide. The accuracy, truthfulness and completeness of that information is your responsibility as provider of the AI system.

We guarantee that the document structure follows Article 11 and Annex IV of Regulation (EU) 2024/1689 and that the legal references cited are correct as of the last verification date. We do not guarantee that a specific document will be accepted by a market surveillance authority in a given case, nor by a commercial buyer in a procurement process.

AICheck is not legal advice. For specific situations, consult a lawyer or specialised regulatory consultancy.

Frequently asked questions

What does "sufficient" mean for Art. 4 AI literacy?
Art. 4 sets a contextual standard — proportionate to (i) the technical knowledge, experience, education and training of the persons; (ii) the context the AI systems are to be used in; (iii) the persons or groups of persons on whom the AI systems are to be used. "Sufficient" means enough to understand how the AI works, what it can and cannot do, the relevant obligations under the Regulation, and the human-oversight expectations for the role.
Do I need to prove the AI literacy training?
The Regulation does not prescribe specific evidence. In practice, however, if a market surveillance authority inspects, the absence of a documented training programme, attendance records, role-based materials and a refresh schedule will be read as a failure to meet the Art. 4 obligation. AI literacy is also evidence of compliance with related obligations — Art. 14 human oversight for high-risk AI cannot work without sufficient operator literacy.
Does Art. 4 apply to small companies?
Yes. Art. 4 does not have a size threshold. Every provider and every deployer is in scope, regardless of number of employees or revenue. The standard of "sufficient" is contextual and proportionate to the AI systems used — a 5-person company using a single low-risk chatbot has a lighter implementation burden than a multinational deploying multiple Annex III systems — but the obligation itself applies.
Is this a subscription?
No. One-time payment. The licence includes 30 days of editing and 10 regenerations. The PDF you download is yours to keep.
Can I request a refund?
Pursuant to Article 16(m) of Directive 2011/83/EU on consumer rights, by activating the licence you give express consent to the immediate generation of digital content, waiving the 14-day withdrawal right. Refunds are only accepted in the case of a reproducible technical failure.
What if the regulation changes?
If the regulation changes while your licence is active, you can regenerate the document with the updated version of the generator at no additional cost.
⚠️ Important notice: AICheck is a documentary self-assessment tool, not legal advice nor a third-party audit. The document under Article 11 and Annex IV of Regulation (EU) 2024/1689 is generated from the data you input. The accuracy of that data is your responsibility. AICheck does not replace a qualified professional assessment.

Don't wait for the consultancy. Generate the Annex IV documentation for your AI system in your browser in 45 minutes.

Twelve documents. Annex IV fully structured. Regulation (EU) 2024/1689. Your data does not leave your machine. The ZIP you download is yours to keep.

€249 one-time payment
12 professional documents · 45 minutes · No subscription · 100% in your browser
Generate dossier — €249
✓ Last regulatory verification: 11 May 2026 · No substantive changes detected · View history