
AI Act Article 5: the complete list of 8 prohibited AI practices, in force since 2 February 2025 across the EU.

Article 5 of Regulation (EU) 2024/1689 lists eight AI practices that are banned outright — placing on the market, putting into service or using these systems in the Union is a regulatory offence. Non-compliance triggers the highest tier of fines: up to €35 million or 7% of worldwide annual turnover (Art. 99(3)). The prohibitions apply from 2 February 2025 under Art. 113(a). AICheck produces the Risk Classification Report that documents your verification against the eight practices.

Generate AI Act dossier — €249 · Free: check your AI system risk

€249 one-time payment · 12 PDF documents in ZIP · 45 minutes · 100% in your browser

Regulation (EU) 2024/1689 · Article 11 + Annex IV · 12 documents · 100% browser-side — your data never leaves your machine

The numbers

8 practices
Banned under Art. 5(1)(a) to (h). Placing on the market, putting into service and using are all prohibited.
2 Feb 2025
Application date under Art. 113(a). Chapter II has been in force since this date.
€35M / 7%
Art. 99(3). Up to €35M or 7% of worldwide annual turnover, whichever is higher (lower for SMEs under Art. 99(6)).

The 8 prohibited practices, verbatim by sub-paragraph

Each prohibition has its own conditions. Reading "the system uses biometrics" or "the system scores people" is not enough — you must check the specific elements that trigger the ban.

a
Art. 5(1)(a) — Subliminal / manipulative / deceptive techniques
An AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting behaviour by appreciably impairing the ability to make an informed decision, causing or reasonably likely to cause significant harm.
b
Art. 5(1)(b) — Exploitation of vulnerabilities
An AI system that exploits vulnerabilities of a natural person or specific group due to age, disability or a specific social or economic situation, with the objective or effect of materially distorting behaviour, causing or reasonably likely to cause significant harm.
c
Art. 5(1)(c) — Social scoring
AI systems for the evaluation or classification of natural persons or groups over a period of time, based on social behaviour or known/inferred/predicted personal or personality characteristics, where the social score leads to detrimental treatment in unrelated contexts or unjustified/disproportionate treatment.
d
Art. 5(1)(d) — Predictive policing based solely on profiling
AI systems for risk assessment of natural persons to predict criminal-offence risk, based solely on profiling or assessing personality traits and characteristics. Does NOT apply to systems supporting human assessment based on objective and verifiable facts directly linked to a criminal activity.
e
Art. 5(1)(e) — Untargeted scraping of facial images
AI systems that create or expand facial-recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
f
Art. 5(1)(f) — Emotion recognition in workplace and education
AI systems to infer emotions of a natural person in the areas of workplace and educational institutions, except where intended for medical or safety reasons.
g
Art. 5(1)(g) — Biometric categorisation by sensitive attributes
Biometric categorisation systems that categorise natural persons individually based on biometric data to deduce or infer race, political opinions, trade-union membership, religious or philosophical beliefs, sex life or sexual orientation. Does not cover labelling or filtering of lawfully acquired biometric datasets.
h
Art. 5(1)(h) — Real-time RBI by law enforcement in public spaces
'Real-time' remote biometric identification systems in publicly accessible spaces for law enforcement, unless strictly necessary for: (i) targeted search of victims (abduction, trafficking, sexual exploitation, missing persons); (ii) prevention of a specific, substantial and imminent threat to life or physical safety, or a genuine threat of terrorist attack; (iii) localisation or identification of suspects of offences listed in Annex II punishable by a custodial sentence with a maximum of at least four years. Subject to Art. 5(2)–(7) safeguards.

Three common mistakes

COMMON MISTAKE

"We only use it internally — Art. 5 does not apply"

Art. 5 prohibits the placing on the market, the putting into service AND the use of these systems. Internal deployment within an EU organisation is "putting into service" or "use". The geographic scope of Art. 2(1) catches any deployer in the Union.

COMMON MISTAKE

"Our recommendation algorithm is manipulative under Art. 5(1)(a)"

Art. 5(1)(a) requires three cumulative elements: (i) subliminal techniques beyond consciousness OR purposefully manipulative/deceptive techniques; (ii) materially distorting behaviour by appreciably impairing informed decision-making; (iii) causing or reasonably likely to cause significant harm. Ordinary commercial recommendation does not meet these elements; the bar is high and contested.
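The cumulative nature of the three elements can be illustrated with a minimal sketch. This is a hypothetical helper for reasoning about the structure of the test, not a legal assessment tool — the element names are ours:

```python
def art_5_1_a_triggered(manipulative_technique: bool,
                        materially_distorts_behaviour: bool,
                        significant_harm_likely: bool) -> bool:
    """Illustrative only: Art. 5(1)(a) requires ALL three elements.

    (i)   subliminal, purposefully manipulative or deceptive technique
    (ii)  material distortion of behaviour by appreciably impairing
          an informed decision
    (iii) significant harm caused or reasonably likely to be caused
    """
    return (manipulative_technique
            and materially_distorts_behaviour
            and significant_harm_likely)

# An ordinary recommender that nudges purchases but causes no
# significant harm fails elements (ii) and (iii):
art_5_1_a_triggered(True, False, False)  # → False: ban not triggered
```

The point of the sketch: a single element in isolation never triggers the prohibition; all three must hold together.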

COMMON MISTAKE

"Workplace wellbeing apps that detect stress are fine"

Art. 5(1)(f) prohibits inferring emotions of natural persons in the areas of workplace and educational institutions. The only carve-outs are medical or safety reasons. A wellbeing tool that infers stress, mood or burnout in a workplace context is on the wrong side of Art. 5(1)(f) unless it qualifies as a medical device.

Does the AI Act apply to your system?

Answer these four questions to determine your obligations.

Does your system operate with some autonomy and infer outputs (predictions, content, recommendations or decisions) from its inputs?
Art. 3(1) — definition of "AI system"
Is the system placed on the EU market or does its output affect persons in the EU?
Art. 2(1) — territorial scope (extraterritorial via 2(1)(c))
Is your system used in any Annex III domain? (employment, credit, education, law enforcement, migration, justice, critical infrastructure, biometrics)
Art. 6(2) + Annex III — high-risk classification
Are you the provider (developer) or the deployer (user) of the system?
Art. 3(3) provider · Art. 3(4) deployer — different obligations
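As a rough illustration (not legal advice), the four questions above can be chained into a simple screening flow. The function name, inputs and simplified outcomes are our own sketch, not the Regulation's wording:

```python
def screen_ai_system(is_ai_system: bool,
                     in_eu_scope: bool,
                     annex_iii_domain: bool,
                     role: str) -> str:
    """Simplified sketch of the four screening questions above."""
    if not is_ai_system:
        return "Outside the AI Act: not an AI system under Art. 3(1)"
    if not in_eu_scope:
        return "Outside territorial scope (Art. 2(1))"
    if annex_iii_domain:
        risk = "potentially high-risk (Art. 6(2) + Annex III)"
    else:
        risk = "not high-risk via Annex III; check transparency (Art. 50)"
    return f"In scope as {role}: {risk}"

print(screen_ai_system(True, True, True, "provider"))
```

Note the order matters: scope questions (Art. 3(1), Art. 2(1)) come before risk classification, and the provider/deployer role only changes which obligations apply, not whether the Act applies.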

Take the full AI Act risk classification test →

What the ZIP contains

12 PDF documents generated from your inputs. Each cites the article of Regulation (EU) 2024/1689 it fulfils.

1

Risk Classification Report

Identifies whether your system is prohibited (Art. 5), high-risk (Art. 6 + Annex III) or subject to transparency obligations (Art. 50).

2

Technical Documentation

The 9 blocks of Annex IV in full: system description, training data, validation, performance metrics, risk management, human oversight. Art. 11 + Annex IV.

3

EU Declaration of Conformity

Signable document conforming to Art. 47 and Annex V.

4

Compliance Calendar

Key application dates: 2 Feb 2025, 2 Aug 2025, 2 Aug 2026, 2 Aug 2027. Art. 113.

5

Conformity Sheet

Executive summary of compliance status for authorities or commercial buyers. Art. 43 procedure.

6

Quality Management System (QMS)

QMS structure covering the 13 aspects required by Art. 17.

7

Deployer Instructions

Document for the entity deploying your system, conforming to Art. 13.

8

Evidence Checklist

Verifiable evidence list, cross-referenced to every Annex IV block.

9

Incident Report Template

Notification protocol conforming to Art. 73 (15 days general / 10 days death / 2 days widespread).

10

AI Literacy Programme

Training plan conforming to Art. 4, in force since 2 February 2025.

11

Post-Market Monitoring Plan

Plan structure required by Art. 72 and integrated into the technical documentation under Annex IV(9).

12

Fundamental Rights Impact Assessment (FRIA)

Template under Art. 27 for public bodies, private entities providing public services, and deployers under Annex III, points 5(b) and 5(c).

See before you buy — Download a sample dossier (PDF, fictional company) — Real structure, real articles, real format. Fictional data.

Generated from your inputs, in your browser. No data leaves your machine.

What you pay

🧾 AI ACT COMPLIANCE CONSULTANCY
€5,000–€15,000
3–6 months. They explain the obligations to you.
✓ AICHECK
€249
12 documents. 45 minutes. Solves the documentation.

Technical documentation and conformity assessment: two layers

● LAYER 1

Technical documentation — Annex IV

12 documents. 45 minutes. €249. The documentation your system needs before being placed on the market.

∅ LAYER 2

Conformity assessment by notified body

If your system falls under Art. 43(1) (Annex III point 1 biometrics with notified-body route, or Annex I products), you will need third-party conformity assessment. That is a separate process. AICheck does not replace it.

We do not sell audits. We do not sell consultancy. We sell the tool that structures your documentation under Annex IV.

Penalty regime

Article 99 of Regulation (EU) 2024/1689. Chapter XII (Penalties) applies from 2 August 2025.

🇪🇺
Non-compliance with prohibited practices (Art. 5)
€35M / 7%

Art. 99(3). Up to €35 million or 7% of total worldwide annual turnover, whichever is higher. For SMEs and start-ups: whichever is lower (Art. 99(6)).

🇪🇺
Non-compliance with operator obligations (high-risk, transparency, deployer)
€15M / 3%

Art. 99(4). Includes failure to draw up technical documentation under Art. 11 + Annex IV. Covers obligations of providers (Art. 16), deployers (Art. 26), authorised representatives (Art. 22), importers (Art. 23), distributors (Art. 24), notified bodies (Art. 31, 33, 34) and transparency under Art. 50.

🇪🇺
Supply of incorrect, incomplete or misleading information
€7.5M / 1%

Art. 99(5). Applies when information provided to notified bodies or national competent authorities is wrong or misleading.

Documenting 5 or more AI systems?

If you operate multiple AI systems and need to document them all under Annex IV, contact us for volume pricing at hello@solidwaretools.com.

Request volume pricing
Reply within one business day

What AICheck guarantees, and what it does not

AICheck produces a document structured under Article 11 and Annex IV of Regulation (EU) 2024/1689 from the information you provide. The accuracy, truthfulness and completeness of that information are your responsibility as provider of the AI system.

We guarantee that the document structure follows Article 11 and Annex IV of Regulation (EU) 2024/1689 and that the legal references cited are correct as of the last verification date. We do not guarantee that a specific document will be accepted by a market surveillance authority in a given case, nor by a commercial buyer in a procurement process.

AICheck is not legal advice. For specific situations, consult a lawyer or specialised regulatory consultancy.

Frequently asked questions

What is the maximum fine for breaching Art. 5?
Non-compliance with Art. 5 is subject to administrative fines of up to €35,000,000 or, if the offender is an undertaking, up to 7% of total worldwide annual turnover for the preceding financial year — whichever is higher (Art. 99(3)). For SMEs and start-ups, the lower of the two amounts applies (Art. 99(6)).
When did Art. 5 enter into force?
Art. 5 is in Chapter II of Regulation (EU) 2024/1689. Chapter II applies from 2 February 2025 under Art. 113(a). Penalties under Chapter XII (Art. 99) apply from 2 August 2025 under Art. 113(b). Member States had to designate their national competent authorities by 2 August 2025 (Art. 70).
Is workplace stress detection prohibited under Art. 5(1)(f)?
Article 5(1)(f) prohibits AI systems used to infer emotions of natural persons in the areas of workplace and educational institutions, except where intended for medical or safety reasons. Stress detection that fits the AI-system definition under Art. 3(1) and infers a stress emotion in workplace context falls within Art. 5(1)(f) unless it qualifies as a medical device. Wellbeing or productivity branding does not change the legal classification.
Is this a subscription?
No. One-time payment. The licence includes 30 days of editing and 10 regenerations. The PDF you download is yours to keep.
Can I request a refund?
Pursuant to Article 16(m) of Directive 2011/83/EU on consumer rights, by activating the licence you give express consent to the immediate generation of digital content, waiving the 14-day withdrawal right. Refunds are only accepted in the case of a reproducible technical failure.
What if the regulation changes?
If the regulation changes while your licence is active, you can regenerate the document with the updated version of the generator at no additional cost.
⚠️ Important notice: AICheck is a documentary self-assessment tool, not legal advice nor a third-party audit. The document under Article 11 and Annex IV of Regulation (EU) 2024/1689 is generated from the data you input. The accuracy of that data is your responsibility. AICheck does not replace a qualified professional assessment.

Don't wait for the consultancy. Generate the Annex IV documentation for your AI system in your browser in 45 minutes.

Twelve documents. Annex IV fully structured. Regulation (EU) 2024/1689. Your data does not leave your machine. The ZIP you download is yours to keep.

€249 one-time payment
12 professional documents · 45 minutes · No subscription · 100% in your browser
Generate dossier — €249
✓ Last regulatory verification: 11 May 2026 · No substantive changes detected · View history