
AI Act biometric and facial recognition: the Article 5 prohibitions, the Annex III point 1 high-risk categories, and the Article 26(10) authorisation regime.

Biometric AI is the most heavily regulated category in Regulation (EU) 2024/1689. Three overlapping regimes apply. Article 5 prohibits four biometric practices: untargeted facial-image scraping (5(1)(e)), emotion recognition in the workplace and education (5(1)(f)), biometric categorisation by sensitive attributes (5(1)(g)) and real-time remote biometric identification by law enforcement (5(1)(h)), the last with narrow exceptions. Annex III point 1 classifies three biometric categories as high-risk: remote biometric identification (excluding verification), biometric categorisation by sensitive or protected attributes, and emotion recognition. Article 26(10) sets ex-ante authorisation rules for post-remote biometric identification by law enforcement. Breaching Art. 5 carries penalties of up to €35M or 7% of worldwide turnover; breaching high-risk biometric obligations, up to €15M or 3%.

Generate AI Act dossier — €249 · Free: check your AI system risk

€249 one-time payment · 12 PDF documents in ZIP · 45 minutes · 100% in your browser

Regulation (EU) 2024/1689 · Article 11 + Annex IV · 12 documents · 100% browser-side — your data never leaves your machine

The numbers

4 prohibitions
Art. 5(1)(e), (f), (g), (h) — facial scraping, workplace emotion, sensitive biometric categorisation, real-time RBI.
Annex III 1
Three high-risk biometric categories: remote ID (excluding verification), categorisation by sensitive attributes, emotion recognition.
Art. 26(10)
Ex-ante judicial or administrative authorisation for post-remote biometric ID by law enforcement (max 48 hours).

The four prohibitions, the three high-risk categories, and the authorisation regime

Biometric AI moves between regimes depending on what it does, where it does it, and who is using it. The same hardware can be prohibited in one context, high-risk in another and out of scope in a third.

Art. 5(1)(e) — Untargeted scraping of facial images (prohibited)
AI systems that create or expand facial-recognition databases through untargeted scraping of facial images from the internet or CCTV footage. The prohibition covers the placing on the market, the putting into service and the use of such systems. Penalty: up to €35M or 7% under Art. 99(3).
Art. 5(1)(f) — Emotion recognition in workplace and education (prohibited)
AI systems that infer the emotions of a natural person in the workplace or in educational institutions. Exception: AI systems intended to be put in place or placed on the market for medical or safety reasons.
Art. 5(1)(g) — Biometric categorisation by sensitive attributes (prohibited)
Biometric categorisation systems that individually categorise natural persons based on their biometric data to deduce or infer race, political opinions, trade-union membership, religious or philosophical beliefs, sex life or sexual orientation. Does NOT cover the labelling or filtering of lawfully acquired biometric datasets (such as images) based on biometric data, or the categorising of biometric data in the area of law enforcement.
Art. 5(1)(h) — Real-time RBI by law enforcement (prohibited with exceptions)
"Real-time" remote biometric identification systems in publicly accessible spaces for law enforcement, unless strictly necessary for: (i) targeted search of victims (abduction, trafficking, sexual exploitation, missing persons); (ii) prevention of specific, substantial, imminent threat to life/physical safety or genuine terrorist-attack threat; (iii) localisation or identification of suspects for offences in Annex II punishable by ≥ 4 years' custody. Subject to Art. 5(2)–(7) safeguards, including prior judicial authorisation and fundamental rights impact assessment.
Annex III 1(a) — Remote biometric identification (high-risk)
RBI systems where use is permitted under Union or national law. Explicit exclusion: AI systems intended to be used for biometric verification whose sole purpose is to confirm that a specific natural person is the person he or she claims to be — phone face-unlock, fingerprint-unlock, building access — are NOT high-risk under Annex III 1(a).
Annex III 1(b) — Biometric categorisation by protected attributes (high-risk)
AI systems intended to be used for biometric categorisation according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics. Categorisation by Art. 5(1)(g) protected attributes is prohibited; categorisation by other sensitive or protected attributes is high-risk.
Annex III 1(c) — Emotion recognition (high-risk)
AI systems intended to be used for emotion recognition. Workplace and education are prohibited under Art. 5(1)(f); other contexts (e.g., retail, healthcare outside medical-device regime, customer experience) are high-risk under Annex III 1(c) when lawful.
Art. 26(10) — Post-remote biometric identification authorisation
In the framework of an investigation for the targeted search of a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for post-remote biometric identification shall request an authorisation — ex ante, or without undue delay and no later than 48 hours — from a judicial or independent administrative authority whose decision is binding and subject to judicial review. Exception: initial identification of a potential suspect based on objective and verifiable facts directly linked to the offence. Use is limited to what is strictly necessary for a specific criminal offence.
Conformity assessment route
Annex III point 1 systems may go through Annex VI internal control (if harmonised standards applied) OR Annex VII assessment with notified body — Art. 43(1). This is the only Annex III category where the notified-body route is on the table. For Annex III points 2 to 8, Annex VI internal control is the only path.
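As a rough mental model, the routing between the three regimes above can be sketched as a decision function. This is an illustrative simplification only — the enum, function and parameter names are our own, the Art. 5(1)(h) exceptions and Art. 5(2)–(7) safeguards are not modelled, and nothing here is legal advice:

```python
from enum import Enum

class Regime(Enum):
    PROHIBITED = "prohibited (Art. 5)"
    HIGH_RISK = "high-risk (Annex III point 1)"
    OUT_OF_SCOPE = "outside Art. 5 and Annex III point 1"

def classify_biometric_system(
    purpose: str,                    # "identification" | "verification" | "categorisation" | "emotion"
    context: str,                    # e.g. "workplace", "education", "retail", "public_space"
    realtime: bool = False,
    law_enforcement: bool = False,
    art5g_attributes: bool = False,  # the Art. 5(1)(g) list: race, political opinions, etc.
    untargeted_scraping: bool = False,
) -> Regime:
    # Art. 5(1)(e): untargeted facial-image scraping is unconditional,
    # with no law-enforcement carve-out
    if untargeted_scraping:
        return Regime.PROHIBITED
    # Art. 5(1)(f): emotion recognition in workplace/education
    # (the medical/safety exception is not modelled)
    if purpose == "emotion" and context in ("workplace", "education"):
        return Regime.PROHIBITED
    # Art. 5(1)(g): categorisation inferring the listed protected attributes
    if purpose == "categorisation" and art5g_attributes:
        return Regime.PROHIBITED
    # Art. 5(1)(h): real-time RBI in publicly accessible spaces for law enforcement
    # (the three narrow exceptions are not modelled)
    if purpose == "identification" and realtime and law_enforcement and context == "public_space":
        return Regime.PROHIBITED
    # Annex III 1(a) exclusion: one-to-one verification is not high-risk
    if purpose == "verification":
        return Regime.OUT_OF_SCOPE
    # Annex III 1(a)-(c): remaining RBI, categorisation by other protected
    # attributes, and emotion recognition are high-risk
    if purpose in ("identification", "categorisation", "emotion"):
        return Regime.HIGH_RISK
    return Regime.OUT_OF_SCOPE
```

Same capability, different regime: `classify_biometric_system("emotion", "workplace")` lands in the prohibition, while `classify_biometric_system("emotion", "retail")` is high-risk under Annex III 1(c).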

Three common mistakes

COMMON MISTAKE

"Smartphone face-unlock is high-risk biometric identification"

Annex III 1(a) excludes "AI systems intended to be used for biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be". Face-unlock, fingerprint-unlock and similar one-to-one verification — confirming identity rather than identifying from a database — are out of Annex III 1(a). The exclusion does not extend to biometric categorisation (1(b)) or emotion recognition (1(c)).

COMMON MISTAKE

"Scraping facial images is fine if used for law enforcement"

Art. 5(1)(e) prohibits the placing on the market, the putting into service or the use of AI systems that create or expand facial-recognition databases through untargeted scraping of facial images from the internet or CCTV footage. The Art. 5(1)(e) prohibition is unconditional — there is no law-enforcement carve-out. The carve-outs in Art. 5(1)(h) (real-time RBI) and the categorisation language in 5(1)(g) do not extend to the scraping prohibition.

COMMON MISTAKE

"GDPR is the only law we need to think about for biometric AI"

GDPR Art. 9 covers processing of biometric data as a special category of personal data and requires a lawful basis under Art. 9(2). The AI Act adds independent obligations: Art. 5 prohibitions for the four practices, Annex III 1 high-risk classification triggering Art. 11 + Annex IV documentation, Art. 27 FRIA where applicable, Art. 26(10) authorisation for post-RBI by law enforcement. Both regulations apply in parallel under Art. 2(7) of the AI Act.

Does the AI Act apply to your system?

Answer these four questions to determine your obligations.

Does your system infer from its inputs how to generate outputs — predictions, content, recommendations or decisions — with some degree of autonomy?
Art. 3(1) — definition of "AI system"
Is the system placed on the EU market or does its output affect persons in the EU?
Art. 2(1) — territorial scope (extraterritorial via 2(1)(c))
Is your system used in any Annex III domain? (employment, credit, education, law enforcement, migration, justice, critical infrastructure, biometrics)
Art. 6(2) + Annex III — high-risk classification
Are you the provider (developer) or the deployer (user) of the system?
Art. 3(3) provider · Art. 3(4) deployer — different obligations
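The four questions above work as a simple scoping funnel: the first two gate applicability, the third flags high-risk classification, the fourth determines whose obligations apply. A minimal sketch (class, function and step labels are our own; a real classification requires the full Art. 6 analysis):

```python
from dataclasses import dataclass

@dataclass
class ScopingAnswers:
    is_ai_system: bool       # Q1 - Art. 3(1) definition of "AI system"
    eu_nexus: bool           # Q2 - Art. 2(1) territorial scope
    annex_iii_domain: bool   # Q3 - Art. 6(2) + Annex III
    role: str                # Q4 - "provider" (Art. 3(3)) or "deployer" (Art. 3(4))

def next_steps(a: ScopingAnswers) -> list:
    # Q1 and Q2 are gating: no AI system or no EU nexus means the Act does not apply
    if not (a.is_ai_system and a.eu_nexus):
        return ["AI Act out of scope"]
    steps = ["check Art. 5 prohibited practices first"]
    if a.annex_iii_domain:
        steps.append("high-risk candidate under Art. 6(2) + Annex III")
        if a.role == "provider":
            steps += ["Art. 16 provider obligations",
                      "Art. 11 + Annex IV technical documentation"]
        else:
            steps.append("Art. 26 deployer obligations")
    else:
        steps.append("check Art. 50 transparency obligations")
    return steps
```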

Take the full AI Act risk classification test →

What the ZIP contains

12 PDF documents generated from your inputs. Each cites the article of Regulation (EU) 2024/1689 it fulfils.

1

Risk Classification Report

Identifies whether your system is prohibited (Art. 5), high-risk (Art. 6 + Annex III) or subject to transparency obligations (Art. 50).

2

Technical Documentation

The 9 blocks of Annex IV in full: system description, training data, validation, performance metrics, risk management, human oversight. Art. 11 + Annex IV.

3

EU Declaration of Conformity

Signable document conforming to Art. 47 and Annex V.

4

Compliance Calendar

Key application dates: 2 Feb 2025, 2 Aug 2025, 2 Aug 2026, 2 Aug 2027. Art. 113.

5

Conformity Sheet

Executive summary of compliance status for authorities or commercial buyers. Art. 43 procedure.

6

Quality Management System (QMS)

QMS structure covering the 13 aspects required by Art. 17.

7

Deployer Instructions

Document for the entity deploying your system, conforming to Art. 13.

8

Evidence Checklist

Verifiable evidence list, cross-referenced to every Annex IV block.

9

Incident Report Template

Notification protocol conforming to Art. 73 (15 days general / 10 days death / 2 days widespread).

10

AI Literacy Programme

Training plan conforming to Art. 4, in force since 2 February 2025.

11

Post-Market Monitoring Plan

Plan structure required by Art. 72 and integrated into the technical documentation under Annex IV(9).

12

Fundamental Rights Impact Assessment (FRIA)

Template under Art. 27 for public bodies, private entities providing public services, and deployers of Annex III 5(b) and 5(c) systems.

See before you buy — Download a sample dossier (PDF, fictional company) — Real structure, real articles, real format. Fictional data.

Generated from your inputs, in your browser. No data leaves your machine.

What you pay

🧾 AI ACT COMPLIANCE CONSULTANCY
€5,000–€15,000
3–6 months. They explain the obligations to you.
✓ AICHECK
€249
12 documents. 45 minutes. Solves the documentation.

Technical documentation and conformity assessment: two layers

● LAYER 1

Technical documentation — Annex IV

12 documents. 45 minutes. €249. The documentation your system needs before being placed on the market.

∅ LAYER 2

Conformity assessment by notified body

If your system falls under Art. 43(1) (Annex III point 1 biometrics with notified-body route, or Annex I products), you will need third-party conformity assessment. That is a separate process. AICheck does not replace it.

We do not sell audits. We do not sell consultancy. We sell the tool that structures your documentation under Annex IV.

Penalty regime

Article 99 of Regulation (EU) 2024/1689. Chapter XII (Penalties) applies from 2 August 2025.

🇪🇺
Non-compliance with prohibited practices (Art. 5)
€35M / 7%

Art. 99(3). Up to €35 million or 7% of total worldwide annual turnover, whichever is higher. For SMEs and start-ups: whichever is lower (Art. 99(6)).

🇪🇺
Non-compliance with operator obligations (high-risk, transparency, deployer)
€15M / 3%

Art. 99(4). Includes failure to draw up technical documentation under Art. 11 + Annex IV. Covers obligations of providers (Art. 16), deployers (Art. 26), authorised representatives (Art. 22), importers (Art. 23), distributors (Art. 24), notified bodies (Art. 31, 33, 34) and transparency under Art. 50.

🇪🇺
Supply of incorrect, incomplete or misleading information
€7.5M / 1%

Art. 99(5). Applies when information provided to notified bodies or national competent authorities is wrong or misleading.
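The "whichever is higher" mechanic, and its inversion for SMEs under Art. 99(6), is simple arithmetic. A minimal illustration (the function name is our own):

```python
def fine_ceiling(cap_eur: float, pct: float, turnover_eur: float, sme: bool = False) -> float:
    """Art. 99 fine ceiling: the higher of the fixed cap or a percentage of
    total worldwide annual turnover; for SMEs and start-ups, the lower of
    the two applies instead (Art. 99(6))."""
    by_turnover = pct * turnover_eur
    return min(cap_eur, by_turnover) if sme else max(cap_eur, by_turnover)

# Art. 5 breach at EUR 1bn turnover: 7% of turnover (EUR 70M) exceeds the EUR 35M cap,
# so the ceiling is 70,000,000; for an SME the lower figure (35,000,000) applies.
```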

Documenting 5 or more AI systems?

If you operate multiple AI systems and need to document them all under Annex IV, contact us for volume pricing at hello@solidwaretools.com.

Request volume pricing
Reply within one business day

What AICheck guarantees, and what it does not

AICheck produces a document structured under Article 11 and Annex IV of Regulation (EU) 2024/1689 from the information you provide. The accuracy, truthfulness and completeness of that information is your responsibility as provider of the AI system.

We guarantee that the document structure follows Article 11 and Annex IV of Regulation (EU) 2024/1689 and that the legal references cited are correct as of the last verification date. We do not guarantee that a specific document will be accepted by a market surveillance authority in a given case, nor by a commercial buyer in a procurement process.

AICheck is not legal advice. For specific situations, consult a lawyer or specialised regulatory consultancy.

Frequently asked questions

Is workplace biometric attendance allowed?
Biometric attendance through one-to-one verification — confirming the identity claimed by the employee at clock-in — is excluded from Annex III 1(a) and is not per se prohibited under Art. 5. Two important caveats: (i) Art. 5(1)(f) prohibits inferring emotions in the workplace, so attendance systems that include emotion analysis are prohibited; (ii) Art. 5(1)(g) prohibits biometric categorisation by sensitive attributes, so any system inferring protected characteristics from biometric data is prohibited. GDPR Art. 9 applies in parallel.
Is facial recognition at borders or airports allowed?
Annex III point 7 covers migration, asylum and border control management — including "AI systems intended to be used by or on behalf of competent public authorities, or by Union institutions, bodies, offices or agencies, in the context of migration, asylum or border control management, for the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents". These are high-risk under Art. 6(2). Real-time RBI in publicly accessible spaces remains subject to Art. 5(1)(h) restrictions for law-enforcement purposes.
Can we use emotion recognition for customer experience in retail?
Art. 5(1)(f) prohibits emotion recognition in workplace and educational institutions — customer-facing retail outside the workplace context is not within the prohibition. Outside workplace/education, emotion recognition is high-risk under Annex III 1(c) and triggers Art. 11 + Annex IV documentation, Art. 17 QMS, Art. 43 conformity assessment, Art. 26 deployer obligations, plus Art. 50(3) transparency obligation to inform exposed persons. GDPR Art. 9 applies to biometric data processing in parallel.
Is this a subscription?
No. One-time payment. The licence includes 30 days of editing and 10 regenerations. The PDF you download is yours to keep.
Can I request a refund?
Pursuant to Article 16(m) of Directive 2011/83/EU on consumer rights, by activating the licence you give express consent to the immediate generation of digital content, waiving the 14-day withdrawal right. Refunds are accepted only in the case of a reproducible technical failure.
What if the regulation changes?
If the regulation changes while your licence is active, you can regenerate the document with the updated version of the generator at no additional cost.
⚠️ Important notice: AICheck is a documentary self-assessment tool, not legal advice nor a third-party audit. The document under Article 11 and Annex IV of Regulation (EU) 2024/1689 is generated from the data you input. The accuracy of that data is your responsibility. AICheck does not replace a qualified professional assessment.

Don't wait for the consultancy. Generate the Annex IV documentation for your AI system in your browser in 45 minutes.

Twelve documents. Annex IV fully structured. Regulation (EU) 2024/1689. Your data does not leave your machine. The ZIP you download is yours to keep.

€249 one-time payment
12 professional documents · 45 minutes · No subscription · 100% in your browser
Generate dossier — €249
✓ Last regulatory verification: 11 May 2026 · No substantive changes detected · View history