Reg (EU) 2024/1689 · Generate dossier — €249
LIVE — Fines tracker · Obligations calendar · Transposition status — Updated weekly from EUR-Lex, Safety Gate, OEIL and 12 official sources · View regulatory intelligence →

How to determine the risk classification of your AI system under the AI Act: a step-by-step decision tree across Articles 5, 6, 50 and Annex III.

Risk classification under Regulation (EU) 2024/1689 works as a sequence of yes/no tests: at each step, a system is either in scope of the article or it is not. The one genuine judgment call in the text is the "significant risk" assessment inside the Art. 6(3) derogation, and that assessment must itself be documented. You walk the tree top-down: first Art. 5 (prohibited practices, which eliminate the system outright), then Art. 6(1) (safety component of an Annex I product) and Art. 6(2) + Annex III (the 8 high-risk domains), then the Art. 6(3) derogation if applicable, then the Art. 50 transparency obligations, and finally Art. 4 on AI literacy, which applies in every case. AICheck produces the Risk Classification Report that documents your decision against the regulation.

Generate AI Act dossier — €249
Free: check your AI system risk

€249 one-time payment · 12 PDF documents in ZIP · 45 minutes · 100% in your browser

Regulation (EU) 2024/1689 · Article 11 + Annex IV · 12 documents · 100% browser-side — your data never leaves your machine

The numbers

8 prohibited
Practices banned under Art. 5(1)(a)–(h). Penalty: up to €35M or 7% of worldwide turnover (Art. 99(3)).
8 domains
Annex III high-risk areas under Art. 6(2): biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice.
4 transparency
Art. 50: chatbots (50.1), synthetic content (50.2), emotion recognition / biometric categorisation (50.3), deep fakes (50.4).

The decision tree, step by step

You must walk the tree in order. A "yes" at an earlier step changes everything downstream — and you must document the decision before placing the system on the market (Art. 6(4) for Annex III).

1
Step 1 — Is the system an "AI system"?
Art. 3(1) defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments". Pure rule-based scripts that apply fixed logic without any inference are out of scope.
2
Step 2 — Does it fall under any Art. 5 prohibition?
Eight practices are banned: (a) subliminal or purposefully manipulative techniques, (b) exploiting vulnerabilities due to age, disability or social/economic situation, (c) social scoring, (d) predicting criminal offences based solely on profiling, (e) untargeted scraping of facial images, (f) emotion recognition in the workplace and education, (g) biometric categorisation by sensitive attributes, (h) real-time remote biometric identification (RBI) in publicly accessible spaces by law enforcement. If yes, stop: placing the system on the market is prohibited.
3
Step 3 — Is it high-risk under Art. 6(1)?
Art. 6(1) — both conditions must be met: (a) the AI system is intended to be used as a safety component of a product covered by the Annex I harmonisation legislation, or is itself such a product; AND (b) that product requires third-party conformity assessment under that legislation. Examples: machinery, toys, lifts, medical devices, in vitro diagnostic devices, automotive. If yes, the system is high-risk under Art. 6(1).
4
Step 4 — Is it used in any of the 8 Annex III domains?
Art. 6(2) refers to Annex III: 1. Biometrics; 2. Critical infrastructure; 3. Education; 4. Employment; 5. Essential services (5(a) public benefits, 5(b) credit scoring, 5(c) life/health insurance, 5(d) emergency dispatch); 6. Law enforcement; 7. Migration; 8. Justice and democratic processes. If yes, the system is high-risk under Art. 6(2). Annex III is exhaustive as it stands; it can only be amended by Commission delegated act under Art. 7.
5
Step 5 — Does the Art. 6(3) derogation apply?
An Annex III system is NOT high-risk if it does not pose a significant risk AND falls into one of four cases: (a) narrow procedural task; (b) improving the result of a previously completed human activity; (c) detecting decision-making patterns without replacing or influencing human review; (d) preparatory task. Important: an Annex III system that performs profiling of natural persons is ALWAYS high-risk (Art. 6(3), final subparagraph). The provider's assessment that the system is not high-risk must be documented (Art. 6(4)), and the system must still be registered in the EU database (Art. 49(2)).
6
Step 6 — Are there Art. 50 transparency obligations?
Independent of high-risk classification. Art. 50 triggers: AI directly interacting with natural persons (chatbots) must disclose its AI nature; generators of synthetic audio, image, video or text must mark output as AI-generated in a machine-readable format; emotion recognition or biometric categorisation systems must inform the exposed persons; deep fakes must be disclosed. These obligations apply from 2 August 2026.
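The six steps above can be sketched as a first-match decision function. This is an illustrative simplification for engineers, not a compliance determination: every field and function name below is ours, not the regulation's, and the real legal tests (notably "significant risk" under Art. 6(3)) require documented human judgment rather than a boolean.

```python
from dataclasses import dataclass

@dataclass
class System:
    # Simplified yes/no answers to the six steps; all names are ours.
    infers_outputs: bool = True             # Step 1, Art. 3(1)
    prohibited_practice: bool = False       # Step 2, Art. 5(1)(a)-(h)
    annex_i_safety_component: bool = False  # Step 3, Art. 6(1)(a)
    third_party_assessment: bool = False    # Step 3, Art. 6(1)(b)
    annex_iii_domain: bool = False          # Step 4, Art. 6(2) + Annex III
    profiles_persons: bool = False          # Step 5, Art. 6(3) final subparagraph
    derogation_case: bool = False           # Step 5, one of Art. 6(3)(a)-(d), no significant risk
    art_50_trigger: bool = False            # Step 6, chatbot / synthetic content / deep fake

def classify(s: System) -> str:
    """Walk the tree top-down; the first match wins.

    Note: in the real regulation Art. 50 and Art. 4 apply cumulatively,
    even when an earlier step already classified the system as high-risk.
    """
    if not s.infers_outputs:
        return "out of scope (not an AI system, Art. 3(1))"
    if s.prohibited_practice:
        return "prohibited (Art. 5)"
    if s.annex_i_safety_component and s.third_party_assessment:
        return "high-risk (Art. 6(1))"
    if s.annex_iii_domain:
        if s.profiles_persons or not s.derogation_case:
            return "high-risk (Art. 6(2))"
        return "not high-risk (Art. 6(3)): document under Art. 6(4), register under Art. 49(2)"
    if s.art_50_trigger:
        return "transparency obligations only (Art. 50)"
    return "minimal risk: Art. 4 AI literacy still applies"
```

Note how profiling short-circuits the derogation: an Annex III system that profiles natural persons stays high-risk no matter which Art. 6(3) case it might otherwise fit.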

Three common mistakes

COMMON MISTAKE

"If we are outside Annex III, we have zero obligations"

Art. 5 (prohibited practices) applies to everyone — there is no Annex III exemption. Art. 4 (AI literacy) applies to every provider and deployer. Art. 50 transparency applies to chatbots, synthetic content and deep fakes regardless of high-risk status. "Outside Annex III" only means you escape Chapter III, Section 2 — not the whole regulation.

COMMON MISTAKE

"Classification is the provider's decision alone"

Annex III is regulatory, not discretionary. The provider documents whether the system fits a category, but cannot reclassify a credit-scoring system out of Annex III 5(b) by labelling it differently. Only the Art. 6(3) derogation allows downgrading, and the provider's assessment must be documented and produced on request (Art. 6(4)).

COMMON MISTAKE

"The Art. 6(3) derogation removes all documentation requirements"

No. Even if your Annex III system is non-high-risk under Art. 6(3), you must still document the assessment before placing on market (Art. 6(4)) and register the system in the EU database (Art. 49(2)). And if the system profiles natural persons, the derogation is unavailable — the system is always high-risk.

Does the AI Act apply to your system?

Answer these four questions to determine your obligations.

Does your system infer, with some degree of autonomy, how to generate outputs (predictions, content, recommendations or decisions) from the input it receives?
Art. 3(1) — definition of "AI system"
Is the system placed on the EU market or does its output affect persons in the EU?
Art. 2(1) — territorial scope (extraterritorial via 2(1)(c))
Is your system used in any Annex III domain? (employment, credit, education, law enforcement, migration, justice, critical infrastructure, biometrics)
Art. 6(2) + Annex III — high-risk classification
Are you the provider (developer) or the deployer (user) of the system?
Art. 3(3) provider · Art. 3(4) deployer — different obligations

Take the full AI Act risk classification test →

What the ZIP contains

12 PDF documents generated from your inputs. Each cites the article of Regulation (EU) 2024/1689 it fulfils.

1

Risk Classification Report

Identifies whether your system is prohibited (Art. 5), high-risk (Art. 6 + Annex III) or subject to transparency obligations (Art. 50).

2

Technical Documentation

The 9 blocks of Annex IV in full: system description, training data, validation, performance metrics, risk management, human oversight. Art. 11 + Annex IV.

3

EU Declaration of Conformity

Signable document conforming to Art. 47 and Annex V.

4

Compliance Calendar

Key application dates: 2 Feb 2025, 2 Aug 2025, 2 Aug 2026, 2 Aug 2027. Art. 113.

5

Conformity Sheet

Executive summary of compliance status for authorities or commercial buyers. Art. 43 procedure.

6

Quality Management System (QMS)

QMS structure covering the 13 aspects required by Art. 17.

7

Deployer Instructions

Document for the entity deploying your system, conforming to Art. 13.

8

Evidence Checklist

Verifiable evidence list, cross-referenced to every Annex IV block.

9

Incident Report Template

Notification protocol conforming to Art. 73 (15 days general / 10 days death / 2 days widespread).

10

AI Literacy Programme

Training plan conforming to Art. 4, in force since 2 February 2025.

11

Post-Market Monitoring Plan

Plan structure required by Art. 72 and integrated into the technical documentation under Annex IV(9).

12

Fundamental Rights Impact Assessment (FRIA)

Template under Art. 27 for public bodies, private entities providing public services, and Annex III 5(b)(c) deployers.

See before you buy — Download a sample dossier (PDF, fictional company) — Real structure, real articles, real format. Fictional data.

Generated from your inputs, in your browser. No data leaves your machine.

What you pay

🧾 AI ACT COMPLIANCE CONSULTANCY
€5,000–€15,000
3–6 months. They explain the obligations to you.
✓ AICHECK
€249
12 documents. 45 minutes. Solves the documentation.

Technical documentation and conformity assessment: two layers

● LAYER 1

Technical documentation — Annex IV

12 documents. 45 minutes. €249. The documentation your system needs before being placed on the market.

∅ LAYER 2

Conformity assessment by notified body

If your system falls under Art. 43(1) (Annex III point 1 biometrics with notified-body route, or Annex I products), you will need third-party conformity assessment. That is a separate process. AICheck does not replace it.

We do not sell audits. We do not sell consultancy. We sell the tool that structures your documentation under Annex IV.

Penalty regime

Article 99 of Regulation (EU) 2024/1689. Chapter XII (Penalties) applies from 2 August 2025.

🇪🇺
Non-compliance with prohibited practices (Art. 5)
€35M / 7%

Art. 99(3). Up to €35 million or 7% of total worldwide annual turnover, whichever is higher. For SMEs and start-ups: whichever is lower (Art. 99(6)).

🇪🇺
Non-compliance with operator obligations (high-risk, transparency, deployer)
€15M / 3%

Art. 99(4). Includes failure to draw up technical documentation under Art. 11 + Annex IV. Covers obligations of providers (Art. 16), deployers (Art. 26), authorised representatives (Art. 22), importers (Art. 23), distributors (Art. 24), notified bodies (Art. 31, 33, 34) and transparency under Art. 50.

🇪🇺
Supply of incorrect, incomplete or misleading information
€7.5M / 1%

Art. 99(5). Applies when information provided to notified bodies or national competent authorities is wrong or misleading.
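The three ceilings above can be expressed as a small calculator. This is a sketch only, using the amounts from Art. 99(3)-(6); the tier keys are our own labels, and actual fines are set case by case by national authorities, not by formula.

```python
def max_fine(tier: str, worldwide_turnover_eur: int, is_sme: bool = False) -> int:
    """Upper bound of the administrative fine under Art. 99, in euros.

    Integer arithmetic keeps the percentage caps exact.
    """
    tiers = {
        "art_5": (35_000_000, 7),       # Art. 99(3): prohibited practices
        "operator": (15_000_000, 3),    # Art. 99(4): operator obligations
        "information": (7_500_000, 1),  # Art. 99(5): incorrect information
    }
    fixed, pct = tiers[tier]
    turnover_cap = worldwide_turnover_eur * pct // 100
    # Art. 99(6): for SMEs and start-ups, whichever is LOWER; otherwise higher.
    return min(fixed, turnover_cap) if is_sme else max(fixed, turnover_cap)
```

For a company with €1bn worldwide turnover, an Art. 5 breach caps at €70M (7% beats €35M); the same breach by an SME caps at €35M.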

Documenting 5 or more AI systems?

If you operate multiple AI systems and need to document them all under Annex IV, contact us for volume pricing at hello@solidwaretools.com.

Request volume pricing
Reply within one business day

What AICheck guarantees, and what it does not

AICheck produces a document structured under Article 11 and Annex IV of Regulation (EU) 2024/1689 from the information you provide. The accuracy, truthfulness and completeness of that information are your responsibility as provider of the AI system.

We guarantee that the document structure follows Article 11 and Annex IV of Regulation (EU) 2024/1689 and that the legal references cited are correct as of the last verification date. We do not guarantee that a specific document will be accepted by a market surveillance authority in a given case, nor by a commercial buyer in a procurement process.

AICheck is not legal advice. For specific situations, consult a lawyer or specialised regulatory consultancy.

Frequently asked questions

How is risk determined under the AI Act?
Risk classification is based on the AI system's intended purpose and the area of use, not on the technology used. Article 5 lists 8 prohibited practices. Article 6(1) covers AI as safety components of products under Annex I harmonisation legislation. Article 6(2) refers to the 8 domains in Annex III. Article 6(3) provides a narrow derogation. Article 50 adds transparency obligations independently of risk classification.
Can a high-risk system be reclassified as non-high-risk under Art. 6(3)?
Yes, but only if (a) it falls into one of four specific cases (narrow procedural task, improvement of human activity, decision-pattern detection without replacing human review, preparatory task) AND (b) it does not pose a significant risk to health, safety or fundamental rights. If the system performs profiling of natural persons, the derogation is unavailable and it remains high-risk (Art. 6(3), final subparagraph). The provider must document the assessment under Art. 6(4) and register under Art. 49(2).
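The answer above reduces to a three-input test. A hedged sketch with our own names; the "significant risk" input represents a documented legal judgment, not something computable.

```python
def art_6_3_downgrade_available(profiles_natural_persons: bool,
                                fits_derogation_case: bool,
                                poses_significant_risk: bool) -> bool:
    """Whether an Annex III system can be downgraded under Art. 6(3)."""
    # Profiling of natural persons blocks the derogation outright
    # (Art. 6(3), final subparagraph).
    if profiles_natural_persons:
        return False
    # Otherwise: one of cases (a)-(d) AND no significant risk.
    return fits_derogation_case and not poses_significant_risk
```

Even when the downgrade succeeds, the Art. 6(4) documentation duty and the Art. 49(2) registration duty remain.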
What if my system falls into multiple Annex III categories?
Obligations cumulate. A single system covering, for example, employment screening (Annex III.4) and credit scoring (Annex III.5(b)) must comply with all the requirements applicable to high-risk systems. The technical documentation under Annex IV is drawn up once, but must address each use case. The FRIA under Art. 27 is required for any Annex III 5(b) or 5(c) deployment by public bodies or private entities providing public services.
Is this a subscription?
No. One-time payment. The licence includes 30 days of editing and 10 regenerations. The PDF you download is yours to keep.
Can I request a refund?
Pursuant to Article 16(m) of Directive 2011/83/EU on consumer rights, by activating the licence you give express consent to the immediate generation of digital content and thereby waive the 14-day withdrawal right. Refunds are only accepted in the case of a reproducible technical failure.
What if the regulation changes?
If the regulation changes while your licence is active, you can regenerate the document with the updated version of the generator at no additional cost.
⚠️ Important notice: AICheck is a documentary self-assessment tool, not legal advice nor a third-party audit. The document under Article 11 and Annex IV of Regulation (EU) 2024/1689 is generated from the data you input. The accuracy of that data is your responsibility. AICheck does not replace a qualified professional assessment.

Don't wait for the consultancy. Generate the Annex IV documentation for your AI system in your browser in 45 minutes.

Twelve documents. Annex IV fully structured. Regulation (EU) 2024/1689. Your data does not leave your machine. The ZIP you download is yours to keep.

€249 one-time payment
12 professional documents · 45 minutes · No subscription · 100% in your browser
Generate dossier — €249
✓ Last regulatory verification: 11 May 2026 · No substantive changes detected · View history