If your medical AI needs a notified body under MDR, the AI Act almost certainly treats it as high-risk. That brings a defined set of extra obligations. Data governance, transparency, human oversight, accuracy and robustness, logging, post-market monitoring. For MedTech founders who already run an MDR-compliant QMS, most of these can be folded into the existing technical file rather than built as a separate stack.

By Tibor Zechmeister and Felix Lenhard.

TL;DR

  • The AI Act (Regulation (EU) 2024/1689) treats medical AI requiring MDR notified-body assessment as high-risk.
  • High-risk AI carries specific obligations around data quality, documentation, transparency, human oversight, accuracy and robustness, and logging.
  • Most of these overlap heavily with existing MDR obligations. EN ISO 14971 risk management, EN 62304 software lifecycle, Annex II technical documentation.
  • The practical path is to fold AI Act obligations into the existing MDR technical file, not to build a parallel file.
  • A few AI Act obligations are native and have no direct MDR counterpart. Those need a short delta checklist.
  • The most expensive mistake is treating the AI Act as separate from MDR. It is an overlay, not a new project.

Why this matters

MedTech founders are already carrying the weight of MDR. Adding a second regulation on top feels like the moment the budget breaks. It does not have to.

The design of the AI Act, as it applies to medical devices, is meant to ride on top of MDR. The high-risk obligations are framed so that notified bodies assessing MDR conformity can also cover most of the AI Act assessment, and so that the documentation you maintain under MDR Annex II is the natural home for AI Act content.

The pattern that works: one QMS, one technical file, two regulatory overlays. The pattern that fails: two parallel compliance operations that never talk to each other.

What MDR actually says

Annex II. Technical documentation. Every MDR device file must describe the device, its intended purpose, its design and manufacturing, the general safety and performance requirements and how they are met, benefit-risk analysis, verification and validation, and (for software) development lifecycle artefacts. Annex II is structured enough to absorb AI-specific content (training data descriptions, model architecture, validation results) without breaking.

Article 10. General obligations of manufacturers. Manufacturers must establish, document, implement, maintain, keep up to date and continually improve a quality management system. EN ISO 13485:2016+A11:2021 is the standard that operationalises this under MDR.

Article 83. Post-market surveillance system. Every manufacturer must plan, establish, document, implement, maintain and update a post-market surveillance system proportionate to the risk class and appropriate for the type of device. The PMS system is the natural home for AI-specific monitoring. Drift, performance degradation, population shift.

Annex I GSPR. The general safety and performance requirements include specific software provisions (§17.1–17.4) covering development lifecycle, risk management, verification and validation, and information security. Risk management under EN ISO 14971:2019+A11:2021 already forces you to think about what happens when the model is wrong.

What the AI Act adds

Regulation (EU) 2024/1689. The AI Act. The core obligations for high-risk AI systems are:

  • Risk management system specific to the AI system. A continuous, iterative process identifying and mitigating risks that the AI system poses.
  • Data and data governance. Training, validation and test data must be relevant, representative, free of errors to the extent possible, and complete. Data governance practices must address data collection, data preparation, assumptions, biases, and data gaps.
  • Technical documentation. A specific documentation set demonstrating conformity with AI Act requirements.
  • Record-keeping (logging). The system must enable automatic recording of events relevant to traceability, monitoring, and post-market surveillance.
  • Transparency and provision of information to users. The system's capabilities, limitations, intended purpose, and performance characteristics must be clearly communicated.
  • Human oversight. The system must be designed so that it can be effectively overseen by natural persons, who can intervene, override, or disregard outputs.
  • Accuracy, robustness and cybersecurity. The system must meet appropriate levels of these, declared and maintained.
  • Quality management system. Providers of high-risk AI systems must operate a quality management system covering the above.
  • Post-market monitoring. A system to collect and analyse AI-system performance data over its lifetime.

If you are reading this list and thinking "most of that is already in my MDR file": yes, that is the point.

A worked example

A Class IIb diabetic retinopathy screening tool, CE-marked under MDR, now under the AI Act high-risk regime.

The overlap. The MDR file already contains:

  • Risk management (EN ISO 14971), including the hazard "false negative on referable retinopathy" and its mitigation.
  • Training and validation data description in the technical documentation section on design and performance.
  • Clinical evaluation with sensitivity and specificity on an independent external validation set.
  • Cybersecurity per EN IEC 81001-5-1:2022.
  • Software lifecycle per EN 62304:2006+A1:2015.
  • Post-market surveillance plan with drift monitoring and performance tracking.
  • Instructions for use describing intended purpose, intended users, performance, and limitations.

Of the AI Act high-risk obligations, the following are already present in substance: risk management system, technical documentation, parts of data governance, cybersecurity, post-market monitoring, user information.

The delta. What the existing MDR file probably does not fully cover:

  • Dedicated data governance documentation. A standalone section describing training, validation, and test data lineage, representativeness analysis, subgroup coverage, and bias assessment. Much of this exists in scattered form; the AI Act wants it pulled together.
  • Human oversight design. The MDR risk file will mention that a clinician reviews the output. The AI Act wants a specific design justification showing that the system is designed to support meaningful oversight. Interpretable outputs, confidence reporting, override mechanisms.
  • Logging. MDR does not explicitly require the AI system to automatically record events for traceability. The AI Act does. This is often a gap in early-stage products.
  • Transparency obligations to end users phrased in AI Act terms. Not just IFU content, but a specific statement about capabilities and limitations aligned with the AI Act's framing.

The work. Two weeks of focused documentation effort. One new section in the technical file for data governance. One design review adding explicit logging. One review of the IFU to align transparency language with the AI Act. Updated PMS plan references. That is usually the full delta for a well-run MDR submission.

What it is not: a new QMS, a new technical file, a new notified body relationship, or a parallel compliance operation.

The Subtract to Ship playbook

Step 1. Inventory what you already have. Walk through the AI Act high-risk obligation list and mark, for each, where in your MDR file it is already addressed. For most founders this is a surprising exercise. 60 to 80 percent is already there.
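One way to keep the inventory honest is to hold the mapping in a single machine-readable place. A minimal sketch in Python: the obligation names follow the list above, and the MDR file locations are illustrative placeholders, not a ruling on your file.

```python
# Step 1 inventory as data: AI Act high-risk obligations mapped to where the
# existing MDR technical file addresses them. None marks a gap.
AI_ACT_TO_MDR = {
    "Risk management system": "Risk management file (EN ISO 14971)",
    "Data and data governance": None,  # usually scattered; see Step 3
    "Technical documentation": "Annex II technical documentation",
    "Record-keeping (logging)": None,  # often a product gap; see Step 5
    "Transparency and user information": "IFU section on performance and limitations",
    "Human oversight": None,  # needs a design justification; see Step 4
    "Accuracy, robustness and cybersecurity": "V&V report; EN IEC 81001-5-1 file",
    "Quality management system": "EN ISO 13485 QMS",
    "Post-market monitoring": "MDR Article 83 PMS plan",
}

# Everything still mapped to None is the work list Step 2 asks for.
delta = [obligation for obligation, location in AI_ACT_TO_MDR.items() if location is None]
print("Delta work list:")
for item in delta:
    print(f"  - {item}")
```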

Step 2. Build a delta list. The obligations not covered by your existing MDR documentation are your work list. Keep it honest. Do not mark something as covered because a single sentence in the IFU mentions it. Ask instead: does an auditor have what they need to verify the claim?

Step 3. Data governance is usually the biggest gap. Write a dedicated data governance section. Training set description (source, size, subgroup breakdown). Validation and test set description. Representativeness analysis: does your data reflect the intended population? Bias assessment: what subgroups might be underserved? Data preparation steps. Known gaps and how you handle them. This is the document the AI Act most clearly wants, and it is the one most founders have not written.
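The representativeness claim is easier to defend when it is computed rather than asserted. A hedged sketch with hypothetical subgroup names and numbers; the flag threshold of half the target share is an illustrative choice, not a regulatory one.

```python
from collections import Counter

# Hypothetical intended-population shares and training-set subgroup labels.
intended_population = {"age_under_50": 0.30, "age_50_to_70": 0.45, "age_over_70": 0.25}
training_subgroups = (["age_under_50"] * 410 + ["age_50_to_70"] * 520
                      + ["age_over_70"] * 70)

counts = Counter(training_subgroups)
total = sum(counts.values())

print(f"{'subgroup':<14} {'train':>6} {'target':>7} {'gap':>7}")
for subgroup, target in intended_population.items():
    observed = counts[subgroup] / total
    flag = "  <- under-represented" if observed < 0.5 * target else ""
    print(f"{subgroup:<14} {observed:>6.2f} {target:>7.2f} {observed - target:>+7.2f}{flag}")
```

The printed table, with a sentence of interpretation per flagged subgroup, is the skeleton of the representativeness section.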

Step 4. Human oversight is a design question, not a label. Document the design features that enable meaningful clinician oversight. Interpretable output (what does the score mean?). Confidence or uncertainty reporting where appropriate. An explicit statement of the clinician's role in the workflow. How the output is presented, and how the clinician can override it.
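At the interface level, "designed for oversight" can be made concrete in the output contract itself. An illustrative sketch with hypothetical field names; the point is that the output carries its own interpretation, its uncertainty, and an explicit override path.

```python
from dataclasses import dataclass

@dataclass
class ScreeningOutput:
    score: float            # raw model output in [0, 1]
    score_meaning: str      # plain-language interpretation shown to the clinician
    confidence: float       # calibrated confidence in the recommendation
    recommendation: str     # e.g. "refer" or "no referral"
    override_allowed: bool = True  # the clinician's decision always takes precedence

result = ScreeningOutput(
    score=0.87,
    score_meaning="Estimated probability of referable diabetic retinopathy",
    confidence=0.92,
    recommendation="refer",
)
print(result)
```

A screenshot of how this renders in the clinical workflow, placed next to a structure like this, makes a strong page in the oversight justification.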

Step 5. Logging is a product decision. Decide what events the system records for traceability and performance monitoring. Input fingerprint, output, model version, timestamp, clinician action. This data feeds your PMS. Build it once, use it forever.
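A minimal sketch of the per-inference record named above, with illustrative field names. One design choice worth copying: hash the input rather than storing it, so traceability does not become a data-protection problem.

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class InferenceLogRecord:
    input_fingerprint: str  # SHA-256 of the input, not the input itself
    model_version: str
    output_score: float
    recommendation: str
    clinician_action: str   # e.g. "accepted" or "overridden"
    timestamp: str

def log_inference(image_bytes: bytes, model_version: str, score: float,
                  recommendation: str, clinician_action: str) -> str:
    record = InferenceLogRecord(
        input_fingerprint=hashlib.sha256(image_bytes).hexdigest(),
        model_version=model_version,
        output_score=score,
        recommendation=recommendation,
        clinician_action=clinician_action,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))  # append to durable storage

print(log_inference(b"<fundus image bytes>", "2.3.1", 0.87, "refer", "accepted"))
```

Every field feeds either traceability or the post-market monitoring described below.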

Step 6. Align the QMS. Under EN ISO 13485:2016+A11:2021 you already have a QMS. Add AI-specific procedures where needed. Training data management, model version control, significant-change assessment for AI updates. Do not create a second QMS.

Step 7. Confirm with your notified body. Ask which AI Act items they are authorised to assess alongside MDR. Where there are gaps, ask how they want those handled.

The Subtract to Ship instinct: the AI Act is an overlay, not a rebuild. Resist any vendor, consultant, or internal voice that wants to sell you a parallel track.

Reality Check

  1. Have you mapped every AI Act high-risk obligation to a specific section of your existing MDR technical documentation?
  2. Do you have a dedicated data governance document describing training, validation, and test data, including representativeness and bias?
  3. Is human oversight documented as a design justification, not just a sentence in the IFU?
  4. Does your system record events sufficient for traceability and post-market monitoring of AI performance?
  5. Does your PMS plan explicitly cover AI-specific monitoring (drift, performance degradation, subgroup fairness)?
  6. Is your QMS one QMS with AI additions, or have you accidentally started building two?
  7. Has your notified body confirmed which AI Act items they will assess as part of your MDR conformity assessment?
  8. Do you have a single one-page dual-classification memo stating MDR class and AI Act category, with the reasons for each?

Frequently Asked Questions

If I already have CE marking under MDR, am I automatically compliant with the AI Act? No. Existing CE marking under MDR does not confer AI Act conformity. Most of the work overlaps, but there is always a delta, and that delta has to be addressed.

Does the AI Act require a separate notified body? The intent is that where a notified body is already involved under sectoral legislation like MDR, that notified body covers the AI Act requirements within its authorisation. Check with your specific notified body on scope and readiness.

What is the single biggest gap most MedTech startups have? Data governance documentation. Training data is usually described in scattered pieces across the technical file and never pulled into a single coherent account. The AI Act wants it coherent.

Do I need a separate AI Act QMS? No. You integrate AI Act obligations into your existing EN ISO 13485 QMS. Creating a parallel QMS is the most expensive mistake you can make.

When does the AI Act start applying to my medical AI product? The AI Act entered into force on 1 August 2024, with staggered application dates. For high-risk AI systems that are products covered by sectoral legislation such as MDR, the high-risk obligations apply from 2 August 2027. Fold the delta work into your next planned technical-file revision rather than treating it as a separate future project.

How does AI Act post-market monitoring differ from MDR PMS? Much of it overlaps. The practical answer is to extend your existing MDR PMS plan with AI-specific items (drift, subgroup performance, model degradation) rather than building a separate monitoring system.
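To make that overlap concrete: the AI-specific addition to the PMS plan can be as small as a rolling performance check over the Step 5 logs joined with confirmed diagnoses. A sketch with an illustrative window size and alert threshold; both are choices you justify in the plan, not regulatory values.

```python
def rolling_sensitivity(outcomes, window=200):
    """Yield (first_case, last_case, sensitivity) over confirmed cases.

    outcomes: (model_flagged, confirmed_positive) pairs in chronological order.
    """
    for start in range(0, len(outcomes) - window + 1, window):
        chunk = outcomes[start:start + window]
        flags = [flagged for flagged, positive in chunk if positive]
        if flags:
            yield start, start + window - 1, sum(flags) / len(flags)

# Hypothetical history: performance degrades in the second window.
history = ([(True, True)] * 180 + [(False, False)] * 20
           + [(True, True)] * 150 + [(False, True)] * 50)
for first, last, sensitivity in rolling_sensitivity(history):
    alert = "  <- investigate drift" if sensitivity < 0.90 else ""
    print(f"cases {first}-{last}: sensitivity {sensitivity:.2f}{alert}")
```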

Sources

  1. Regulation (EU) 2017/745 on medical devices, consolidated text. Article 10, Article 83, Annex I, Annex II, Annex VIII Rule 11.
  2. Regulation (EU) 2024/1689. The AI Act.
  3. EN ISO 13485:2016+A11:2021. Medical devices. Quality management systems.
  4. EN ISO 14971:2019+A11:2021. Medical devices. Application of risk management.
  5. EN 62304:2006+A1:2015. Medical device software. Software life cycle processes.
  6. EN IEC 81001-5-1:2022. Health software and health IT systems safety, effectiveness and security.