---
title: The EU AI Act and MDR: How Both Regulations Apply to Your AI Medical Device
description: The EU AI Act layers on top of the MDR for AI medical devices. Here is how the two regulations interact — and what startups need to know about the overlap.
authors: Tibor Zechmeister, Felix Lenhard
category: AI, ML & Algorithmic Devices
primary_keyword: EU AI Act MDR medical device
canonical_url: https://zechmeister-solutions.com/en/blog/eu-ai-act-and-mdr
source: zechmeister-solutions.com
license: All rights reserved. Content may be cited with attribution and a link to the canonical URL.
---

# The EU AI Act and MDR: How Both Regulations Apply to Your AI Medical Device

*By Tibor Zechmeister (EU MDR Expert, Notified Body Lead Auditor) and Felix Lenhard.*

> **The EU AI Act (Regulation (EU) 2024/1689) and the MDR (Regulation (EU) 2017/745) both apply to AI medical devices at the same time, not one instead of the other. MDR governs the device: intended purpose, classification, clinical evidence, conformity assessment, post-market surveillance. The AI Act governs the AI system inside the device: data governance, transparency to users, human oversight, robustness, and AI-specific risk management. Where an AI system is a safety component of a medical device already covered by MDR, the AI Act places it in the high-risk tier and expects its obligations to be integrated into the existing MDR conformity assessment route rather than run as a parallel process. The MDR layer is stable and well understood. The AI Act layer is still being operationalised in 2026. Founders should build for MDR first, track AI Act obligations as an additional set of requirements, and stay honest about what is settled and what is not.**

**Last updated 10 April 2026.**

---

## TL;DR

- Two Regulations apply simultaneously to an AI medical device: the MDR for the device, the EU AI Act for the AI system. Neither replaces the other.
- MDR governs intended purpose (Article 2(1)), classification (Annex VIII Rule 11), software lifecycle (Annex I Section 17), clinical evaluation (Article 61), conformity assessment (Article 52), and post-market surveillance (Articles 83-86). This layer is mature.
- The AI Act (Regulation (EU) 2024/1689) adds horizontal obligations on AI systems: training data quality, technical documentation specific to the AI system, transparency, human oversight, robustness, and post-market monitoring of model performance.
- An AI system that is a safety component of an MDR-regulated device lands in the AI Act's high-risk tier. The AI Act expects its requirements to be folded into the existing MDR conformity assessment channel where possible, not assessed twice.
- In 2026 the operational interface between a Notified Body's MDR assessment and AI Act assessment is still being clarified by the European Commission, the Medical Device Coordination Group, and Notified Bodies. Plan for both Regulations; expect the mechanics to evolve.
- The practical move for startups is to build MDR compliance properly, map the AI Act obligations in parallel as a separate requirements set, and maintain one technical documentation set that can answer questions from either Regulation without duplicated work.

---

## Why two Regulations apply at once

The situation is unusual, and it is worth being blunt about it. Before 2024, an AI medical device sat under one EU regulation: the MDR. The MDR did not care whether the software inside the device was a classical algorithm or a trained model. If the product met the definition of a medical device under Article 2(1), MDR applied, and that was the whole regulatory story on the EU side.

The EU AI Act changed that. The AI Act is a horizontal regulation. It applies across sectors to any product or system that meets its definition of an AI system, regardless of whether the product is a medical device, a recruitment tool, a credit scoring engine, or a toy. It was adopted in 2024 as Regulation (EU) 2024/1689 and is being phased in over several years.

The result, for an AI medical device, is that two Regulations now apply at the same time. MDR has not gone anywhere. Every article, every annex, every harmonised standard still binds the device. The AI Act is layered on top, with its own obligations on the AI system inside the device. Founders reading this post who want a single answer to the question "which Regulation applies to my AI diagnostic tool" are going to be disappointed. The honest answer is both.

This is not a drafting mistake. The EU deliberately chose a layered approach so that horizontal AI rules could be set once, across all sectors, without rewriting every sectoral regulation from scratch. The MDR governs the device. The AI Act governs the AI. Where those two domains overlap (and for a medical device whose core function is an AI model, they overlap almost entirely), both apply.

## MDR governs the device

The MDR layer is the stable ground. For an AI medical device, the MDR does the same work it does for any other device.

Article 2(1) of Regulation (EU) 2017/745 is the qualification test. If the product is intended by the manufacturer for one of the medical purposes listed (diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of disease, among the other categories named), it is a medical device. The implementation is irrelevant to the test. A rule-based clinical decision support tool and a deep learning model that performs the same function are both medical devices if the intended purpose matches Article 2(1).

Annex VIII Rule 11 classifies the software. For AI products, Rule 11 is the rule that matters almost every time. Software intended to provide information used to make decisions with diagnostic or therapeutic purposes is at least Class IIa, with escalation to IIb for decisions that can cause serious deterioration of health or surgical intervention, and to Class III for decisions that can cause death or irreversible deterioration. Monitoring software lands at IIa or IIb depending on the criticality of the parameters. We cover Rule 11 in depth in the pillar post on [AI medical devices under MDR](/blog/ai-medical-devices-mdr-regulatory-landscape) and in the dedicated [Rule 11 classification post](/blog/classification-ai-ml-software-rule-11).
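
To make the decision structure concrete, here is a minimal Python sketch of the Rule 11 logic as paraphrased above. The enum and function names are ours and the paraphrase is illustrative; the legal test is the Annex VIII text, not this code, and the classification rationale still belongs in your technical documentation.

```python
from enum import Enum

class DecisionImpact(Enum):
    """Worst-case consequence of a decision informed by the software output."""
    DEATH_OR_IRREVERSIBLE = "death or irreversible deterioration of health"
    SERIOUS_OR_SURGICAL = "serious deterioration of health or surgical intervention"
    OTHER = "any other diagnostic or therapeutic decision"

def rule_11_class(informs_diagnosis_or_therapy: bool,
                  impact: DecisionImpact,
                  monitors_physiology: bool = False,
                  vital_and_immediately_dangerous: bool = False) -> str:
    """Illustrative paraphrase of MDR Annex VIII Rule 11 for software."""
    if informs_diagnosis_or_therapy:
        if impact is DecisionImpact.DEATH_OR_IRREVERSIBLE:
            return "Class III"
        if impact is DecisionImpact.SERIOUS_OR_SURGICAL:
            return "Class IIb"
        return "Class IIa"  # the floor for decision-informing software
    if monitors_physiology:
        # IIb where vital-parameter variations could put the patient in
        # immediate danger; IIa for other physiological monitoring
        return "Class IIb" if vital_and_immediately_dangerous else "Class IIa"
    return "Class I"  # the residual category under Rule 11

# Example: an AI tool informing a treatment decision whose failure
# could lead to irreversible deterioration
print(rule_11_class(True, DecisionImpact.DEATH_OR_IRREVERSIBLE))  # Class III
```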

Annex I Section 17 sets the general safety and performance requirements for software. Software has to be developed and manufactured in accordance with the state of the art, taking into account the principles of development life cycle, risk management, and information security. EN 62304:2006+A1:2015 is the harmonised standard that operationalises this for software lifecycle; EN ISO 14971:2019+A11:2021 is the harmonised standard for risk management.

Article 52 routes the device through conformity assessment. For Class IIa and above, a Notified Body is involved. Article 61 and Annex XIV require clinical evaluation. Articles 83-86 require post-market surveillance proportionate to the risk class. The MDR framework is complete for an AI medical device as a medical device. Every AI medical device founder has to get this layer right before worrying about anything else.

## The AI Act governs the AI system

The AI Act layer is newer and adds a different kind of obligation. Rather than asking "does this product have a medical purpose," the AI Act asks "is this an AI system, and what risk tier does it fall into?"

We are being careful in this section. The AI Act is Regulation (EU) 2024/1689. It is not in our verified regulatory ground truth catalogue the way the MDR, MDCG guidance, and harmonised standards are. That means we will describe the AI Act's general framing and named principles but will not cite specific article numbers, annexes, or paragraph references. Founders who need to pin a specific obligation to a specific provision should read the official text on EUR-Lex directly.

The AI Act's risk-tier structure is well known. Prohibited uses are forbidden outright. High-risk AI systems are subject to the bulk of the AI Act's substantive obligations. Limited-risk systems face lighter transparency duties. Minimal-risk systems are largely unregulated by the Act.

An AI system that is a safety component of a product already covered by EU harmonisation legislation (and the MDR is named in the AI Act as one of those pieces of legislation) falls into the high-risk tier. For the medical device category, this means that any AI system that is, or is part of, an MDR-regulated device is high-risk under the AI Act by design. There is no "low-risk AI medical device" category under the AI Act for products that sit under MDR.

The substantive obligations the AI Act places on high-risk AI systems cluster around a recognisable set of themes, all described here in general terms:

- a risk management system specific to the AI system's failure modes;
- quality and governance of training, validation, and test datasets, including representativeness and bias considerations;
- technical documentation describing the AI system's design, development, and performance characteristics;
- record-keeping and logging sufficient to trace the system's behaviour;
- transparency to deployers and users about the fact that they are interacting with an AI system and about its capabilities and limitations;
- human oversight appropriate to the use context;
- accuracy, robustness, and cybersecurity of the system; and
- post-market monitoring of the AI system's real-world performance.

Every one of those themes has a counterpart in MDR obligations. None is identical. The AI Act adds specificity where MDR speaks generally, and it adds some genuinely new expectations, particularly around training data governance and AI-specific transparency, that MDR does not explicitly contain.
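
One way to keep that overlap visible during planning is a simple requirements map. The sketch below pairs each AI Act theme from the list above with its nearest MDR counterpart; the pairings are our working interpretation for planning purposes, not an official crosswalk.

```python
# Planning sketch: AI Act obligation themes mapped to the nearest MDR
# counterpart. "None (new)" marks themes without an explicit MDR equivalent.
AI_ACT_TO_MDR = {
    "AI-specific risk management":    "EN ISO 14971 risk file (MDR Annex I)",
    "training data governance":       "None (new); partially implied by clinical evaluation",
    "technical documentation":        "MDR Annex II technical documentation",
    "logging / record-keeping":       "QMS records (MDR Article 10); less prescriptive",
    "transparency to users":          "Labelling and IFU (MDR Annex I); less prescriptive",
    "human oversight":                "None (new); implicit in intended use",
    "accuracy, robustness, security": "Clinical evaluation (Article 61) plus EN 62304 V&V",
    "post-market monitoring":         "PMS under MDR Articles 83-86",
}

for theme, counterpart in AI_ACT_TO_MDR.items():
    print(f"{theme:32} -> {counterpart}")
```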

## High-risk AI systems and the overlap with MDR classes

It is worth being clear about one point that confuses many founders. The AI Act's "high-risk" tier is not the same as the MDR's Class III. They are two different risk taxonomies answering two different questions.

MDR classes (I, IIa, IIb, III) measure the risk of the device to the patient, based on intended purpose and the consequences of failure. The class drives the conformity assessment route and the depth of Notified Body involvement.

The AI Act's tiers (prohibited, high-risk, limited-risk, minimal-risk) measure the risk of the AI system across sectors and use cases, based on the nature of the AI system and its context of use.

For an AI medical device, the class under MDR could be IIa, IIb, or III depending on Rule 11 analysis. Under the AI Act, the same device is high-risk regardless of its MDR class, because the fact that it is a safety component of a medical device is what puts it in the high-risk tier. A Class IIa AI decision-support tool and a Class III AI therapy control system both sit in the AI Act's high-risk tier. The AI Act does not subdivide medical device AI systems by MDR class.
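
Expressed as code to make the independence of the two taxonomies vivid: in the sketch below (our illustration, assuming the safety-component premise above), the AI Act tier simply does not depend on the MDR class.

```python
def ai_act_tier(mdr_class: str) -> str:
    """Sketch: AI Act tier for an AI system that is a safety component
    of an MDR-regulated device. The argument is validated but never
    consulted: the tier is high-risk whatever Rule 11 concluded."""
    assert mdr_class in {"I", "IIa", "IIb", "III"}
    return "high-risk"

print(ai_act_tier("IIa"))  # high-risk
print(ai_act_tier("III"))  # high-risk
```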

This matters for technical documentation planning. The depth of MDR Notified Body scrutiny scales with MDR class. The AI Act's obligations do not scale the same way. A Class IIa AI medical device and a Class III AI medical device face broadly the same AI Act obligations, even though their MDR documentation burdens differ substantially.

## Where the two regulations reinforce each other

The good news for founders is that most of the AI Act's obligations on AI medical devices overlap with work that a competent MDR project is already doing.

The AI Act's risk management expectation for high-risk AI systems overlaps substantially with EN ISO 14971:2019+A11:2021 risk management under MDR Annex I. An AI-specific risk file that identifies model failure modes (bias, distribution shift, adversarial robustness, explainability gaps) and controls them already serves both Regulations.

The AI Act's technical documentation expectation overlaps with MDR Annex II technical documentation. A technical file that describes the AI system's design, training approach, validation results, and intended operating envelope answers questions from both sides.

The AI Act's accuracy, robustness, and testing expectations overlap with the clinical evaluation required under MDR Article 61 and the software verification and validation required under EN 62304:2006+A1:2015. A clinical evaluation that reports performance on the intended use population and documents subgroup analysis answers both frameworks.

The AI Act's post-market monitoring of real-world AI performance overlaps with MDR post-market surveillance under Articles 83-86 and MDCG 2025-10 (December 2025). A PMS system that includes drift detection and model performance monitoring serves both.
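
To make "drift detection and model performance monitoring" concrete, here is a minimal sketch of a rolling-window check that flags when post-market accuracy falls more than a set tolerance below the performance claimed at certification. The class name, window size, and tolerance are placeholder choices; real acceptance criteria belong in the PMS plan.

```python
from collections import deque

class PerformanceDriftMonitor:
    """Minimal sketch: flag when rolling post-market accuracy drops more
    than `tolerance` below the accuracy established at certification."""

    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction_correct: bool) -> None:
        self.outcomes.append(1 if prediction_correct else 0)

    def drifted(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough post-market data in the window yet
        rolling_accuracy = sum(self.outcomes) / len(self.outcomes)
        return rolling_accuracy < self.baseline - self.tolerance

# Usage: feed confirmed outcomes as they arrive; escalate through the
# vigilance and CAPA processes when drifted() returns True.
monitor = PerformanceDriftMonitor(baseline_accuracy=0.94)
```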

The AI Act's own text sets out the principle that sectoral conformity assessment under legislation like MDR should be used as the channel for AI Act compliance where possible, so that manufacturers do not face two parallel assessment processes. This is the reinforcement move. One conformity assessment, done properly, satisfies both Regulations.

## Where the two regulations create new obligations

Some parts of the AI Act do not have a clean MDR counterpart, and these are the areas where founders will feel new work.

Training data governance is the clearest example. MDR speaks about software lifecycle and risk management, but it does not contain a dedicated obligation on documenting dataset provenance, representativeness for the intended use population, labelling quality, and bias mitigation measures. A competent clinical evaluation for an AI medical device already addresses some of this in practice, but the AI Act formalises training data governance as a named expectation in its own right.
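
What a training data governance file can look like in its simplest form is one structured record per dataset. The schema below is our suggestion, with hypothetical field names and an invented example entry; neither Regulation prescribes this structure.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Illustrative minimal schema for one entry in a training data
    governance file. Field names are a suggestion, not a prescribed format."""
    name: str
    provenance: str            # source sites, consent and licensing basis
    collection_period: str
    intended_population: str   # the population the device claims to serve
    representativeness: str    # how the dataset maps onto that population
    labelling_process: str     # who labelled, qualifications, QA steps
    known_gaps: list[str] = field(default_factory=list)
    bias_mitigations: list[str] = field(default_factory=list)

# Invented example entry
record = DatasetRecord(
    name="derm-train-v3",
    provenance="Three EU hospital sites; ethics approvals on file",
    collection_period="2021-2024",
    intended_population="Adults presenting with suspicious skin lesions",
    representativeness="Phototypes I-IV well covered; V-VI underrepresented",
    labelling_process="Dual dermatologist annotation with adjudication",
    known_gaps=["paediatric cases absent"],
    bias_mitigations=["oversampling phototypes V-VI", "subgroup performance gates"],
)
```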

Transparency obligations to users about the fact that the system is AI-based, and about its capabilities and limitations, are more prescriptive than MDR's labelling and instructions-for-use requirements. MDR requires clear instructions; the AI Act adds specific transparency content about the AI nature of the system.

Human oversight as a named, designed-in feature of the system is not phrased that way in MDR. MDR expects the device to be safe and performant in its intended use, which implicitly covers the role of the clinician in the loop. The AI Act asks the manufacturer to describe and design the human oversight mechanism explicitly.

Logging and record-keeping of the AI system's behaviour during operation is more prescriptive under the AI Act than the general record-keeping obligations of the MDR QMS.
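
As one illustration of what operational logging can mean, here is a minimal sketch that appends a structured, traceable record per inference event. The field set, JSONL storage, and function name are our illustrative choices, not a mandated format; note that the record stores a reference to the input rather than raw patient data.

```python
import json
import time
import uuid

def log_inference(model_version: str, input_ref: str, output: dict,
                  confidence: float, log_path: str = "inference_log.jsonl") -> None:
    """Append one structured record per inference event (sketch only)."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,  # ties the output to a released model
        "input_ref": input_ref,          # a reference, not raw patient data
        "output": output,
        "confidence": confidence,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_inference("v2.3.1", "study-0042/series-7", {"finding": "nodule"}, 0.87)
```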

None of these are exotic. A well-run AI medical device project in 2026 should be doing most of this anyway, whether or not the AI Act obliges it. The shift is that the AI Act makes these expectations explicit and binding rather than leaving them to good engineering practice.

## What is still evolving in 2026

The detailed operational interface between MDR conformity assessment and AI Act obligations is the area most in flux. Several questions are being worked out between the European Commission, the Medical Device Coordination Group, AI Act governance bodies, and Notified Bodies.

How does a Notified Body currently designated under MDR take on the assessment of AI Act-specific obligations that sit outside the traditional medical device regulatory domain? Training data governance, for example, is not something MDR Notified Body auditors have traditionally assessed in depth.

How is the AI Act's technical documentation expectation merged with the MDR Annex II technical documentation structure in practice? Does the manufacturer produce one technical file with an AI annex, two files, or something in between?

How do post-market model updates (already a live topic under MDR change control, without a fully settled answer for adaptive algorithms) interact with AI Act post-market monitoring obligations?

These questions have principled answers in the AI Act text and in the MDR framework, and the direction of travel is clear: one documentation set, one conformity assessment channel, integrated obligations. The detailed mechanics are being clarified as practice builds up.

## Honest limits of current knowledge

We want to be direct about what this post is and is not. We have not cited specific AI Act article numbers, annexes, or paragraph references. That is deliberate. The project rule is that every regulatory claim must be verifiable against a source document in our ground truth catalogue, and the AI Act is not in that catalogue. We can describe the AI Act's general framing and named principles from its public identity as Regulation (EU) 2024/1689, but we will not put specific provisions on the page if we cannot verify them against the official text.

Founders who need pinpoint references for a specific AI Act obligation should read the official text directly on EUR-Lex, or work with a regulatory expert who has that verification in front of them. The MDR references in this post are verified against the consolidated text of Regulation (EU) 2017/745. The AI Act references are framing only.

This honesty is not a weakness. It is the standard the rest of this blog holds itself to, applied to a Regulation we are still in the process of verifying in full.

## The Subtract to Ship angle: compliance efficiency across both

The [Subtract to Ship framework for MDR](/blog/subtract-to-ship-framework-mdr) has a direct application here. The instinct in the face of two overlapping Regulations is to run two parallel tracks: one MDR workstream, one AI Act workstream, each with its own documentation set, its own review process, and its own governance. The instinct is wrong. It doubles the work without doubling the quality, and it creates seams where obligations fall between tracks.

The Subtract move is to maintain a single technical documentation set, a single risk management file, a single clinical evaluation, and a single PMS system, and to tag each section with the Regulation it answers to. Where MDR and AI Act overlap on a topic, one section of documentation answers both. Where only MDR applies, the section is tagged MDR-only. Where only the AI Act applies (training data governance, AI-specific transparency content, an explicit human oversight description), the section is tagged AI-Act-only.
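
In practice the tagging can be as simple as a machine-readable index over the documentation set, which makes gap checks against either Regulation trivial. A minimal sketch, with hypothetical section names:

```python
# Minimal sketch: one documentation index, each section tagged with the
# Regulation(s) it answers. Section names are hypothetical examples.
DOC_INDEX = {
    "intended_purpose":         {"MDR"},
    "rule_11_classification":   {"MDR"},
    "risk_management_file":     {"MDR", "AI_ACT"},
    "clinical_evaluation":      {"MDR", "AI_ACT"},
    "training_data_governance": {"AI_ACT"},
    "human_oversight_design":   {"AI_ACT"},
    "pms_plan":                 {"MDR", "AI_ACT"},
}

def sections_for(regulation: str) -> list[str]:
    """Everything a reviewer for one Regulation will ask to see."""
    return [section for section, tags in DOC_INDEX.items() if regulation in tags]

print(sections_for("AI_ACT"))
```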

The result is a leaner documentation set than two parallel tracks would produce, and it is the documentation set that Notified Bodies and AI Act reviewers will actually want to see. Duplicate work does not add safety. Integrated work does.

The Purpose Pass of Subtract to Ship applies here with particular force: not every AI feature in a MedTech product is a medical device under MDR, and not every AI feature is a high-risk AI system under the AI Act. A feature that generates internal marketing copy is neither. A feature that produces a diagnostic output for a clinician is both. Scoping carefully at the start determines how much of your product is covered by this dual-regulation problem and how much can be kept out of scope entirely.

## Reality Check: where do you stand?

1. Have you identified, in writing, which parts of your product are AI systems under the likely AI Act definition and which are not?
2. Have you confirmed that your AI system is a safety component of an MDR-regulated device, and therefore high-risk under the AI Act by design?
3. Is your MDR technical documentation complete as if the AI Act did not exist (the stable MDR ground that everything else builds on)?
4. Do you have a training data governance file that documents dataset provenance, representativeness for the intended use population, labelling quality, and bias mitigation?
5. Is the human oversight mechanism in your product a designed feature described in your documentation, or an implicit assumption about how clinicians will use the system?
6. Do your transparency materials (labelling, IFU, user-facing content) state clearly that the system is AI-based and describe its capabilities and limitations?
7. Is your risk management file, built under EN ISO 14971:2019+A11:2021, explicit about AI-specific failure modes (bias, drift, adversarial robustness, explainability gaps)?
8. Does your post-market surveillance plan include monitoring of AI system performance (drift detection, subgroup performance tracking) on top of standard complaint handling?
9. Have you mapped every documentation section to the Regulation it answers (MDR, AI Act, or both), so that neither an MDR auditor nor an AI Act reviewer finds a gap?

## Frequently Asked Questions

**Does the EU AI Act replace MDR for AI medical devices?**
No. The AI Act layers additional obligations on top of MDR. MDR still governs the device: intended purpose, classification under Annex VIII Rule 11, clinical evaluation, conformity assessment, and post-market surveillance. The AI Act adds horizontal obligations on the AI system inside the device. Both Regulations apply simultaneously.

**Is every AI medical device high-risk under the EU AI Act?**
Under the AI Act's framing, an AI system that is a safety component of a product covered by existing EU harmonisation legislation (including the MDR) falls into the high-risk tier. For practical purposes, an AI system that is part of an MDR-regulated medical device is high-risk under the AI Act regardless of its MDR class. There is no low-risk AI medical device category for products that sit under MDR.

**Do I need a separate conformity assessment for the EU AI Act on top of my MDR conformity assessment?**
The AI Act text sets out the principle that sectoral conformity assessment routes (like the MDR conformity assessment under Article 52) should be used as the channel for AI Act compliance in medical devices where possible, rather than running as a parallel process. The detailed operational mechanics of how a Notified Body integrates AI Act obligations into its MDR assessment are still being clarified in 2026. The practical expectation is one integrated assessment, not two.

**What does the EU AI Act add that MDR does not already require?**
The main additions are explicit training data governance obligations (dataset provenance, representativeness, bias mitigation), prescriptive transparency to users about the AI nature of the system, explicit designed-in human oversight, and AI-specific logging and record-keeping. Most of these overlap with good engineering practice for AI medical devices, but the AI Act formalises them as binding expectations.

**How should a startup document compliance with both MDR and the AI Act without doubling the work?**
Maintain one integrated technical documentation set, one risk management file, one clinical evaluation, and one PMS system, with each section tagged for the Regulation it answers to. Where MDR and AI Act overlap, the same section satisfies both. Where only one applies, the section is tagged accordingly. Running two parallel documentation tracks doubles the work without adding quality and creates seams where obligations fall between tracks.

**Is the interface between MDR and the EU AI Act fully settled in 2026?**
No. The high-level principle (sectoral conformity assessment channels the AI Act's medical device obligations) is in the AI Act text. The detailed operational mechanics are still being clarified by the European Commission, the Medical Device Coordination Group, and Notified Bodies. Founders should build MDR compliance properly as the stable ground, track AI Act obligations in parallel as a separate requirements set, and expect the interface details to evolve over the next 1-2 years.

## Related reading

- [AI in Medical Devices Under MDR: The Regulatory Landscape in 2026](/blog/ai-medical-devices-mdr-regulatory-landscape) – the pillar post for this category, covering the full MDR landscape for AI products and introducing the AI Act layer at a high level.
- [Machine Learning Medical Devices Under MDR](/blog/machine-learning-medical-devices-mdr) – the companion post focusing on ML model development under MDR discipline.
- [Classification of AI and ML Software Under Rule 11](/blog/classification-ai-ml-software-rule-11) – the practical walk-through of Annex VIII Rule 11 for AI products.
- [Locked Versus Adaptive AI Algorithms Under MDR](/blog/locked-vs-adaptive-ai-algorithms-mdr) – the open question on continuous learning and the change-control envelope today.
- [Clinical Evaluation for AI and ML Medical Devices](/blog/clinical-evaluation-ai-ml-medical-devices) – the evidence expectations specific to AI products, including the subgroup performance work the AI Act formalises.
- [Post-Market Surveillance for AI Medical Devices](/blog/post-market-surveillance-ai-devices) – drift detection and the operational PMS patterns that serve both MDR Articles 83-86 and AI Act post-market monitoring.
- [Training Data Governance for AI Medical Devices](/blog/training-data-governance-ai-medical-devices) – the dataset provenance, representativeness, and bias work that the AI Act formalises and MDR clinical evaluation already implies.
- [Human Oversight and AI Medical Devices](/blog/human-oversight-ai-medical-devices) – designing the clinician-in-the-loop as a named feature rather than an implicit assumption.
- [Transparency and Labelling for AI Medical Devices](/blog/transparency-labelling-ai-medical-devices) – the user-facing content that the AI Act makes prescriptive.
- [Flinn.ai and AI Tools Transforming Regulatory Work](/blog/flinn-ai-tools-transforming-regulatory) – the other side of the equation: AI tools inside the regulatory team, which sit under the same AI Act framing when used in safety-critical contexts.
- [The Subtract to Ship Framework for MDR Compliance](/blog/subtract-to-ship-framework-mdr) – the methodology applied throughout this blog, used here to keep dual-regulation documentation lean.

## Sources

1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices. Articles cited in this post: Article 2(1) (definition of medical device), Article 10 (general obligations of manufacturers), Article 52 (conformity assessment procedures), Article 61 (clinical evaluation), Articles 83-86 (post-market surveillance). Annexes cited: Annex I Section 17 (general safety and performance requirements for software), Annex VIII Rule 11 (classification of software). Official Journal L 117, 5.5.2017.
2. Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Referenced in this post by name and publication identifier for the general framing of the AI Act layer. Specific article numbers, annexes, and paragraph references are intentionally not cited because the AI Act is not yet part of this project's verified regulatory ground truth catalogue. Founders should consult the official text on EUR-Lex for pinpoint references.
3. MDCG 2019-11 Rev. 1, Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 (MDR) and Regulation (EU) 2017/746 (IVDR). October 2019; Revision 1, June 2025.
4. EN ISO 14971:2019 + A11:2021. Medical devices. Application of risk management to medical devices.
5. EN 62304:2006 + A1:2015. Medical device software. Software life-cycle processes.

---

*This post is part of the AI, Machine Learning and Algorithmic Devices category in the Subtract to Ship: MDR blog, and sits under the pillar post on [AI medical devices under MDR](/blog/ai-medical-devices-mdr-regulatory-landscape). Authored by Felix Lenhard and Tibor Zechmeister. The interface between the MDR and the EU AI Act is an evolving area, and this post will be updated as operational guidance from the European Commission, the Medical Device Coordination Group, and Notified Bodies solidifies, and as specific AI Act provisions are verified against the official text. If your product sits at this intersection and the general framing here does not resolve your specific case, that is expected. The dual-regulation problem is genuinely new, and working it through with an expert who has seen how the interface is being operationalised in live projects is often the efficient path.*

---

*This post is part of the [AI, ML & Algorithmic Devices](https://zechmeister-solutions.com/en/blog/category/ai-ml-devices) cluster in the [Subtract to Ship: MDR Blog](https://zechmeister-solutions.com/en/blog). For EU MDR certification consulting, see [zechmeister-solutions.com](https://zechmeister-solutions.com).*
