AI and machine learning components in medical devices are regulated under MDR the same way any other software function is regulated: by intended purpose, classification under Annex VIII Rule 11, and the software lifecycle requirements of Annex I Section 17. From 2025 onward, the EU AI Act (Regulation (EU) 2024/1689) adds a second regulatory layer on top of MDR for AI systems used in safety-critical contexts — including many medical devices. The MDR layer is mature and well understood. The AI Act layer is still settling. In 2026, founders have to build for both, know which rules come from which Regulation, and stay honest about what is settled and what is not.
By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.
TL;DR
- An AI-enabled medical device is a medical device first. MDR applies in full. The intended purpose test from Article 2(1) is the entry point, regardless of whether the function is deterministic code or a trained model.
- Most AI medical devices fall under Annex VIII Rule 11. Rule 11 pushes software that drives diagnostic or therapeutic decisions into Class IIa, IIb, or III depending on the severity of the decision. Very few AI medical devices are Class I.
- The EU AI Act (Regulation (EU) 2024/1689) layers additional obligations on top of MDR for AI systems in safety-critical use. Where an AI system is both a medical device under MDR and a high-risk AI system under the AI Act, both Regulations apply simultaneously.
- Locked algorithms are fully compatible with MDR today. Continuously learning algorithms that change after placing on the market are not yet supported by a clean regulatory pathway in the EU. Most devices ship locked or with controlled update cycles.
- Training data quality, bias testing, clinical evaluation tailored to AI-specific failure modes, and post-market drift detection are the four places where AI medical devices differ most from traditional SaMD in practice.
- What is settled: MDR applies, Rule 11 applies, EN 62304 lifecycle applies, EN ISO 14971 risk management applies. What is still settling: how Notified Bodies assess training data governance, the practical interface between MDR and the AI Act, and the treatment of post-market model updates.
The customer who can no longer find regulatory staff
Tibor tells a story about a customer of his second company, Flinn.ai, that captures where this whole field is right now. The customer is a medical device manufacturer with an active vigilance database. For years, two people did nothing but sit with Excel sheets and read through incoming complaints and safety reports, line by line, categorising each one manually. It was accurate work. It was also miserable work, and both of those people eventually quit because they could not face another year of scrolling through spreadsheets.
The customer installed Flinn.ai to pre-categorise the reports. The AI reads the incoming text, classifies it against the vigilance taxonomy, and flags the reports that need human attention. The two regulatory affairs staff the customer still has report that the AI saves them roughly eighty percent of the time they used to spend on first-pass categorisation. They can focus on the harder calls, the ambiguous cases, the ones that actually need human judgment.
There is a new risk. Tibor is clear about it. After the AI is right ten times in a row, the humans start trusting it. After twenty, they stop reading carefully. The complacency problem is not hypothetical — it is the failure mode most likely to bite a regulatory team that has integrated AI into its workflow. The solution is not to remove the AI. It is to build the process around the fact that the AI will eventually be wrong, and the humans still own the outcome.
That story is in this post for a reason. The field of AI medical devices is not only about new products. It is also about the regulatory work around those products changing. The founders reading this post are building AI into their devices at the same time their regulatory colleagues are using AI to assess those devices. Both sides of that equation sit under the same Regulations. Both sides have the same complacency risk. And both sides are in a period where the rules are partly settled and partly not.
The AI medical device landscape in 2026
Here is the state of play at the time of writing.
The MDR layer is mature. Regulation (EU) 2017/745 has been the binding regulation for medical devices in the EU since 26 May 2021. It does not treat AI as a special category. An AI-enabled medical device is a medical device, and MDR applies to it the same way MDR applies to any other device that happens to contain software. The qualification test from Article 2(1) is the entry point. If the product is intended by the manufacturer for a medical purpose — diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of disease, among the other categories listed in Article 2(1) — it is a medical device. The underlying implementation, whether it is a classical rule-based algorithm or a neural network, does not change that answer.
The AI Act layer is newer. The EU AI Act (Regulation (EU) 2024/1689) was adopted in 2024 and is being phased in over several years. It is a horizontal Regulation that applies across sectors to AI systems defined by its own criteria, with a risk-tiered structure: prohibited practices, high-risk systems, limited-risk systems, and minimal-risk systems. AI systems that are safety components of products covered by specific EU harmonisation legislation — including medical devices under MDR — fall into the high-risk tier when the product requires third-party conformity assessment, which in practice captures most AI medical devices because Rule 11 places them at Class IIa or above. Where an AI system is both a medical device under MDR and a high-risk AI system under the AI Act, both Regulations apply simultaneously, and the AI Act expects its obligations to be integrated into the existing MDR conformity assessment where possible, rather than duplicated as a parallel process.
We are being careful here. The interaction between the AI Act and sectoral Regulations like MDR is a genuinely evolving area. The AI Act text sets out the general principle that sectoral conformity assessment under MDR should be used as the channel for AI Act compliance in medical devices, but the detailed operational guidance is still being worked out between the European Commission, the Medical Device Coordination Group, Notified Bodies, and AI Act governance bodies. Anyone telling you the interface is fully settled in 2026 is overstating what is actually known. What is settled is that both Regulations apply. What is still being clarified is the exact operational mechanics of how an MDR-certified Notified Body assesses AI Act obligations that sit outside the traditional medical device domain — training data governance, transparency obligations, human oversight requirements, and the technical documentation set specific to AI systems.
For a founder in 2026, the honest answer is this. Build the device for MDR compliance properly — that is the stable ground. Track the AI Act obligations in parallel as a separate set of requirements to fold in. Expect the exact integration with your Notified Body's assessment process to be clarified as your project progresses, and expect to adjust. Do not pretend either Regulation is optional, and do not pretend the interface is finished.
AI as SaMD under MDR
Software as a Medical Device — the term that covers standalone software intended to perform a medical function without being part of a hardware medical device — is the category most AI medical devices fall into. We have a dedicated post on what SaMD means under MDR that covers the definitional work; this section focuses on the AI-specific angles.
MDCG 2019-11 Rev.1 (June 2025) is the guidance document that governs qualification and classification of software in the MDR context. It does not distinguish between AI-based software and classical software at the qualification level. The question is always the same: does the software perform a function that meets the medical device definition in Article 2(1)? If yes, it is a medical device; if no, it is not. An AI model that takes an ECG as input and returns a risk score for atrial fibrillation is a medical device. An AI model that takes a patient's text messages and turns them into a note for the doctor to read later, without making any medical claim, is probably not.
Once the software qualifies as a medical device, the classification rules in Annex VIII apply. Rule 11 is the rule that matters most for SaMD, and for AI in particular, because most AI medical devices exist precisely because they drive decisions that previously required a human clinician.
Classification: Rule 11 is where most AI medical devices land
Annex VIII Rule 11 classifies software intended to provide information used to take decisions with diagnosis or therapeutic purposes. The default under Rule 11 is Class IIa. The class moves up to IIb if those decisions can cause a serious deterioration of a person's state of health or a surgical intervention, and up to III if they can cause death or an irreversible deterioration of a person's state of health. Software intended to monitor physiological processes is Class IIa, rising to IIb when it monitors vital physiological parameters whose variations could put the patient in immediate danger. Other software — the rare residual category — is Class I.
The practical effect for AI medical devices is that Class I is very rare. Most AI products in MedTech are built precisely to support or drive clinical decisions, and that places them in IIa at a minimum. A diagnostic decision-support tool for routine conditions is typically IIa. A diagnostic decision-support tool for conditions where the wrong decision can cause serious harm — oncology, emergency triage, critical care — climbs to IIb. An AI-driven therapy control system where the wrong decision can kill someone is Class III.
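It can help to write Rule 11 down as the decision tree it effectively is and run your own product through it. The sketch below is our paraphrase in Python, not the legal text: the function name and the impact categories are ours, and the output is a starting point for the classification rationale you document with your Notified Body, nothing more.

```python
from enum import Enum

class MdrClass(Enum):
    I = "Class I"
    IIA = "Class IIa"
    IIB = "Class IIb"
    III = "Class III"

def rule_11_class(drives_diagnosis_or_therapy_decision: bool,
                  worst_credible_impact: str = "other",
                  monitors_physiological_processes: bool = False,
                  vital_parameters_immediate_danger: bool = False) -> MdrClass:
    # worst_credible_impact: "death_or_irreversible", "serious_or_surgical", or "other"
    if drives_diagnosis_or_therapy_decision:
        if worst_credible_impact == "death_or_irreversible":
            return MdrClass.III
        if worst_credible_impact == "serious_or_surgical":
            return MdrClass.IIB
        return MdrClass.IIA  # Rule 11 default for decision-driving software
    if monitors_physiological_processes:
        return MdrClass.IIB if vital_parameters_immediate_danger else MdrClass.IIA
    return MdrClass.I  # residual category; rare for AI medical devices

# An AI triage tool whose wrong call can cause serious deterioration of health:
print(rule_11_class(True, "serious_or_surgical"))  # MdrClass.IIB
```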
This matters because Class IIa and above require Notified Body involvement in the conformity assessment under MDR Article 52 and the corresponding annexes. No AI medical device at these classes self-certifies. You will work with a Notified Body. The Notified Body will assess your technical documentation, your QMS, and your clinical evaluation.
We have a deeper post on classification of AI and ML software under Rule 11 that walks through the class boundaries with examples.
The AI Act layer, honestly
The AI Act is a genuinely new layer. It is not a replacement for MDR. It does not remove any MDR obligation. It adds a set of horizontal obligations that apply to AI systems regardless of the product category they sit in, and then it tells the sectoral Regulations how to absorb those obligations into their existing conformity assessment process where possible.
The high-level obligations from the AI Act that matter most for medical device founders are the ones around training data quality, documentation of the AI system's design and development, transparency to users about the fact that they are interacting with an AI system, human oversight requirements appropriate to the use context, robustness and accuracy testing, and post-market monitoring of the AI system's performance. These are not new ideas. Most of them map onto obligations that already exist under MDR for software in one form or another — the risk management process in EN ISO 14971 already pushes you toward thinking about model failure modes, the clinical evaluation required under MDR Article 61 already pushes you toward demonstrating accuracy in the intended use population, and the post-market surveillance obligations under MDR Articles 83-86 already push you toward monitoring real-world performance. The AI Act adds specificity and formalism around each of these areas, and it adds some genuinely new requirements around training data documentation and transparency.
We are not going to cite specific AI Act article numbers in this post, because we do not want to misquote a Regulation that is still being operationalised. We have a dedicated post on the interaction between the EU AI Act and MDR that will be updated as the practical guidance solidifies and as we confirm the specific provisions against the official text. For now, the principle to take away is this: if you are building an AI medical device, plan for two Regulations, not one, and expect to read the AI Act carefully against your specific product.
Locked algorithms versus adaptive algorithms
This is the single topic where AI medical devices diverge most sharply from traditional SaMD.
A locked algorithm is one that does not change after it is placed on the market, or that only changes through controlled update releases that go through the manufacturer's change management process. Every update is a defined event. Every update can be risk-assessed, clinically evaluated where needed, and documented. The locked-algorithm model fits cleanly into the MDR framework because MDR is built around the assumption that a device has a defined configuration at the point of placing on the market, and that significant changes trigger a re-assessment. Locked algorithms ship under MDR today without particular difficulty.
A continuously adaptive algorithm — one that updates its weights automatically as it encounters new data in the field, without a discrete release event — is a different case. The MDR framework does not contain a clean provision for devices whose behaviour changes silently between certification audits. A device that is Class IIa on the day of certification and that retrains itself weekly is technically a different device each week, and the regulatory status of those weekly changes is not cleanly addressed in the current MDR text. This is one of the open questions in the field, and it has been discussed between regulators and industry for years.
In 2026, the practical consensus is that most AI medical devices ship locked or with a controlled predefined change plan that specifies in advance how and when the algorithm can update, with each class of change pre-assessed. The idea of a fully autonomous continuously-learning algorithm operating without a defined change control envelope is not currently supported by a clean CE marking pathway in the EU. Founders building adaptive systems should expect to define their change envelope up front, commit to it in the technical documentation, and design update cycles that can be audited by the Notified Body.
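What a predefined change control envelope looks like in practice varies by product and by Notified Body, but it helps to think of it as data that is versioned alongside the model. The sketch below is a minimal illustration in Python; every field name and threshold is our own assumption, not a regulatory template. The point it makes is that each permitted class of change is defined, bounded, and pre-assessed before the first conformity assessment.

```python
# Minimal sketch of a predefined change control envelope, kept under version
# control next to the model. Field names and thresholds are illustrative only.
CHANGE_ENVELOPE = {
    "model": "af-risk-classifier",          # hypothetical device
    "locked_baseline": "v1.3.0",
    "permitted_changes": [
        {
            "type": "retraining_on_new_data",
            "trigger": "quarterly release cycle",
            "constraints": {
                "architecture_frozen": True,
                "intended_purpose_unchanged": True,
                "training_data_sources": ["approved registries only"],
            },
            "acceptance": {
                "test_set": "held-out set v2, never used in training",
                "min_sensitivity": 0.92,
                "min_specificity": 0.88,
                "max_subgroup_sensitivity_drop": 0.03,
            },
            "rollback": "previous locked version redeployed within 24 hours",
        },
    ],
    "excluded_changes": [
        "architecture changes",
        "new input modalities",
        "new or broader intended purpose",
    ],
}
```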
We cover this topic in more depth in our post on locked versus adaptive AI algorithms under MDR.
Training data, bias, and representativeness
The training data is the product, in a sense that classical software never experienced. A bug in classical software comes from a line of code. A bug in an AI medical device often comes from a data distribution — the system works correctly on the patients who look like the training set and fails on the patients who do not.
Under MDR, the obligation to manage this risk already exists. Annex I Section 17 sets out the specific requirements for software. It requires software to be developed and manufactured in accordance with the state of the art, taking into account the principles of the development life cycle, risk management, and information security. It also requires software intended for use in combination with mobile computing platforms to be designed with the specific features of the platform in mind. These requirements apply to AI medical device software the same way they apply to any other software. Training data governance is where a risk-based reading of Annex I Section 17 meets the risk management obligations of EN ISO 14971:2019+A11:2021.
EN ISO 14971 is the harmonised standard for risk management for medical devices. For AI, the relevant failure modes include bias in training data (the system is systematically worse for a subgroup of patients), distribution shift (the population the device sees in the field differs from the training population), adversarial robustness (the system fails on inputs that look normal to a human but trip the model), and explainability gaps (the clinician cannot tell why the model made a particular prediction, which affects appropriate use). Each of these is a hazard that has to be identified, evaluated, and controlled under the ISO 14971 process.
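To make that concrete, here is a minimal sketch of how the AI-specific hazards might appear as entries in a risk file. The structure and wording are ours, for illustration only; your actual risk file follows your own ISO 14971 process and templates.

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    hazard: str             # what can go wrong
    foreseeable_cause: str  # why it happens
    harm: str               # clinical consequence
    risk_control: str       # mitigation to implement and verify

# Illustrative AI-specific entries; wording is ours, not a template.
AI_HAZARDS = [
    Hazard("Bias against a patient subgroup",
           "Subgroup under-represented in the training data",
           "Missed or delayed diagnosis in that subgroup",
           "Representativeness analysis; subgroup-level acceptance criteria"),
    Hazard("Distribution shift in the field",
           "Deployment population or acquisition hardware differs from training",
           "Silent accuracy degradation",
           "Input-distribution monitoring with escalation thresholds in PMS"),
    Hazard("Out-of-distribution or adversarial input",
           "Inputs outside the validated envelope are still scored",
           "Confidently wrong output",
           "Out-of-distribution detection; abstain-and-refer behaviour"),
    Hazard("Automation complacency",
           "Clinician over-trusts the model after repeated correct outputs",
           "Erroneous output accepted without review",
           "Human-oversight design; periodic blinded review of accepted outputs"),
]
```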
EN 62304:2006+A1:2015 is the software lifecycle standard referenced by MDR Annex I Section 17.2. It was written before the AI era and does not have an AI-specific track, but its core discipline — documented requirements, architectural design, and unit, integration, and system testing against defined acceptance criteria — still applies. AI development teams in 2026 run a lifecycle that combines EN 62304 process discipline with AI-specific data management practices (dataset versioning, distribution monitoring, test set isolation) that are not in EN 62304 itself but are expected by every competent Notified Body.
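Dataset versioning and test set isolation do not need exotic tooling. A minimal sketch, assuming a simple JSON manifest per dataset: fingerprint the exact files behind each model version so an auditor can later tie a specific release to the precise training and test data it was built and evaluated on. The manifest format and file layout here are our own assumptions.

```python
import hashlib
import json
import pathlib

def dataset_fingerprint(manifest_path: str) -> str:
    """Hash every file listed in a dataset manifest so the exact dataset
    behind a given model version can be pinned and audited later."""
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    digest = hashlib.sha256()
    for file_path in sorted(manifest["files"]):  # stable ordering
        digest.update(pathlib.Path(file_path).read_bytes())
    return digest.hexdigest()

# Recorded in the technical documentation next to the model version, e.g.:
#   model v1.3.0 -> training set sha256 ..., held-out test set sha256 ...
# The test set manifest lives outside the reach of every training pipeline,
# which is the practical meaning of test set isolation.
```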
The AI Act adds specificity on training data quality — representativeness, relevance to the intended use population, documentation of provenance, and bias mitigation. For an MDR project, the honest move in 2026 is to treat the AI Act's data governance expectations as the direction of travel and to build a data governance file that would satisfy an auditor looking at either Regulation.
Clinical evaluation for AI
Clinical evaluation under MDR Article 61 and Annex XIV does not change conceptually for AI medical devices. The manufacturer still has to demonstrate that the device performs as intended and that the benefits outweigh the risks. What changes is what that demonstration has to include.
For AI, the clinical evaluation has to address: accuracy on the intended use population with the same demographic and clinical distribution the device will see in the field; performance in subgroups where the risk of bias exists; failure mode characterisation (where does the model fail, and how does it fail — silently or loudly); and the interaction with the clinician in the loop (if the device is decision-support, how does the clinician's trust and workflow affect the net outcome, because a perfect model that clinicians ignore is a worse product than an 80% model that clinicians use correctly).
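The subgroup breakdown is usually the part that gets skipped, and it is cheap to automate. A minimal sketch, assuming a results table from an independent test set with one row per case; the file name, column names, and the subgroup variable are our own placeholders.

```python
import pandas as pd
from sklearn.metrics import recall_score, precision_score

# results.csv: one row per case from the independent test set, with columns
# y_true, y_pred and a subgroup column such as age_band or sex (names are ours).
df = pd.read_csv("results.csv")

rows = []
for subgroup, grp in df.groupby("age_band"):
    rows.append({
        "subgroup": subgroup,
        "n": len(grp),
        "sensitivity": recall_score(grp["y_true"], grp["y_pred"]),
        "ppv": precision_score(grp["y_true"], grp["y_pred"]),
    })
print(pd.DataFrame(rows))

# A clinically meaningful gap between subgroups is a finding for the clinical
# evaluation report and the risk file, not something to average away.
```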
The evidence sources are the same as for any other device: literature, equivalence, and clinical investigation. For AI, literature is rarely sufficient on its own because the specific model is new. Equivalence is difficult because two AI models with the same intended purpose can behave very differently. Clinical investigation, or at least a retrospective performance study on an independent dataset, is usually part of the clinical evidence package. We have a full post on clinical evaluation for AI and ML medical devices that walks through the evidence expectations.
Post-market surveillance and drift detection
Post-market surveillance is where AI medical devices need a discipline that traditional devices do not. The reason is drift.
A classical device deployed in the field does not change its behaviour because the field changed. An AI device deployed in the field can effectively change its behaviour — not because the model changed, but because the distribution of inputs drifted. A diagnostic model trained on one hospital's patient mix that gets deployed in a different hospital may degrade silently. Seasonal disease patterns shift the input distribution. New imaging hardware changes the pixel statistics. The model has not moved, but its effective accuracy has.
MDR Articles 83-86 require every manufacturer to have a PMS system proportionate to the risk class and appropriate for the device. MDCG 2025-10 (December 2025) describes what this looks like in practice. For an AI medical device, an appropriate PMS system has to include drift detection: monitoring of input distributions, monitoring of model outputs, monitoring of clinical outcomes where possible, and a mechanism to detect and respond to degradation before it causes harm.
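Drift detection does not have to start sophisticated. A minimal sketch: compare a rolling window of one production input feature against the reference distribution the model was validated on, using a two-sample Kolmogorov-Smirnov test, and escalate to the PMS process when the distributions diverge. The threshold and cadence here are illustrative assumptions, not recommendations.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference: np.ndarray, recent: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test on a single input feature.
    `reference` is the distribution the model was validated on;
    `recent` is a rolling window from production."""
    _statistic, p_value = ks_2samp(reference, recent)
    return p_value < p_threshold  # True means: escalate to PMS review

# In practice this runs per feature and per output score on a defined cadence,
# and a triggered alert feeds the PMS process; it never changes the model on its own.
```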
This is where the Flinn.ai complacency story applies back to the device side of the equation. If the PMS system exists on paper but is not actively watched, the drift happens silently and the first indication is an incident report from the field. A good PMS system for an AI medical device is instrumented, automatic where possible, and has a human-in-the-loop review cadence that does not depend on the good mood of an overworked RA manager. We have a full post on post-market surveillance for AI medical devices that covers the operational patterns.
EN IEC 81001-5-1:2022 — the harmonised standard for cybersecurity activities across the health software lifecycle — is relevant here too, because AI systems introduce their own cybersecurity risks (model theft, adversarial inputs, data poisoning during updates) that belong in the same security lifecycle as every other software threat.
The Subtract to Ship approach for AI MedTech
Everything in this blog comes back to the Subtract to Ship framework for MDR. For AI medical devices, the four passes apply with the same discipline.
The Purpose Pass asks whether the AI feature has to be a medical device at all. Not every AI feature built into a MedTech product is a medical device. An AI that generates marketing copy for the product website is not. An AI that helps the RA team categorise internal documents is not. An AI that produces a diagnostic output for a clinician is. The Purpose Pass draws the line and cuts the AI features that do not need regulatory scope out of the regulated product. Scoping this correctly often reduces the regulated surface of the product dramatically.
The Classification Pass walks through Rule 11 carefully. Not every AI decision-support feature is IIb. The severity of the decision the information is used to make is the driver. Being precise about what the device actually does — diagnosis versus screening, therapy versus information, critical condition versus routine — can legitimately move the class one level and save significant conformity assessment cost.
The Evidence Pass asks what the minimum defensible clinical evidence looks like. For AI, this usually means a retrospective performance study on an independent, representative dataset, combined with targeted literature, combined with — where the risk class demands it — a prospective study. It also means a training data governance file that is thorough enough to withstand scrutiny from both an MDR auditor and an AI Act auditor, without duplicating work.
The Operations Pass asks what the minimum QMS and PMS system looks like for an AI medical device specifically. The answer is a QMS that includes AI-specific processes — dataset governance, model versioning, drift monitoring, revalidation triggers — on top of the standard EN ISO 13485 process backbone. A startup that tries to bolt these onto a QMS template written for hardware devices will find the seams. Building them in from the start is cheaper.
For a broader view of how AI is changing the regulatory work itself — not just the devices being regulated — see our posts on Flinn.ai and AI tools transforming regulatory work and the AI advantage in regulatory affairs for startups.
Reality Check — Where do you stand?
- Can you state the intended purpose of your AI feature in one precise sentence, and can you map that sentence to a specific category in MDR Article 2(1)?
- Have you applied Annex VIII Rule 11 explicitly to your product, with the class and the sub-clause documented, or are you guessing at the class?
- Have you identified the AI-specific failure modes (bias, drift, adversarial robustness, explainability gaps) in your risk management file under EN ISO 14971, or is your risk file written as if the software were classical?
- Do you have a training data governance file that documents dataset provenance, representativeness analysis for the intended use population, and test set isolation?
- Is your algorithm locked, or do you have a defined change control envelope for updates, documented before the first conformity assessment?
- Does your clinical evaluation include performance data on the intended use population, broken down by relevant subgroups, with an independent test set?
- Does your PMS plan include active drift detection with a defined cadence and defined thresholds for escalation, or is it passive complaint handling only?
- Have you mapped the EU AI Act obligations to your product separately from the MDR obligations, so you know which Regulation each requirement comes from?
- If an auditor asked you "why is this AI safe," do you have an answer that is not "because we tested it"?
Frequently Asked Questions
Is an AI medical device regulated differently from any other medical device under MDR? No. Under MDR, an AI medical device is a medical device. The same articles, annexes, and harmonised standards apply. What changes is the practical content of the technical documentation — risk analysis has to cover AI-specific failure modes, clinical evaluation has to address accuracy on the intended use population, and post-market surveillance has to include drift detection. The Regulation does not carve out AI as a separate category.
Does the EU AI Act replace MDR for AI medical devices? No. The AI Act layers additional obligations on top of MDR. Where an AI system is both a medical device under MDR and a high-risk AI system under the AI Act, both Regulations apply simultaneously. The AI Act text sets out the principle that sectoral conformity assessment under MDR should serve as the channel for AI Act compliance in medical devices, but the detailed operational interface is still being clarified by the Commission and the Medical Device Coordination Group in 2026.
Can I ship a continuously learning AI medical device in the EU today? Not cleanly, in 2026. MDR is built around the assumption of a defined device configuration at the point of placing on the market, and significant changes trigger re-assessment. A fully autonomous continuously-learning algorithm without a defined change envelope does not fit this framework. The practical pathway today is a locked algorithm or a predefined change control plan that specifies in advance how and when updates can occur.
What class is an AI diagnostic decision-support tool under MDR? It depends on the severity of the decision the information is used to make, per Annex VIII Rule 11. A tool supporting routine diagnostic decisions is typically Class IIa. A tool supporting decisions where a wrong call can cause serious deterioration of health is Class IIb. A tool supporting decisions that can cause death or irreversible harm is Class III. Very few AI diagnostic tools are Class I.
Do I need a Notified Body for an AI medical device? Almost always yes. Because Rule 11 places most AI decision-support and monitoring software at Class IIa or higher, Notified Body involvement is required under MDR Article 52. Only the small residual category of software that does not drive or support clinical decisions can be Class I and self-certified, and most AI products do not fall there.
How does post-market surveillance differ for an AI medical device? It adds drift detection. A classical device does not change its behaviour because its environment changed. An AI device can effectively change its behaviour as the input distribution drifts. PMS for AI medical devices should include monitoring of input distributions, model outputs, and clinical outcomes where possible, with defined thresholds for escalation. MDCG 2025-10 (December 2025) describes the general PMS framework; drift detection is the AI-specific operational layer the manufacturer adds on top of it.
What is settled versus unsettled about AI medical device regulation in 2026? Settled: MDR applies in full; Rule 11 applies to AI software; EN 62304 lifecycle applies; EN ISO 14971 risk management applies; Class IIa and above require Notified Body assessment. Still settling: the detailed operational interface between MDR conformity assessment and AI Act obligations; Notified Body practice on training data governance assessment; the treatment of post-market model updates beyond simple change control.
Related reading
- What Is the EU Medical Device Regulation (MDR)? — the foundation every AI MedTech founder should read first, because AI medical devices are medical devices before they are AI.
- The Subtract to Ship Framework for MDR Compliance — the methodology that runs through every post in this blog, applied here to AI MedTech.
- What Is Software as a Medical Device (SaMD) Under MDR? — the broader SaMD context that AI medical devices sit inside.
- Machine Learning Medical Devices Under MDR — the companion post that focuses specifically on ML model development under MDR discipline.
- The EU AI Act and MDR: How the Two Regulations Interact — the detailed post on the AI Act layer, updated as the operational guidance solidifies.
- Classification of AI and ML Software Under Rule 11 — the practical walk-through of Annex VIII Rule 11 for AI products.
- Locked Versus Adaptive AI Algorithms Under MDR — the open question on continuous learning and the practical paths that exist today.
- Clinical Evaluation for AI and ML Medical Devices — the evidence expectations specific to AI products.
- Post-Market Surveillance for AI Medical Devices — drift detection and the operational PMS patterns.
- Flinn.ai and AI Tools Transforming Regulatory Work — the other side of the equation: how AI is changing the work of the regulatory teams reading this post.
- The AI Advantage in Regulatory Affairs for Startups — how a small team can use AI tooling to run a regulated product without hiring an army.
Sources
- Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 2(1) (definition of medical device), Article 52 (conformity assessment procedures), Article 61 (clinical evaluation), Articles 83-86 (post-market surveillance), Annex I (GSPR, in particular Section 17 on electronic programmable systems and software), Annex VIII (classification rules, in particular Rule 11). Official Journal L 117, 5.5.2017.
- Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Referenced by name for the general framing of the AI Act layer; specific article references are intentionally not cited in this post until we have verified them against the official text. Founders should consult the official text on EUR-Lex.
- MDCG 2019-11 Rev.1 — Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 — MDR and Regulation (EU) 2017/746 — IVDR, October 2019, Revision 1 June 2025.
- EN 62304:2006 + A1:2015 — Medical device software — Software life-cycle processes.
- EN ISO 14971:2019 + A11:2021 — Medical devices — Application of risk management to medical devices.
- EN IEC 81001-5-1:2022 — Health software and health IT systems safety, effectiveness and security — Part 5-1: Security — Activities in the product life cycle.
- MDCG 2025-10 — Guidance on post-market surveillance of medical devices and in vitro diagnostic medical devices, December 2025.
This post is the pillar for the AI, Machine Learning and Algorithmic Devices category in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. The field moves quickly and this post will be updated as the operational interface between MDR and the EU AI Act is clarified. If your product sits at this intersection and the general framing here does not resolve your specific case, that is expected — the complexity is real, the stakes are real, and this is exactly where a sparring partner who has walked other AI MedTech founders through the same decisions earns their keep.