MDR Annex VIII Rule 11 classifies medical device software in three layers. The decision-making branch sends software that provides information used for diagnostic or therapeutic decisions to Class IIa by default, Class IIb if the decisions could cause serious deterioration of health or surgical intervention, and Class III if they could cause death or irreversible deterioration. The monitoring branch sends software intended to monitor physiological processes to Class IIa, or Class IIb when the parameters are vital and variations could cause immediate danger. Everything else — the catch-all — falls to Class I, but that bucket is narrow. This deep dive walks each branch, each threshold, and the real-world classification calls a SaMD startup actually faces.

By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.


TL;DR

  • Rule 11 sits in Annex VIII of Regulation (EU) 2017/745 and applies only after software has qualified as a medical device under Article 2(1). Qualification first, Rule 11 second.
  • The first sentence of Rule 11 scopes it to software intended to provide information used to take decisions. That scope is deliberately broad and covers clinical decision support, not only autonomous diagnostic software.
  • The decision-making branch has three levels: Class IIa (default), Class IIb (serious deterioration or surgical intervention), Class III (death or irreversible deterioration).
  • The monitoring branch has two levels: Class IIa (default), Class IIb (vital physiological parameters where variations could result in immediate danger). There is no Class III under the monitoring branch inside Rule 11.
  • "All other software" falls to Class I — but this catch-all is narrow, and most SaMD startups that aim for it end up in IIa once MDCG 2019-11 Rev.1 is applied.
  • MDCG 2019-11 Rev.1 (June 2025) provides worked examples across clinical domains. Those examples are the calibration tool for edge cases.
  • The conformity assessment route under MDR Article 52 flows from the class assigned under Article 51 and Annex VIII. Getting Rule 11 right is the hinge for the rest of the project.

Why a deep dive on one classification rule

Rule 11 is short on the page and dense in consequence. Post 081 walks the rule at the entry level — enough to reach a class. This post goes further, because the edges of Rule 11 are where most startups actually live. The default case is not the hard case. The hard case is the decision-support tool that sits between IIa and IIb, the monitoring function that may or may not count as vital, the module that argues it is outside the rule entirely, and the AI feature whose "information" output is harder to pin down than a traditional score.

A founder who only knows the three levels but not the thresholds between them will over-classify or under-classify. Over-classification wastes a year of runway on a Notified Body route that was not required. Under-classification collapses at the technical file review and forces a restart at the higher class with no prior work reusable. The cost of a wrong call is measured in quarters, not weeks.

This deep dive assumes you have already read post 081 and post 371, and that your software has already qualified as MDSW under Article 2(1) through the MDCG 2019-11 Rev.1 decision tree. Qualification first, Rule 11 second — there is no way to apply Rule 11 correctly to software that has not yet been established as a medical device. If you are still at the qualification stage, post 371 and post 374 are the right starting points.

The first sentence of Rule 11 — scoping the rule itself

Rule 11 opens by establishing what kind of software it covers. In substance (the exact wording in Annex VIII is what governs and must be read in full at the source), the rule addresses software intended to provide information which is used to take decisions with diagnostic or therapeutic purposes, and software intended to monitor physiological processes. Together those two clauses define the scope of the rule. Everything else — the "all other software" clause — is the residual that the first two clauses do not capture. (Regulation (EU) 2017/745, Annex VIII, Rule 11.)

Two phrases in the scoping sentence do the real work and are worth unpacking before any branch analysis.

"Information used to take decisions." MDCG 2019-11 Rev.1 reads "information" broadly. It covers risk scores, severity gradings, classifications, priority flags, recommended actions, interpretive annotations, thresholded alerts, and any other output the software produces that a clinician or patient treats as an input to a clinical decision. The word "used" does not require autonomous decision-making — Rule 11 covers decision support that a clinician acts on as one input among several. This is the clause that catches most SaMD. Founders who frame their product as "just providing information" are not outside the rule. They are inside its primary scope.

"Diagnostic or therapeutic purposes." These are the purposes that drag the decision-making branch in. A decision about diagnosis, about treatment selection, about dosing, about screening follow-up, about referral, about triage, about discharge — each of these is a diagnostic or therapeutic decision. Decisions that are purely administrative, scheduling, logistical, or educational are not. The distinction is about the nature of the decision the information informs, not about who makes it or how.

The scoping sentence is the filter. If your software's output is used for a diagnostic or therapeutic decision about an individual patient, you are in the decision-making branch. If your software watches a physiological parameter over time and does something with the signal, you are in the monitoring branch. If neither applies, you may be in the catch-all — but that test has to be argued explicitly, not assumed.
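That three-way filter can be sketched as a toy helper. This is an illustrative sketch only, not legal logic: the two boolean inputs stand in for genuinely interpretive questions that have to be answered against the intended purpose under Article 2(12) and against MDCG 2019-11 Rev.1, and the function and parameter names are invented for this example.

```python
def rule_11_branches(informs_clinical_decision: bool,
                     monitors_physiological_process: bool) -> list:
    """Simplified sketch of Rule 11's first-sentence scoping filter.

    Both clauses can apply to one product; the eventual class then comes
    from whichever applicable branch yields the higher class for the full
    intended purpose. An empty result leaves only the narrow "all other
    software" catch-all, which must be argued explicitly, never assumed.
    """
    branches = []
    if informs_clinical_decision:           # output used for a diagnostic
        branches.append("decision-making")  # or therapeutic decision
    if monitors_physiological_process:      # watches a physiological signal
        branches.append("monitoring")
    return branches or ["catch-all (Class I candidate)"]
```

Note that the sketch returns both branches when both clauses apply, rather than picking one: the rule does not let a product choose its friendlier branch.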

The decision-making branch — IIa, IIb, III in detail

The decision-making branch starts at Class IIa and escalates through two thresholds. Both thresholds are defined by the ceiling of clinical harm the software's information could cause if the decision it informs is wrong. Ceiling, not probability — Rule 11 does not ask how likely the software is to fail. It asks what the worst plausible consequence is if it does.

Class IIa — the default. Software in this branch is at least IIa. To stay at IIa, the clinical consequence of a wrong output has to be bounded. The patient could be harmed, but not to the level of serious deterioration, not into surgical intervention, not fatally, and not irreversibly. Most general-purpose clinical decision-support tools that support non-critical decisions — medication reminders with interaction checking, low-acuity triage, chronic disease management coaching — sit here when the intended purpose is written honestly.

Class IIb — the first escalation. The rule escalates to IIb when the decisions informed by the software could cause either of two things: a serious deterioration of a person's state of health, or a surgical intervention. Both triggers are independent, and meeting either one is enough.

"Serious deterioration" is the same phrase the MDR uses in the vigilance framework. It refers to clinical consequences where the patient's trajectory materially worsens — extended hospitalisation, permanent but non-fatal harm, significant functional loss, a condition that would not otherwise have occurred. It is not a minor, reversible setback. It is not a short-term side effect. It is a real, clinically significant change in the patient's course.

"Surgical intervention" covers both a surgical procedure triggered by a wrong output — for example, unnecessary surgery prompted by a false positive — and a surgical procedure avoided incorrectly because of a false negative. Both directions of error count. Software that contributes to the decision whether to operate is a candidate for IIb on the surgical-intervention trigger alone, regardless of the severity consideration.

Worked examples in MDCG 2019-11 Rev.1 that typically land at IIb include image analysis software for conditions where a missed finding leads to delayed serious treatment, decision support for medication dosing in conditions where errors cause significant harm, and scoring tools that drive referral to invasive procedures.

Class III — the second escalation. The rule escalates to III when the decisions informed by the software could cause either of two things: death, or an irreversible deterioration of a person's state of health. Both triggers are independent, and meeting either one is enough.

"Death" is unambiguous. "Irreversible deterioration" means harm from which the patient cannot recover — permanent organ damage, permanent functional loss, permanent disability of a kind that cannot be corrected by further treatment. Temporary serious harm is IIb. Permanent serious harm is III.

Class III SaMD is rare but real. Software that drives dosing for high-risk therapies where an overdose is fatal, software that interprets imaging for conditions where a missed finding is fatal within the clinical time window, and software that controls the behaviour of life-critical therapy systems are candidates. If your software's worst plausible failure mode is a fatality or a permanent, non-recoverable harm, the class is III and the conformity assessment route changes accordingly under MDR Article 52.

The question to ask at each level is the same: if the software's output is wrong and a clinician acts on it in good faith, what is the worst plausible clinical consequence? The answer to that question sets the class. Risk controls under EN ISO 14971:2019+A11:2021 reduce the residual risk inside the class you have landed in — they do not move the class. This is the point where MDD veterans most often miscalibrate.
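The escalation logic of the decision-making branch reduces to a small worst-consequence lookup. A minimal sketch, assuming the harm-ceiling labels below as simplified stand-ins for the rule's actual wording; deciding which ceiling applies is the clinical judgment call the sketch cannot make for you.

```python
from enum import Enum

class HarmCeiling(Enum):
    """Worst plausible clinical consequence of acting on a wrong output."""
    BOUNDED = 1                # harm below serious deterioration, no surgery
    SERIOUS_OR_SURGICAL = 2    # serious deterioration OR surgical intervention
    DEATH_OR_IRREVERSIBLE = 3  # death OR irreversible deterioration

def classify_decision_branch(ceiling: HarmCeiling) -> str:
    """Decision-making branch of Rule 11: IIa floor, two escalations.

    Risk controls under EN ISO 14971 appear nowhere in this function:
    they reduce residual risk inside the class, they never lower the
    harm ceiling that sets the class.
    """
    if ceiling is HarmCeiling.DEATH_OR_IRREVERSIBLE:
        return "III"
    if ceiling is HarmCeiling.SERIOUS_OR_SURGICAL:
        return "IIb"
    return "IIa"  # the floor of this branch; never Class I
```

For example, software that contributes to the decision whether to operate maps to `classify_decision_branch(HarmCeiling.SERIOUS_OR_SURGICAL)`, i.e. IIb, on the surgical-intervention trigger alone.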

The monitoring branch — IIa and IIb, and the conjunctive test

The monitoring branch covers software intended to monitor physiological processes. This is the branch for software that watches heart rhythms, respiratory parameters, glucose levels, EEG signals, haemodynamic indicators, and any other physiological signal processed over time to detect changes or produce clinical output.

Class IIa — the default. Software that monitors physiological processes is at least IIa. A glucose monitoring tool, a home sleep tracker with clinical claims, a chronic disease monitoring platform, and an ambulatory rhythm monitor all start here.

Class IIb — the single escalation. The escalation to IIb depends on two conditions that both have to be satisfied. This is a conjunctive test, not a disjunctive one, and founders misread it in both directions.

The first condition is that the parameters being monitored are vital physiological parameters. "Vital" is the language of the rule. It refers to parameters whose acute behaviour is life-critical — the classical vitals (heart rate, respiratory rate, blood pressure, oxygen saturation, temperature in certain clinical settings) and extensions into parameters whose acute behaviour drives immediate clinical action in acutely unwell patients. Monitoring a non-vital parameter, even a clinically useful one, does not meet this condition.

The second condition is that the nature of the variations of those parameters is such that they could result in immediate danger to the patient. "Immediate" is doing work here. It refers to danger that arises on a timescale where minutes matter, not hours or days. A slow drift in a vital parameter that produces danger over days fails this condition. A sudden change that produces danger within minutes meets it.

Both conditions have to hold at once for the escalation to IIb to apply. Vital parameter plus immediate danger. Monitoring a non-vital parameter where rapid variations could cause harm stays at IIa. Monitoring a vital parameter where only slow drift matters stays at IIa. To reach IIb in the monitoring branch, the software has to be watching something vital and producing output on a timescale where a failure would leave the clinician with no time to recover.

There is no Class III inside the monitoring branch as written. A pure monitoring function caps at IIb. If a product combines monitoring with therapeutic decision support whose failure could cause death or irreversible deterioration, the product is assessed under the decision-making branch instead, and Class III becomes available through that branch. The classification sits in whichever branch captures the highest clinical consequence of the full intended purpose.
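The conjunctive test and the IIb cap can be made explicit in the same sketch style. Again an illustrative simplification with invented names: "vital" and "immediate" are interpretive calls made against the rule's text and the MDCG worked examples, not booleans in real life.

```python
def classify_monitoring_branch(vital_parameter: bool,
                               immediate_danger: bool) -> str:
    """Monitoring branch of Rule 11: IIa default, one conjunctive escalation.

    IIb requires BOTH conditions at once: a vital physiological parameter
    AND variations that could result in immediate danger (minutes, not
    days). Either condition alone stays at IIa. The branch caps at IIb;
    there is no Class III here.
    """
    if vital_parameter and immediate_danger:
        return "IIb"
    return "IIa"

# When one product falls in both branches, the highest class governs.
_ORDER = {"I": 0, "IIa": 1, "IIb": 2, "III": 3}

def governing_class(*classes: str) -> str:
    """Pick the class that captures the highest clinical consequence."""
    return max(classes, key=_ORDER.__getitem__)
```

A product combining vital-sign monitoring with life-critical decision support would evaluate as `governing_class(classify_monitoring_branch(True, True), "III")`, which resolves to III: the monitoring branch contributes at most IIb, and Class III is only ever reached through the decision-making branch.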

The Class I catch-all — when does it actually apply

The third layer of Rule 11 is "all other software" — software that qualifies as a medical device under Article 2(1) but does not provide information used for diagnostic or therapeutic decisions and does not monitor physiological processes. This is the Class I bucket. It is narrow.

Legitimate candidates for Class I under Rule 11 include patient diaries that do not interpret or score the entries, pure data-capture tools whose clinical output is zero, appointment and scheduling software with a medical intended purpose that does not cross into clinical decision support, and medical reference databases whose output is general information rather than patient-specific interpretation. The common thread is that the software qualifies as a medical device because of its intended purpose but does not do the specific things that Rule 11's first two layers cover.

The discipline test is this: if you think you are Class I, write the one-sentence argument for why your software is outside both the decision-making branch and the monitoring branch, and then try to falsify it against MDCG 2019-11 Rev.1. The falsification attempt usually succeeds. The software turns out to produce a score, flag an abnormality, recommend a follow-up, or track a physiological signal — any of which pulls it back into the first two layers. Founders who genuinely land at Class I after this exercise are the exception. Assume IIa until MDCG 2019-11 Rev.1 proves otherwise.

Class I under Rule 11 is still a regulated medical device. It still needs a QMS, technical documentation, a declaration of conformity, PMS, and vigilance. It just does not need a Notified Body for the conformity assessment in most cases — self-declaration is available. The class difference matters for cost and timeline, but it does not remove the core obligations.

MDCG 2019-11 Rev.1 — the worked examples as the calibration tool

MDCG 2019-11 Rev.1 — Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 — MDR and Regulation (EU) 2017/746 — IVDR — is the definitive interpretation of Rule 11. Revision 1 was published in June 2025 and is the current version. Anything older is superseded. (MDCG 2019-11, Revision 1, June 2025.)

For Rule 11 specifically, the worked examples in MDCG 2019-11 Rev.1 are the most valuable part of the document. The examples cover imaging (PACS and image-processing tools, image analysis for specific conditions), cardiology (ECG interpretation, arrhythmia detection), diabetes (insulin-dose calculators and continuous glucose monitoring interpretation), oncology (treatment-planning support), dermatology (lesion classification tools), and several other domains. Each example states the intended purpose, walks through the qualification test, and then applies Rule 11 to land on a defensible class.

The practical move is to find the example that most closely resembles your software, anchor your classification argument to it, and state the anchor explicitly in your technical documentation. "Our software is most similar to MDCG 2019-11 Rev.1 worked example X, and applies Rule 11 in the same way, reaching class Y because of Z." That sentence is much harder for a Notified Body to dispute than an argument built from the rule's text alone. For the full reading of the guidance, see post 374.

The other thing MDCG 2019-11 Rev.1 does that matters for edge cases is address modules. If your product is a platform with one medical module and the rest non-medical, the guidance tells you how to draw the boundary, scope the technical file around the module, and classify the module under Rule 11 without dragging the whole platform in. This is the scoping move most founders miss and the one that saves the most downstream work.

Edge cases startups actually hit

Six edge cases show up repeatedly in the calls we run with SaMD startups. Each one is a place where Rule 11's text alone does not settle the question and MDCG 2019-11 Rev.1 does.

The clinician-in-the-loop question. Founders argue their software is not decision-support because a clinician reviews every output. Rule 11 and MDCG 2019-11 Rev.1 both cover this — clinician-in-the-loop is within scope. The presence of a clinician does not drop the classification below IIa when the software's information is used for a diagnostic or therapeutic decision. The clinician is not the exit from Rule 11; the clinician is the user the rule was written for.

The decision-support-for-patients question. Software that produces output a patient uses to make a decision about their own care — a self-management tool with clinical output, a patient-facing risk score — is in the decision-making branch in the same way as software intended for clinicians. The branch is defined by the nature of the decision, not the identity of the decision-maker.

The research-use-only claim. Positioning software as research-use-only can keep it outside the MDR if the intended purpose is genuinely non-medical and every public claim matches. But once the public positioning, the marketing materials, or the clinical use pattern crosses into diagnostic or therapeutic claims, the research label does not rescue the classification. Rule 11 applies according to intended purpose, and intended purpose is established by what the manufacturer says across labelling, IFU, promotional materials, and the clinical evaluation under MDR Article 2(12).

The "informational only" framing. Founders try to move a classification down by describing the output as "informational" rather than "diagnostic" or "therapeutic". Rule 11 explicitly uses the word "information" — that framing does not escape the rule, it lands in the middle of it. If the information is used for a diagnostic or therapeutic decision, the branch applies.

The mixed-module platform. A platform that contains one medical module and several non-medical modules can often scope the regulatory work to the module rather than the whole platform. The boundary has to be drawn in writing, the interfaces specified, and the intended purpose of the medical module stated independently. MDCG 2019-11 Rev.1 addresses this explicitly and is the guidance a Notified Body will expect to see referenced in the scoping argument.

The AI output question. AI and ML outputs do not change Rule 11. The rule classifies by intended purpose and the ceiling of clinical harm, not by the algorithmic method that produced the output. A neural-network-based imaging interpretation tool classifies the same way a rule-based one with the same intended purpose would. What AI changes is the lifecycle discipline, the clinical evaluation expectations, the PMS around model drift, and the transparency the Notified Body expects — but the Rule 11 class itself is determined by the same questions as for any other software.

The Subtract to Ship angle — the intended purpose is the lever

Rule 11 is not a lever. The intended purpose is. Article 2(12) defines intended purpose as the use for which the device is intended according to the manufacturer's data on the label, in the instructions for use, in promotional and sales materials, and as specified in the clinical evaluation. The Rule 11 branch, the threshold, and the resulting class all depend on what the intended purpose actually is — not on the underlying technology, not on the engineering team's ambitions, not on the pitch deck.

Subtract to Ship applied to Rule 11 means writing the narrowest honest intended purpose that still describes the product you are actually shipping and the clinical problem you are actually solving. If the product genuinely does not need to make a surgical-intervention claim, do not make one. If the monitoring function is not genuinely for vital parameters on an immediate-danger timescale, do not describe it as if it is. If the output does not need to drive a decision that could cause irreversible harm, scope it so it does not. The goal is to match the regulatory claim to the product — no more, no less — so the class falls where the reality puts it.

The opposite trap is also common and it is where founders lose the most. An expansive, aspirational intended purpose written for the pitch deck and then copied into the technical documentation has already written the product into a higher class before any engineering has been done. Marketing language and regulatory language are not the same language. Keep them consistent — every public claim has to match — but keep the regulatory text tight. Over-claiming to impress investors and under-claiming to escape the Notified Body fail at different stages of the same project.

The lean approach is the same one that runs through every chapter of the book. Scope the intended purpose honestly and narrowly. Let Rule 11 classify the product on the basis of what it actually is. Build for that class. For the framework that sits behind this discipline, see post 065.

Reality Check — Can you defend your Rule 11 classification branch by branch?

  1. Have you confirmed your software qualifies as MDSW under MDR Article 2(1) through the MDCG 2019-11 Rev.1 decision tree, in writing, before applying Rule 11?
  2. Walking Rule 11's first sentence, does your software land in the decision-making branch, the monitoring branch, or the catch-all? Which branch and why, stated in one paragraph?
  3. In the decision-making branch, what is the worst plausible clinical consequence if the software output is wrong: bounded harm (IIa), serious deterioration or surgical intervention (IIb), or death or irreversible deterioration (III)? Can you defend the level with a specific clinical scenario?
  4. In the monitoring branch, are the parameters genuinely vital, and can variations genuinely result in immediate danger on a timescale of minutes? Both conditions, or just one? If just one, the class is IIa.
  5. If you believe you are in the Class I catch-all, have you written down why the first two branches do not apply and tried to falsify that argument against MDCG 2019-11 Rev.1?
  6. Have you cross-checked your classification against the worked examples in MDCG 2019-11 Rev.1 and identified the closest matching example as an anchor?
  7. If your product is a platform, have you drawn the module boundaries explicitly and scoped Rule 11 to the medical module rather than the whole platform?
  8. Does every public claim — website, app store listing, marketing materials, clinical evaluation — match the intended purpose you are relying on for your Rule 11 class?
  9. Is your EN 62304:2006+A1:2015 software safety class (A, B, C) consistent with the MDR device class you have reached under Rule 11?

Any question you cannot answer with a clear yes is a gap. Close it before the engineering team commits another sprint on top of the current assumption.

Frequently Asked Questions

What is the difference between the decision-making branch and the monitoring branch of Rule 11? The decision-making branch covers software intended to provide information used for diagnostic or therapeutic decisions — scores, classifications, recommendations, alerts that a clinician or patient uses to decide what to do. The monitoring branch covers software intended to monitor physiological processes — continuous or intermittent watching of a physiological signal over time. Some software sits in both; in that case, Rule 11 applies according to whichever branch captures the highest ceiling of clinical consequence from the full intended purpose.

Can a Rule 11 classification drop because of risk controls under EN ISO 14971:2019+A11:2021? No. Rule 11 is set by the intended purpose and the ceiling of clinical harm, not by how well the residual risk is controlled. Risk controls reduce residual risk inside the class you are in; they do not move you to a lower class. This is the single most common misconception among founders arriving from the MDD, where rule interpretations sometimes behaved differently in practice.

Does the monitoring branch have a Class III level? No. As written in Rule 11, the monitoring branch caps at Class IIb. A pure monitoring function cannot reach Class III through the monitoring branch alone. If a product's intended purpose combines monitoring with therapeutic decision support whose failure could cause death or irreversible deterioration, the product is assessed in the decision-making branch and Class III becomes reachable through that branch.

If my software supports a clinician who makes the final decision, am I still in the decision-making branch? Yes. Rule 11 is written to cover clinical decision support, and MDCG 2019-11 Rev.1 confirms this reading. The clinician being in the loop does not remove your software from the branch. The word "decisions" in the rule includes decisions made by clinicians using the software's output as one input among several.

How do I decide between IIb and III in the decision-making branch? Ask whether the worst plausible harm from acting on a wrong output is recoverable or permanent. Serious but recoverable harm (extended hospitalisation, significant but reversible functional loss, surgical intervention a patient recovers from) lands at IIb. Permanent, non-recoverable harm (death, permanent organ damage, permanent functional loss) lands at III. The worked examples in MDCG 2019-11 Rev.1 are the most reliable calibration tool for borderline cases.

Does Rule 11 treat AI and ML software differently from traditional software? No. Rule 11 classifies by intended purpose and the ceiling of clinical harm, not by the underlying algorithmic method. AI and ML features do not change the branch or the threshold. What AI changes is the lifecycle discipline, the clinical evaluation expectations, the post-market surveillance around model drift, and the transparency and documentation the Notified Body will expect. The EU AI Act adds a separate regulatory layer on top of the MDR for AI systems, but Rule 11 itself is determined the same way.

Does Class I software still need CE marking? Yes. Class I is a medical device class — it still requires a declaration of conformity, a QMS, technical documentation, PMS, and vigilance. What Class I typically does not require is Notified Body involvement in the conformity assessment for the device itself (self-declaration is available in most cases). That changes the cost and timeline meaningfully but does not remove the core obligations.

Sources

  1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 2 (points 1 and 12), Article 51 (Classification of devices), and Annex VIII (Classification rules), Rule 11. Official Journal L 117, 5.5.2017.
  2. MDCG 2019-11 — Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 — MDR and Regulation (EU) 2017/746 — IVDR. First published October 2019; Revision 1, June 2025. Published by the Medical Device Coordination Group, European Commission.
  3. EN 62304:2006+A1:2015 — Medical device software — Software life-cycle processes (IEC 62304:2006 + IEC 62304:2006/A1:2015).

This post is a spoke in the Device Classification & Conformity Assessment category of the Subtract to Ship: MDR blog, and a cross-cluster deep dive for the Software as a Medical Device Under MDR category. Authored by Felix Lenhard and Tibor Zechmeister. The MDR is the North Star for every claim in this post — MDCG 2019-11 Rev.1 is the authoritative interpretation of Annex VIII Rule 11, and EN 62304:2006+A1:2015 is referenced as the lifecycle tool that sits alongside the Rule 11 device class rather than replacing it. For startup-specific regulatory support on Rule 11 classification, module scoping, and the intended-purpose lever, Zechmeister Strategic Solutions is where this work is done in practice.