MDCG 2019-11 Rev.1 (June 2025) contains a set of worked software classification examples, each of which walks a product through MDR Article 2(1) qualification and Annex VIII Rule 11 classification and shows where it lands. For startups, these examples are the single best calibration tool for a Rule 11 argument: they show the reasoning a Notified Body is trained to accept, not just the outcome. This post walks through the example types that matter most for MedTech startups — imaging, cardiology, diabetes, dermatology, oncology planning, and decision support — explains what each one teaches, flags the common misapplications, and then walks a startup-specific example end to end.

By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.


TL;DR

  • MDCG 2019-11 Rev.1 (June 2025) is the current version of the Commission's software qualification and classification guidance. Any older copy is superseded.
  • The worked examples in the guidance are not decoration. They are the calibration set Notified Bodies use when they reason about borderline products.
  • The example types most relevant for MedTech startups are imaging, cardiology, diabetes management, dermatology, oncology planning support, clinical decision support, and multi-module platforms.
  • Every worked example runs the same procedure: state the intended purpose, apply the Article 2(1) qualification test, then apply Annex VIII Rule 11 and land on a class.
  • Most examples land at Class IIa or higher. The Class I catch-all in Rule 11 is narrow and the examples make that narrowness visible.
  • Founders misapply the examples most often by skimming the outcome instead of reading the reasoning, or by picking the example that matches the technology instead of the one that matches the intended purpose.
  • A startup's own classification memo should anchor to the closest worked example explicitly, not just reach the same conclusion by a parallel route.

How MDCG 2019-11 Rev.1 uses examples

The body of MDCG 2019-11 Rev.1 sets out the qualification logic for Medical Device Software (MDSW) under MDR Article 2(1) and the classification logic under Annex VIII Rule 11. The examples sit alongside that text to show the logic in motion on products that look like real software, not abstractions.

Each example follows a fixed shape. It opens with the intended purpose in the form MDR Article 2(12) expects — what the software does, for whom, in what clinical context, with what output. It runs the four-step qualification test: is it software, does it perform an action on data beyond storage and transmission, is the action for the benefit of an individual patient, and does the intended purpose fall inside Article 2(1). If the software qualifies as MDSW, it then runs Rule 11: decision-making branch, monitoring branch, or the "all other software" catch-all. It lands on a class and states the reasoning.

This shape matters because it is the shape your own classification memo should take. The examples are not just illustrations. They are the template for the argument your technical file needs. Notified Body reviewers are trained on this guidance, and a classification argument that mirrors its structure is much easier to accept than one that arrives at the same class by a different route. (MDCG 2019-11 Rev.1, June 2025.)
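To make that fixed shape concrete, here is a minimal decision-tree sketch of the procedure in Python. Everything in it is our illustrative shorthand: the Product fields, the HarmCeiling scale, and the qualify and classify_rule_11 functions are invented names, not terminology from the guidance, and the authoritative wording remains MDCG 2019-11 Rev.1 and Annex VIII itself.

```python
from dataclasses import dataclass
from enum import Enum

class HarmCeiling(Enum):
    """Worst plausible clinical consequence of a wrong output."""
    MINOR = 1               # no serious deterioration of health
    SERIOUS = 2             # serious deterioration or a surgical intervention
    DEATH_IRREVERSIBLE = 3  # death or irreversible deterioration

@dataclass
class Product:
    intended_purpose: str
    is_software: bool             # step 1: is it software?
    acts_on_data: bool            # step 2: action beyond storage and transmission?
    benefits_individual: bool     # step 3: for the benefit of an individual patient?
    medical_purpose: bool         # step 4: intended purpose inside Article 2(1)?
    informs_decisions: bool       # Rule 11: decision-making branch
    monitors_physiology: bool     # Rule 11: monitoring branch
    vital_param_immediate_danger: bool = False
    harm_ceiling: HarmCeiling = HarmCeiling.MINOR

def qualify(p: Product) -> bool:
    """Four-step qualification test: all steps must pass for MDSW status."""
    return (p.is_software and p.acts_on_data
            and p.benefits_individual and p.medical_purpose)

def classify_rule_11(p: Product) -> str:
    """Pick the Rule 11 branch, then escalate on the ceiling of harm."""
    if not qualify(p):
        return "not MDSW (qualification test failed)"
    if p.informs_decisions:  # decision-making branch: IIa by default
        if p.harm_ceiling is HarmCeiling.DEATH_IRREVERSIBLE:
            return "Class III"
        if p.harm_ceiling is HarmCeiling.SERIOUS:
            return "Class IIb"
        return "Class IIa"
    if p.monitors_physiology:  # monitoring branch: IIa, IIb only for vital parameters
        return "Class IIb" if p.vital_param_immediate_danger else "Class IIa"
    return "Class I"  # the narrow "all other software" catch-all
```

The point of the sketch is the order of operations: qualification gates everything, the branch is picked from the intended purpose, and the class within a branch follows only the ceiling of harm.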

Walking through the example types that matter to startups

The sections below walk through the example categories in MDCG 2019-11 Rev.1 that show up repeatedly in MedTech startup classification work. We are describing the shape and reasoning of each example type — not transcribing the guidance. Read the document itself for the exact wording before you write your own memo.

Imaging software — PACS, image processing, and CAD

The imaging examples cover software that handles medical images across a spectrum from pure storage and communication (PACS-style) to image processing to computer-aided detection and diagnosis. The guidance distinguishes clearly between the ends of the spectrum.

Software whose only action is archiving, retrieving, or lossless compression of images does not cross the action-on-data filter in the qualification test and is typically not MDSW at all. Software that processes images for display without producing clinical output — windowing, levelling, format conversion — sits at the low-intensity end of the spectrum and may not qualify as MDSW, depending on the exact intended purpose. Software that produces clinically relevant output from images — segmentation of a lesion, detection of a suspicious finding, quantification of a volume, classification of a pattern — qualifies as MDSW and enters Rule 11 in the decision-making branch at Class IIa. Escalation to IIb follows when the wrong output could cause serious deterioration of health or trigger a surgical intervention; escalation to III follows when the wrong output could cause death or irreversible deterioration.

What the imaging examples teach is the distinction between handling an image and interpreting an image. The qualification filter sits exactly at the point where interpretation starts, and the classification escalation follows the severity of the decisions the interpretation supports. (Regulation (EU) 2017/745, Annex VIII, Rule 11.)

Cardiology software — ECG interpretation and arrhythmia detection

The cardiology examples cover software that takes physiological signals — ECG traces, rhythm strips, haemodynamic streams — and produces outputs used in clinical reasoning. The pattern is instructive because cardiology software often straddles the decision-making branch and the monitoring branch of Rule 11.

Software that analyses a recorded ECG and outputs an interpretation — rhythm classification, morphology findings, automated measurements used by a clinician reading the trace — is in the decision-making branch. It is Class IIa by default, IIb if the decisions it informs could cause serious deterioration or a surgical intervention, and III if they could cause death or irreversible deterioration.

Software that continuously monitors a cardiac physiological parameter in real time and alerts on specified variations sits in the monitoring branch. It is Class IIa unless the parameter is vital and the variations could result in immediate danger, in which case it is IIb. Software that watches for a life-threatening arrhythmia in an acute setting and alerts clinicians is a IIb candidate in the monitoring branch.

A single product can contain both modes — an offline interpretation feature and a live monitoring feature — and the classification memo needs to reason about each mode separately and then state the overall class as the higher of the two.
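That combination rule is simple enough to state as code. A one-line sketch under the same assumed class labels as above (overall_class is our name, not the guidance's):

```python
CLASS_ORDER = {"Class I": 0, "Class IIa": 1, "Class IIb": 2, "Class III": 3}

def overall_class(mode_classes: list[str]) -> str:
    """A product with several modes or modules takes the highest class among them."""
    return max(mode_classes, key=CLASS_ORDER.__getitem__)

# e.g. offline ECG interpretation (IIa) plus live arrhythmia monitoring (IIb)
print(overall_class(["Class IIa", "Class IIb"]))  # -> Class IIb
```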

Diabetes management software — insulin calculators and dose advisors

The diabetes examples are among the most instructive for startups because they make the decision-making branch concrete. Software that takes patient inputs — glucose readings, carbohydrate estimates, activity data — and outputs an insulin dose recommendation is squarely in the decision-making branch. The output is information used to take a therapeutic decision. Rule 11 applies, the class starts at IIa, and the escalation question is about the ceiling of harm if the recommendation is wrong.

A bolus calculator that recommends a dose for routine mealtime use in a stable patient with clinician-set parameters is typically IIa. An advisor that operates in settings where a wrong dose could cause severe hypoglycaemia with acute neurological or cardiac consequences escalates to IIb. Software that is part of a closed-loop therapy where a wrong output could directly cause death or permanent harm escalates to III.

The diabetes examples also show what does not qualify or stays at Class I. A pure glucose logbook with no interpretation, no scoring, and no dose guidance may not qualify as MDSW at all — it is storage and communication. The moment the software adds interpretation or recommendation, it crosses into MDSW and into Rule 11.
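Both diabetes patterns fall straight out of the earlier sketch. The bolus_advisor and logbook objects below are hypothetical, and the harm ceilings are placeholders for the real clinical analysis a memo would have to document:

```python
from dataclasses import replace

# A hypothetical bolus advisor: decision-making branch, class set by the harm ceiling
bolus_advisor = Product(
    intended_purpose="Recommend an insulin bolus from glucose, carbohydrate and activity inputs",
    is_software=True, acts_on_data=True, benefits_individual=True,
    medical_purpose=True, informs_decisions=True, monitors_physiology=False,
)
for ceiling in (HarmCeiling.MINOR, HarmCeiling.SERIOUS, HarmCeiling.DEATH_IRREVERSIBLE):
    print(ceiling.name, "->", classify_rule_11(replace(bolus_advisor, harm_ceiling=ceiling)))
# MINOR -> Class IIa, SERIOUS -> Class IIb, DEATH_IRREVERSIBLE -> Class III

# A pure logbook never crosses the action-on-data filter, so Rule 11 is never reached
logbook = replace(bolus_advisor, acts_on_data=False, informs_decisions=False)
print(classify_rule_11(logbook))  # -> not MDSW (qualification test failed)
```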

Dermatology software — lesion classification

The dermatology examples cover software that takes images of skin lesions and outputs a classification or risk score — benign, suspicious, or some intermediate category used by a clinician or, in some products, directly by a patient. These are squarely in the decision-making branch of Rule 11. A wrong "benign" output for a lesion that is in fact malignant could delay diagnosis and cause serious or irreversible deterioration. The class is therefore at minimum IIa and frequently IIb depending on the exact intended purpose and clinical setting.

What this example type teaches is that the ceiling of harm in Rule 11 is assessed on the worst plausible consequence of a wrong output, not the typical consequence. A lesion classifier that is right most of the time can still be IIb if the failure mode is a missed melanoma.

Oncology planning support

The oncology examples cover software that supports treatment planning — dose calculations for radiotherapy, contouring support, treatment comparison tools. These are decision-making software in the most consequential sense and frequently land at IIb or III because the decisions they inform can cause death or irreversible deterioration if the software is wrong. The examples make the Class III end of Rule 11 concrete and show that it is not purely theoretical.

Clinical decision support — the general case

The clinical decision support (CDS) examples in the guidance confirm that CDS is inside Rule 11, not outside it. The old idea that "a clinician in the loop" drops software out of the decision-making branch is explicitly refuted by MDCG 2019-11 Rev.1. Software that provides information used by clinicians to take diagnostic or therapeutic decisions is in the decision-making branch regardless of how much the clinician reviews, overrides, or contextualises the output. The class follows the ceiling of harm, not the degree of human oversight. This is the example type most often misread by founders arriving from the US Cures Act Section 3060 world, where the CDS exemption logic is different. Under the MDR there is no equivalent carve-out.

Multi-module platforms

The guidance addresses products where a medical module sits inside a larger non-medical platform. The examples show that the classification memo has to scope the medical module precisely — its boundaries, its interfaces with the non-medical parts, and the data flows in and out. The module is classified under Rule 11 on its own terms. The surrounding platform is not dragged into the MDR if the boundaries are drawn well. This is the example type that most directly enables a Subtract to Ship scoping move for platform-style startups.

What each example teaches

Across the example types, the worked cases in MDCG 2019-11 Rev.1 teach the same set of lessons repeatedly.

The intended purpose is the lever. Two products with similar technology but different intended purposes can land on different branches of Rule 11 and different classes. The examples make this visible because they always start from the intended purpose and derive the class from it.

The ceiling of harm sets the escalation. The question inside Rule 11 is not "how often does the software fail" but "if it fails, what is the worst plausible clinical consequence". The examples escalate to IIb and III based on the consequence, not on the probability or on the presence of risk controls.

The Class I catch-all is narrow. Across all the example types, very few products land in the "all other software" bucket; most genuinely low-risk candidates fail the qualification test earlier and never reach Rule 11 at all.

Modules can be scoped. The examples that contain both medical and non-medical parts show that a clean module boundary is a legitimate regulatory move and the guidance supports it.

The reasoning is the argument. In every example, the class is defended by walking through the test. The class is not asserted; it is derived. Your own memo has to derive the class in the same way.

Common misapplications

Founders and inexperienced consultants misread the MDCG 2019-11 Rev.1 examples in predictable ways.

Picking the example that matches the technology instead of the intended purpose. An imaging startup may anchor to the "imaging software" example group even though the closest match for its intended purpose is actually a decision-support example from a different section. The technology is not the lever; the intended purpose is.

Skimming the outcome and skipping the reasoning. The worked examples exist to teach the reasoning pattern. A memo that says "this product is like example X, therefore Class IIa" is not a classification argument. A memo that runs the same four-step qualification test and the same Rule 11 walkthrough on the startup's product, and then notes the similarity to example X as corroboration, is.

Assuming the examples are exhaustive. The guidance cannot enumerate every possible product. If your software does not resemble any example cleanly, the examples still teach you the shape of the argument. You run the test yourself and document the reasoning in writing.

Using outdated examples. The June 2025 Rev.1 of MDCG 2019-11 updated and extended the examples compared with the October 2019 original. A classification memo that cites the older examples is working from a superseded text.

Ignoring the module boundary. When the guidance clearly supports module scoping, some founders still over-scope their classification by treating the whole platform as MDSW. This is expensive and often wrong.

Treating a CDS exemption mindset as valid under the MDR. CDS does not get a carve-out from Rule 11 in the EU. Founders who arrive from a US regulatory background and assume it does will write classification memos that collapse at first Notified Body contact.

A startup-specific example, walked end to end

Consider a realistic startup case. The product is a mobile application that takes a set of patient-specific inputs — a few patient-reported symptoms, some vital signs from a connected wearable, and a short questionnaire — and produces a triage recommendation for ambulatory patients with suspected acute conditions. The output is a category (low, medium, high urgency) and a suggested next step (home care, GP appointment, emergency department). The stated intended purpose is to support triage decisions by patients and by GP practice staff in a specific indication.

Walk it through the qualification test.

Is it software? Yes, a mobile application and a connected cloud service.

Does it perform an action on data beyond storage and transmission? Yes — it processes the inputs and produces a categorisation and recommendation. The action filter is crossed.

Is the action for the benefit of an individual patient? Yes — every output is a categorisation for a specific individual whose data was processed.

Does the intended purpose fall within MDR Article 2(1)? The intended purpose is to support decisions about whether an individual patient needs urgent care. That is a medical purpose — specifically, it touches on diagnosis (is this a condition that warrants attention?), on prediction (is this patient at risk of deterioration in the near term?), and on the choice between care settings. Yes.

The software qualifies as MDSW. Apply Rule 11.

Is it in the decision-making branch? The output is information — a triage category and a recommended next step — that is used by the patient or by practice staff to take a decision with a diagnostic or therapeutic purpose. Yes.

What is the ceiling of harm if the output is wrong? A false "low urgency" output on a patient who is actually deteriorating could delay care and cause serious deterioration. In some indications, that delay could cause death or irreversible deterioration. The escalation follows the indication.

For a triage product in a general-symptom indication, the class lands at IIb — decisions made on a wrong "low" output could cause serious deterioration or trigger an avoidable surgical intervention. For a triage product in an indication where a missed case is routinely fatal within the triage window, the argument pushes toward III.

The startup classification memo would state this reasoning in writing, anchor to the closest worked example in MDCG 2019-11 Rev.1 — in this case a decision-support example in the relevant clinical domain — and note the match explicitly. The result is a defensible IIb position that a Notified Body can review in the same structural frame the guidance uses.
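Pushed through the earlier sketch, the triage case reads as follows. The field values simply encode the reasoning above, and the SERIOUS ceiling reflects the general-symptom indication rather than the fatal-window one:

```python
triage_app = Product(
    intended_purpose="Support triage decisions for ambulatory patients "
                     "with suspected acute conditions",
    is_software=True,            # mobile app plus connected cloud service
    acts_on_data=True,           # categorises inputs into an urgency level
    benefits_individual=True,    # each output is for one specific patient
    medical_purpose=True,        # diagnosis, prediction, choice of care setting
    informs_decisions=True,      # output drives the next-step decision
    monitors_physiology=False,
    harm_ceiling=HarmCeiling.SERIOUS,  # a wrong "low" output could delay care
)
print(classify_rule_11(triage_app))  # -> Class IIb
# In an indication where a missed case is routinely fatal within the triage
# window, harm_ceiling would be DEATH_IRREVERSIBLE and the result Class III.
```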

The Subtract to Ship angle

The worked examples in MDCG 2019-11 Rev.1 are a Subtract to Ship tool in two distinct ways.

First, they shorten your classification argument. Instead of deriving a class from first principles every time, you run the test, anchor to the closest example, and the derivation is half done. Less work, same outcome, more defensible.

Second, they sharpen the intended purpose. Running your product through the example set forces you to decide which example it actually matches, and that decision usually exposes an intended purpose that is broader or vaguer than it needs to be. Tightening the intended purpose — honestly and narrowly, matching what the product actually does and the market it actually serves — is the single highest-leverage subtraction move in MDR work for software. The examples are the mirror that lets you see where the intended purpose is loose. For the methodology itself, see post 065.

Reality Check — Are you using the MDCG 2019-11 Rev.1 examples correctly?

  1. Do you have the June 2025 Revision 1 of MDCG 2019-11, not an older copy?
  2. Have you read the examples section in full, not just skimmed the parts that touch your product category?
  3. Have you identified which worked example most closely matches your intended purpose — not your technology?
  4. Does your classification memo walk the four-step qualification test in writing, in the same shape the examples use?
  5. Does your Rule 11 application explicitly state the branch (decision-making, monitoring, or catch-all) and the escalation reasoning?
  6. Is the ceiling of harm you have assessed consistent with the example you are anchoring to, or is there a gap you need to explain?
  7. If your product has multiple modules, have you scoped the medical module explicitly and classified it on its own terms?
  8. Has your classification memo been read by someone other than the founder who wrote it?

Any question you cannot answer with a clear yes is a gap. Close it before the Notified Body discussion begins, not during it.

Frequently Asked Questions

How many worked examples are in MDCG 2019-11 Rev.1? The guidance contains worked examples across several clinical and functional categories, including imaging software, cardiology software, diabetes management, dermatology, oncology planning support, clinical decision support, and multi-module platforms. The June 2025 Rev.1 extended and updated the example set compared with the original October 2019 version. Read the document itself for the exact current list before relying on a specific example in your memo.

Can I use an MDCG example directly as my own classification argument? No — you need to run the qualification test and the Rule 11 walkthrough on your own product, document the reasoning, and then note the similarity to the closest worked example as corroboration. Citing an example without running your own test is not a classification argument; it is an assertion.

Do the examples cover AI and machine learning medical software? The Rule 11 logic applies to AI software the same way it applies to any other software — the classification depends on intended purpose and ceiling of harm, not on the underlying algorithmic method. The examples relevant to your product are the ones that match the intended purpose, regardless of whether the implementation is rule-based or learned. The EU AI Act sits as a separate regulatory layer on top of the MDR for AI systems.

What if my software does not match any example? The examples are not exhaustive. If your product does not cleanly match any example, you still run the same qualification and Rule 11 procedure in writing. The examples teach the shape of the argument; the procedure is what you apply. A well-reasoned memo that acknowledges the absence of a close match and runs the test thoroughly is acceptable. An under-reasoned memo that forces a weak analogy is not.

Are the examples binding on a Notified Body? MDCG guidance is not legally binding in the way the MDR text is. In practice, Notified Bodies reason in line with MDCG guidance, including the worked examples, and a classification memo that contradicts a clearly applicable example without a strong, documented argument will not pass review. Treat the examples as authoritative for practical purposes while citing the MDR articles for the binding positions.

How often should I re-check the examples? Every time the MDCG publishes a revision of 2019-11, you re-check. The June 2025 Rev.1 superseded the October 2019 original. Treat the guidance as a living document and version-stamp every classification memo with the revision of MDCG 2019-11 it was written against.

Sources

  1. MDCG 2019-11 — Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 — MDR and Regulation (EU) 2017/746 — IVDR. First published October 2019; Revision 1, June 2025. Published by the Medical Device Coordination Group, European Commission.
  2. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 2, point 1 (definition of medical device); Annex VIII, Rule 11 (classification of software). Official Journal L 117, 5.5.2017.
  3. EN 62304:2006+A1:2015 — Medical device software — Software life-cycle processes (IEC 62304:2006 + IEC 62304:2006/A1:2015).

This post is a spoke in the Device Classification & Conformity Assessment category of the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. The MDR is the North Star for every claim in this post — MDCG 2019-11 Rev.1 is the Commission's operational reading of MDR Article 2(1) and Annex VIII Rule 11, and the worked examples in that guidance are the calibration set every software classification memo should anchor to. For startup-specific regulatory support on MDSW classification and on anchoring a technical file to the right MDCG examples, Zechmeister Strategic Solutions is where this work is done in practice.