Clinical decision support software (CDSS) is a medical device under MDR when its intended purpose meets Article 2(1) and when it performs an action on patient data that produces information a clinician uses to take a diagnostic or therapeutic decision. The fact that a human clinician is "in the loop" does not remove the qualification. MDCG 2019-11 Rev.1 (June 2025) is explicit: clinician supervision does not, on its own, exempt decision-support software from medical device status. Once qualified, CDSS falls under Annex VIII Rule 11 and the floor is Class IIa — not Class I.
By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.
TL;DR
- Clinical decision support software is software that produces patient-specific information — scores, flags, alerts, recommendations — that a clinician uses to take a diagnostic or therapeutic decision.
- CDSS is qualified as a medical device under MDR Article 2(1) when the intended purpose is medical and the software acts on data beyond storage, communication, or simple search.
- The "human in the loop" argument — that clinician oversight exempts the software from medical device status — does not hold under MDCG 2019-11 Rev.1. Clinician review is a risk control, not a qualification filter.
- Once qualified, CDSS is classified under Annex VIII Rule 11. The default is Class IIa. Class IIb and III apply when wrong outputs could cause serious deterioration of health, surgical intervention, or death.
- MDCG 2021-24 (October 2021) gives the broader classification framework, and MDCG 2019-11 Rev.1 gives the software-specific decision tree and worked examples for CDSS.
- A narrow, honest intended purpose is the strongest lever a founder has. An expansive marketing claim is the fastest way to escalate the class.
What clinical decision support software actually is
Clinical decision support software covers a broad product category. The common element is that the software ingests patient data — lab values, imaging, vital signs, medication lists, genetic markers, structured EHR fields, free-text notes — and produces an output that a clinician uses while deciding what to do next for that patient. The output can be a risk score, a priority flag on a worklist, a differential diagnosis suggestion, a drug interaction alert, a treatment recommendation, a referral suggestion, an imaging finding overlay, or a threshold alarm.
Two properties distinguish CDSS from adjacent software categories. First, the output is patient-specific — it depends on data from one individual and it speaks to the care of that individual. Population-level dashboards, public-health analytics, and research cohort tools are not CDSS in the MDR sense. Second, the output is decision-relevant — it is produced for the purpose of influencing a care decision, not merely for the purpose of reporting a fact. A dashboard that shows a patient's most recent lab value is not CDSS. A dashboard that shows the same lab value next to a red flag because the value crosses a clinical threshold is CDSS, because the flag is the decision input.
Modern CDSS increasingly uses machine learning. That does not change the qualification question. MDR Article 2(1) and MDCG 2019-11 Rev.1 do not treat AI differently from rule-based software for qualification or classification purposes. An ML model producing a sepsis risk score and a hand-written SQL query producing the same score are the same category under Rule 11.
The qualification question — is the CDSS a medical device?
Qualification is binary. Either the software is a medical device under Article 2(1) or it is not. Article 2(1) defines "medical device" and names software explicitly:
"'medical device' means any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease..." — Regulation (EU) 2017/745, Article 2, point 1.
Three words in that sentence decide the CDSS question. "Intended" anchors the test to what the manufacturer says the software is for: labelling, IFU, promotional materials, the clinical evaluation. "Prediction" and "prognosis" are listed alongside diagnosis and treatment as medical purposes, which closes off one of the most common escape arguments: software that does not "diagnose" but does "predict" is squarely inside the definition.
MDCG 2019-11 Rev.1 provides the decision tree that Notified Bodies and Competent Authorities use to walk through this question consistently. The key steps for CDSS are: is it software; does it perform an action on data beyond storage, archival, lossless compression, communication, or simple search; is the action for the benefit of individual patients; and is the intended purpose one of the medical purposes listed in Article 2(1). A CDSS product that answers yes to all four is a medical device. A product that answers no to any one of them is not.
The place where CDSS products most often cross the line they did not mean to cross is the "action on data" step. Scoring, interpreting, classifying, alerting, recommending, and prioritising are all actions on data under MDCG 2019-11 Rev.1. Pure storage, pure display, and pure communication are not. Founders who describe their product as "just surfacing the data the clinician already has" often discover during the decision tree walk-through that the scoring layer or the prioritisation layer is exactly what takes them across the line.
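The four-step walk-through can be sketched as a simple decision function. This is an illustrative reading of the MDCG 2019-11 Rev.1 tree, not an official tool; the parameter names and the example flags are our own shorthand for the guidance's questions.

```python
def qualifies_as_mdsw(
    is_software: bool,
    acts_on_data: bool,          # scoring, interpreting, alerting, prioritising --
                                 # anything beyond storage, display, or simple search
    individual_patient: bool,    # output is specific to one patient's care
    medical_purpose: bool,       # diagnosis, prevention, monitoring, prediction,
                                 # prognosis, treatment, or alleviation of disease
) -> bool:
    """Illustrative sketch of the MDCG 2019-11 Rev.1 qualification steps.

    A 'no' at any step means the product is not a medical device under
    MDR Article 2(1); a 'yes' at every step means it qualifies.
    """
    return all([is_software, acts_on_data, individual_patient, medical_purpose])


# A sepsis risk score: software, scores the data, patient-specific, predictive purpose.
assert qualifies_as_mdsw(True, True, True, True) is True

# A static guideline library: software with a medical topic, but pure display/search.
assert qualifies_as_mdsw(True, False, True, True) is False
```

The point of the `all(...)` structure is the conjunction: the qualification argument has to survive every step, and "just surfacing data" fails only if the scoring and prioritisation layers genuinely are not there.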
The "human in the loop" criterion — and why it does not save you
The most common misconception we see in CDSS startups is the belief that clinician oversight exempts the software from medical device status. The logic goes: the doctor makes the final decision, so the software is only a reference tool, and reference tools are not medical devices. MDCG 2019-11 Rev.1 closes this argument off explicitly.
The guidance is unambiguous on this point. CDSS software that produces information used by clinicians to take diagnostic or therapeutic decisions is within scope of the MDR even when a clinician reviews and approves every output. The reasoning is that the software is designed and placed on the market for the specific purpose of informing the clinical decision. Whether the clinician follows the recommendation, overrides it, or ignores it is a question about risk and usage — it is not a question about qualification.
The distinction that matters is not clinician presence but whether the software's output is designed to inform the clinical decision at all. If the output is a reference document — a static guideline, an unfiltered literature database, an educational summary — and the clinician has to independently evaluate and apply it, the software may not qualify. If the output is patient-specific and is produced for the purpose of feeding a decision about that patient, the software qualifies regardless of how many clinicians review it.
Clinician review is a risk control. It reduces the probability that a wrong output reaches the patient. Under EN ISO 14971:2019+A11:2021, risk controls reduce residual risk within a class. They do not change the class, and they do not change the qualification. This is the same principle that applies to Rule 11 classification generally: the intended purpose and the ceiling of harm set the class, and risk controls operate inside it.
The practical consequence for founders: do not write "clinician in the loop" into the intended purpose in the hope that it reclassifies or exempts the product. Notified Bodies read that language as a risk control description, not a qualification argument, and they apply Rule 11 anyway.
When CDSS is a medical device and when it is not
CDSS is a medical device under MDR when all of the following hold. The software acts on patient data in a way that goes beyond storage, display, or pure search. The output is specific to an individual patient. The output is produced for the purpose of informing a diagnostic, prognostic, monitoring, predictive, preventive, or therapeutic decision. The manufacturer's stated intended purpose describes one of the medical purposes in Article 2(1).
CDSS is not a medical device under MDR when the opposite holds on any of those axes. A pure drug reference database that returns the same information to any clinician regardless of patient context is a reference tool, not CDSS. A guideline library that displays the relevant clinical guideline without patient-specific application is a reference tool. A training simulator that lets clinicians practise on fake cases is an educational tool. A population-health dashboard that aggregates hospital-wide statistics without producing individual-patient outputs is an administrative tool.
The borderline cases are where the real work sits. A CDSS that lives on the boundary between reference and decision support is in Rule 11 scope if the output is patient-specific and decision-relevant, and it is outside if either of those tests fails. MDCG 2019-11 Rev.1 addresses several recurring borderlines in its worked examples — CDS on drug interactions, on imaging workflow prioritisation, on risk scoring for chronic conditions — and those examples are the calibration tool when your own product sits near the edge.
MDCG 2021-24 (October 2021) provides the broader classification framework for medical devices; its role for CDSS is to place Rule 11 in the context of the other classification rules and the general structure of Annex VIII. For most CDSS products Rule 11 is the operative rule, and MDCG 2021-24 confirms that software-specific classification runs through Rule 11 rather than through the general active-device rules.
Classification under Rule 11 — the CDSS reality
Once CDSS qualifies, Annex VIII Rule 11 applies. For CDSS specifically, the decision-making branch of Rule 11 is almost always the operative one — the whole point of CDSS is to inform a clinical decision, which is exactly what the decision-making branch covers.
The Rule 11 hierarchy for CDSS:
Class IIa is the default for CDSS that informs diagnostic or therapeutic decisions with bounded clinical consequences. A recommendation that, if wrong, could cause harm short of serious deterioration of health, surgical intervention, death, or irreversible deterioration stays at IIa. This is where most CDSS products land.
Class IIb applies when the wrong output could cause serious deterioration of health or a surgical intervention. A CDSS that drives decisions about urgent surgical referral, about discharging or admitting an acutely unwell patient, or about selecting or withholding high-impact therapies is a IIb candidate.
Class III applies when the wrong output could cause death or irreversible deterioration. CDSS that informs dosing of high-risk therapies, CDSS that drives time-critical decisions in conditions where missed findings are fatal, and CDSS that interprets imaging for conditions where false negatives kill are Class III candidates. Class III is rare for CDSS but it is not empty.
Class I for CDSS is almost always the wrong assumption. Under Rule 11, reaching Class I requires an intended purpose that does not inform diagnostic or therapeutic decisions at all — which defeats the purpose of calling the product "decision support." The phrase "clinical decision support" in the product name is, by itself, strong evidence that Class I is not the right answer.
Rule 11 is indifferent to the algorithmic method. AI, machine learning, rule-based logic, statistical models — all classified the same way under Rule 11, because the rule depends on the intended purpose and the ceiling of harm, not on how the output is computed. Post 081 covers Rule 11 mechanics in full and post 085 covers the decision-making branch in depth.
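The escalation ladder above can be expressed as a small lookup. The harm categories below paraphrase the Rule 11 wording and the function is a sketch of the decision-making branch only; it is an aid for thinking through the ceiling of harm, not a substitute for reading Annex VIII.

```python
from enum import Enum


class WorstCaseHarm(Enum):
    """Paraphrased harm ceilings from Annex VIII Rule 11 (decision-making branch)."""
    DEATH_OR_IRREVERSIBLE = "death or irreversible deterioration of health"
    SERIOUS_OR_SURGICAL = "serious deterioration of health or surgical intervention"
    OTHER_HARM = "harm below the serious-deterioration threshold"


def rule_11_class(informs_decisions: bool, worst_case: WorstCaseHarm) -> str:
    """Sketch of the Rule 11 decision-making branch for CDSS.

    The class is set by the intended purpose and the ceiling of harm;
    risk controls such as clinician review do not appear anywhere here.
    """
    if not informs_decisions:
        # Catch-all: software that qualifies but informs no decision.
        # By construction, almost never true for anything called "decision support".
        return "I"
    if worst_case is WorstCaseHarm.DEATH_OR_IRREVERSIBLE:
        return "III"
    if worst_case is WorstCaseHarm.SERIOUS_OR_SURGICAL:
        return "IIb"
    return "IIa"  # the default landing zone for most CDSS


# A routine-review prioritiser with bounded consequences:
assert rule_11_class(True, WorstCaseHarm.OTHER_HARM) == "IIa"
# A CDSS driving urgent surgical referral decisions:
assert rule_11_class(True, WorstCaseHarm.SERIOUS_OR_SURGICAL) == "IIb"
```

Note what the function does not take as an input: the algorithmic method. An ML model and a rule-based score with the same intended purpose and the same worst case land in the same class.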
Examples by use case
Sepsis early-warning scoring across ICU patients. Patient-specific output, decision-relevant, medical purpose (prediction and monitoring of disease), action on data beyond display. Qualifies as medical device. Rule 11 decision-making branch. The ceiling of harm — missed sepsis can be fatal — argues for IIb at minimum and, depending on the intended purpose formulation, potentially III.
Drug interaction checker that flags potential contraindications at prescribing. Patient-specific (depends on that patient's current medication list), decision-relevant, medical purpose. Qualifies. Rule 11 decision-making branch. Class depends on the ceiling of harm — most products land IIa, but interactions that could cause fatal reactions push the argument upward.
Radiology worklist prioritisation that reorders the reading queue based on suspected urgency. Patient-specific, decision-relevant (the radiologist reads flagged studies first), medical purpose. Qualifies. Rule 11. Class depends on the clinical consequence of delayed reading — typically IIa, potentially IIb for time-critical conditions.
Imaging finding highlight overlay. Patient-specific, decision-relevant, medical purpose. Qualifies. Rule 11. Often IIb because wrong outputs — missed findings or false positives — can lead to serious deterioration or surgical intervention in the conditions where this software is deployed.
Static clinical reference library indexed by ICD code. Not patient-specific in the output. Reference tool, not CDSS in the MDR sense. Does not qualify as a medical device provided the intended purpose stays purely reference.
Patient-specific treatment recommendation engine for a chronic condition. Patient-specific, decision-relevant, medical purpose. Qualifies. Rule 11 decision-making branch. Class depends on the severity of consequences from wrong recommendations — commonly IIa to IIb.
Common founder errors on CDSS qualification
"The clinician makes the decision, so we are not a medical device." Addressed above. MDCG 2019-11 Rev.1 rejects this argument directly.
"We only show information, we do not recommend." Rule 11 covers "information used to take decisions." If the information is produced for the purpose of feeding a clinical decision, the branch applies. The word "recommend" is not the threshold.
"Our model was trained on published data, so we do not need clinical evidence." Training data has nothing to do with whether clinical evidence is required. Clinical evaluation under MDR Article 61 applies to every medical device, and the evidence required is independent of how the algorithm was built.
"We are a reference tool with some extra features." If the "extra features" include patient-specific scoring, alerting, or recommendation, those features are the medical device and the reference framing does not rescue them. Most CDSS qualification mistakes we see come from treating the decision-support layer as a bolt-on that inherits the reference-tool regulatory status of the rest of the product.
"The AI Act covers our AI, so we do not need MDR." The EU AI Act adds a layer on top of the MDR for AI systems. It does not substitute for the MDR. A CDSS that qualifies under Article 2(1) is a medical device under the MDR regardless of the AI Act's treatment of it.
"Our Class I assumption is based on how a similar product was regulated under the MDD." Rule 11 under the MDR is substantially different from the MDD software rules. Legacy MDD Class I positions are not benchmarks for MDR classification. Post 081 walks through the up-classification reality in full.
The Subtract to Ship angle for CDSS
The lever on CDSS classification is the same lever that works across all SaMD: the intended purpose. Article 2(1) and Rule 11 both depend on what the manufacturer says the software is for. Subtract to Ship, applied to CDSS, means writing the narrowest intended purpose that honestly describes the product and the market, and letting Rule 11 fall where it falls against that narrow purpose.
The common trap is that CDSS founders, under pressure to impress investors or partners, write expansive intended purposes that reach into clinical consequences the product does not actually need to reach. A product that could honestly be scoped as "supports clinicians in prioritising routine reviews" gets written up as "enables early detection of life-threatening deterioration," and the Rule 11 class escalates accordingly. The marketing text leaks into the regulatory text. The Notified Body reads both.
The discipline is to keep the regulatory intended purpose tight, keep the marketing consistent with it, and let the class reflect the product rather than the pitch. This is not gaming the rule — it is matching the claim to reality. Post 065 covers the Subtract to Ship framework in full and post 449 and post 450 cover the intended-purpose discipline for CDSS specifically.
Reality Check — where does your CDSS stand?
- Have you written down your CDSS intended purpose in one paragraph, in the form Article 2(12) expects, and does every public claim on your website, app store, and marketing materials match it?
- Have you walked your product through the MDCG 2019-11 Rev.1 qualification decision tree, in writing, and reached a defensible yes-or-no on qualification?
- Are you leaning on "clinician in the loop" as a qualification argument? If yes, rewrite the argument without it and see whether it still holds.
- If you believe you are in Rule 11, which branch applies — decision-making, monitoring, or catch-all — and can you defend the choice?
- What is the worst plausible clinical consequence if your CDSS output is wrong and is acted on? Does that map cleanly to IIa, IIb, or III?
- Have you cross-checked your classification against the worked examples in MDCG 2019-11 Rev.1 and against the framework in MDCG 2021-24?
- Does your Rule 11 argument survive if you remove every sentence describing risk controls, clinician oversight, and usage guardrails, leaving only the intended purpose?
Any question you cannot answer with a clear yes is a gap. Close it before the engineering team builds another sprint on assumptions that may not hold.
Frequently Asked Questions
Is clinical decision support software always a medical device under the MDR? No. CDSS is a medical device when its intended purpose falls within Article 2(1) and the software performs an action on patient data that produces individual-patient, decision-relevant output. Pure reference tools, pure educational tools, and pure administrative tools that do not produce patient-specific decision-relevant output can remain outside the MDR. The qualification depends on the intended purpose, not on the product name.
Does clinician oversight exempt CDSS from MDR? No. MDCG 2019-11 Rev.1 is explicit that clinician review does not remove the medical device qualification of software that produces decision-relevant output. Clinician oversight is a risk control that reduces residual risk within the class. It does not change the class and it does not change the qualification.
Can CDSS be Class I under MDR Rule 11? Very rarely. Rule 11 Class I applies to software that qualifies as a medical device but does not provide information used for diagnostic or therapeutic decisions. A product named or marketed as clinical decision support is, by definition, producing information for clinical decisions — which puts it in the IIa, IIb, or III branches, not the catch-all. Assume IIa until MDCG 2019-11 Rev.1 proves otherwise.
How does AI-based CDSS differ from rule-based CDSS under the MDR? It does not, for qualification or Rule 11 classification purposes. The MDR is method-neutral. What AI changes is the lifecycle discipline, the clinical evaluation expectations, the post-market obligations around model drift and monitoring, and the transparency expected by the Notified Body. The EU AI Act adds a separate regulatory layer on top of the MDR for AI systems.
What is the difference between MDCG 2019-11 Rev.1 and MDCG 2021-24 for CDSS classification? MDCG 2019-11 Rev.1 is the software-specific guidance and provides the qualification decision tree, the Rule 11 walk-through, and the worked examples for MDSW including CDSS. MDCG 2021-24 is the general classification guidance for medical devices and places Rule 11 in the context of the other classification rules. For CDSS work, MDCG 2019-11 Rev.1 is the primary reference and MDCG 2021-24 is the broader frame.
Related reading
- What Is Software as a Medical Device (SaMD)? — the SaMD pillar covering qualification before classification.
- SaMD vs SiMD — Software as a Medical Device vs Software in a Medical Device — the embedded-versus-standalone line that precedes CDSS qualification.
- MDCG 2019-11 Rev.1 — What the Software Guidance Actually Says — the full reading of the definitive guidance document.
- How to Apply MDR Classification Rule 11: Software as a Medical Device — the spoke on Rule 11 mechanics.
- Rule 11 Decision-Making Branch — The Deep Dive — the escalation walk-through for IIa, IIb, III.
- Rule 11 Monitoring Branch — The Deep Dive — the vital-parameter branch for CDSS with monitoring functions.
- AI/ML Medical Devices Under the MDR — the AI-specific layer on top of Rule 11.
- CDSS Clinical Evaluation Expectations — what MDR Article 61 asks of a CDSS file.
- Writing a Defensible Intended Purpose for CDSS — the intended-purpose discipline for decision-support products.
- CDSS Post-Market Surveillance and Model Drift — the PMS obligations specific to CDSS and AI.
- The Subtract to Ship Framework for MDR Compliance — the methodology pillar behind the intended-purpose discipline.
Sources
- Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 2, point 1; Annex VIII, Rule 11. Official Journal L 117, 5.5.2017.
- MDCG 2019-11 — Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 — MDR and Regulation (EU) 2017/746 — IVDR. First published October 2019; Revision 1, June 2025.
- MDCG 2021-24 — Guidance on classification of medical devices, October 2021.
This post is a spoke in the Software as a Medical Device Under MDR category of the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. The MDR is the North Star for every claim in this post — MDCG 2019-11 Rev.1 and MDCG 2021-24 are the authoritative interpretations of Article 2(1) and Annex VIII Rule 11 for clinical decision support software. For startup-specific regulatory support on CDSS qualification and Rule 11 classification, Zechmeister Strategic Solutions is where this work is done in practice.