A predictive analytics product becomes a medical device the moment its intended purpose includes diagnosis, prediction, prognosis, or treatment decisions for individual patients. Under MDR Annex VIII Rule 11, almost any such software lands in Class IIa or higher. The dividing line is not the math. It is the manufacturer's claim.

By Tibor Zechmeister and Felix Lenhard.

TL;DR

  • MDR Article 2(1) defines a medical device by its intended purpose, not its technology. A predictive model qualifies when the manufacturer claims a medical use.
  • Article 2(12) fixes intended purpose to what the manufacturer states in labelling, IFU, promotional materials, and the clinical evaluation.
  • MDCG 2019-11 Rev.1 confirms that software driving decisions about individual patients is a medical device, including predictive models.
  • Annex VIII Rule 11 pushes most predictive software to Class IIa, IIb, or III depending on the severity of the information it provides.
  • Output framing matters. "Prediction as information" and "prediction as decision support" both qualify once a medical intended purpose is claimed.
  • The only safe Class I path for analytics is a product with no individual-patient medical intended purpose at all.

Why this matters

A founder walks into the first regulatory meeting with a slide deck full of ROC curves and a pitch that begins "we predict sepsis 6 hours earlier." The slide after that explains the go-to-market plan: sell into ICUs by the end of the year. They think they have an analytics product. They have a Class IIa medical device, at minimum, and they have not started a technical file.

This is the most common trap in the predictive analytics corner of MedTech. The team builds a statistical model on retrospective data, calls it "decision support" or "analytics," and assumes that because the clinician "makes the final call" the software is somehow outside the MDR. That framing is wrong, and notified bodies have been rejecting it for years.

The regulation does not care about statistics. It cares about what the manufacturer claims the software does.

What MDR actually says

Article 2(1) of Regulation (EU) 2017/745 defines a medical device as any "instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of disease..." The word "prediction" is in the definition itself. The MDR added it explicitly in the 2017 text, closing the door on the argument that forecasting future clinical events sits outside regulation.

Article 2(12) defines intended purpose as "the use for which a device is intended according to the data supplied by the manufacturer on the label, in the instructions for use or in promotional or sales materials or statements and as specified by the manufacturer in the clinical evaluation." This is the single most load-bearing article for predictive analytics founders. Your website copy, your sales decks, your LinkedIn posts, and your CER all count as intended purpose evidence.

MDCG 2019-11 Rev.1 (June 2025 revision) is the guidance every software manufacturer has to read. It qualifies software as a medical device when it performs an action on data beyond storage, archival, communication, or simple search, and the action is for the benefit of an individual patient. A risk score, a deterioration alert, a readmission prediction, a length-of-stay forecast targeted at an individual patient — all qualify.

Annex VIII Rule 11 then decides the class. Software intended to provide information used to take decisions with diagnosis or therapeutic purposes is Class IIa. If those decisions may cause death or irreversible deterioration, it is Class III. If they may cause serious deterioration or surgical intervention, Class IIb. Software intended to monitor physiological processes is Class IIa, or Class IIb when monitoring vital physiological parameters where variations could result in immediate danger.

Read Rule 11 slowly and you see the cascade: once software qualifies as a device under Article 2(1), the floor is almost always Class IIa. Class I software under Rule 11 exists, but only when the software does not drive decisions and does not monitor physiological processes — effectively, when it has no medical intended purpose that creates risk.
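The cascade can be sketched as a small decision function. This is an illustration of the logic described above, not a substitute for reading Annex VIII or MDCG 2019-11; the field names and flags are assumptions made for this sketch, and a real classification assessment weighs the full rule text and guidance.

```python
from dataclasses import dataclass

# Illustrative encoding of the Annex VIII Rule 11 cascade.
# The flags below are a simplification for this sketch only.

@dataclass
class SoftwareProfile:
    drives_decisions: bool             # provides information used for diagnostic/therapeutic decisions
    decisions_may_cause_death: bool    # ...or irreversible deterioration of health
    decisions_may_cause_serious: bool  # serious deterioration or surgical intervention
    monitors_physiology: bool          # monitors physiological processes
    monitors_vital_params: bool        # vital parameters where variation means immediate danger

def rule_11_class(p: SoftwareProfile) -> str:
    if p.drives_decisions:
        if p.decisions_may_cause_death:
            return "III"
        if p.decisions_may_cause_serious:
            return "IIb"
        return "IIa"  # the floor for decision-driving software
    if p.monitors_physiology:
        return "IIb" if p.monitors_vital_params else "IIa"
    return "I"  # "all other software" — rare for predictive products

# Product B from the worked example below: per-patient deterioration
# prediction feeding decisions where a missed event may cause death.
product_b = SoftwareProfile(True, True, True, False, False)
print(rule_11_class(product_b))  # → III
```

Notice that the branch returning "I" is only reachable when every medical flag is false — which is exactly the point of the cascade: once the software qualifies and drives decisions, Class IIa is the floor.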

A worked example

Consider two products built on the same underlying model. Both use vital signs, labs, and nursing notes to output a single number between 0 and 100 representing probability of clinical deterioration within 12 hours.

Product A — "Hospital Ops Analytics Dashboard." Intended purpose on the label: "provides aggregated ward-level trend statistics for hospital operations planning. Not intended for individual patient assessment, diagnosis, or treatment decisions." The IFU repeats this. The sales deck talks about bed turnover and staffing. There is no per-patient alert. The product shows ward-level heatmaps, never patient-level scores. This is not a medical device. It does not claim a medical purpose for individual patients.

Product B — "Deterioration Early Warning System." Intended purpose: "predicts the probability of clinical deterioration within 12 hours for adult inpatients, to support clinical decision-making by ICU staff." The same model, but now the output is per-patient, the claim is prediction, the user is a clinician making a treatment decision. This is a medical device under Article 2(1). Under Rule 11, it provides information used to take decisions with therapeutic purposes, and because deterioration events may cause death or irreversible harm if missed, a notified body will almost certainly push this to Class IIb or Class III, not IIa.

Same math. Different intended purpose. Different regulatory universe. One sits outside the MDR entirely and carries no CE-marking obligation. The other needs a notified body, a clinical investigation strategy, and a technical file that can withstand Annex IX scrutiny.

Founders sometimes try to split the difference. "We'll sell Product B but claim Product A's intended purpose." MDCG 2019-11 Rev.1 explicitly addresses this. The assessment is based on what the software actually does, what the user interface presents, and what promotional materials say — not just the IFU. A claim of "operations analytics" on a product that shows per-patient risk scores to ICU clinicians will not survive contact with a notified body reviewer or a market surveillance inspector.

The Subtract to Ship playbook

The temptation with predictive models is to add features. More predictions, more targets, more clinical conditions. Subtract to Ship says: decide what your intended purpose is first, then cut everything that does not serve that single claim.

Step 1: Write the intended purpose statement before you write code. One paragraph. What clinical question does the model answer, for which patient population, used by which clinician, in which care setting, to support which decision? If you cannot write this paragraph, you do not have a product yet. You have a dataset.
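One way to force that discipline is to treat the statement as a structured record that refuses to render when a field is missing. This is a team-internal sketch, not a regulatory format; the field names and example values are assumptions, not prescribed MDR wording.

```python
from dataclasses import dataclass, fields

# A hypothetical internal checklist for the one-paragraph intended
# purpose statement. If you cannot fill every field, you have a
# dataset, not a product.

@dataclass(frozen=True)
class IntendedPurpose:
    clinical_question: str   # e.g. "the probability of clinical deterioration within 12 hours"
    patient_population: str  # e.g. "adult inpatients on general wards"
    intended_user: str       # e.g. "ICU nursing staff"
    care_setting: str        # e.g. "tertiary-care hospitals"
    supported_decision: str  # e.g. "escalation to intensive monitoring"

    def statement(self) -> str:
        # Refuse to render an incomplete intended purpose.
        for f in fields(self):
            if not getattr(self, f.name).strip():
                raise ValueError(f"intended purpose missing: {f.name}")
        return (
            f"Predicts {self.clinical_question} for {self.patient_population}, "
            f"used by {self.intended_user} in {self.care_setting} "
            f"to support {self.supported_decision}."
        )
```

The point of the frozen dataclass is that the statement is written once, early, and everything downstream (Rule 11 class, labelling, promotional copy) is checked against it rather than drifting independently.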

Step 2: Decide the Rule 11 class on paper, before the first sprint. Take your intended purpose statement and walk through Annex VIII Rule 11 line by line. If your prediction feeds therapeutic decisions that could cause death or irreversible harm if wrong, accept that you are building a Class III device and plan the budget. Do not build first and hope for IIa later.

Step 3: Align every piece of external communication with the intended purpose. Website, pitch deck, demo videos, conference posters, LinkedIn posts from the CEO, the grant application. Under Article 2(12), all of this is evidence of intended purpose. One enthusiastic tweet saying "our AI diagnoses sepsis" can reclassify your whole product. Founders lose entire quarters to this.

Step 4: Separate the non-medical product from the medical product if you need both. If you want a hospital-ops dashboard and a patient-level predictor, they are two products with two intended purposes, two technical files, two sets of labelling. Trying to ship them as one product with "modes" creates a classification mess.

Step 5: Build on EN 62304 and EN ISO 14971 from day one. MDR Annex I §17.2 requires software to be developed in accordance with the state of the art, taking into account the development life cycle and risk management; EN 62304 is the harmonised standard that demonstrates this. Predictive models are software. The training pipeline, the inference service, the monitoring layer — all in scope. Risk management under EN ISO 14971 is mandatory and must address the specific failure modes of statistical prediction: distribution shift, miscalibration, silent degradation.

Step 6: Do not confuse "clinician in the loop" with "not a medical device." The clinician making the final call does not remove your regulatory obligation. Clinical decision support is in scope when the software provides information used to take decisions. Rule 11 was written for exactly this situation.

Reality Check

  1. Can you state your intended purpose in one paragraph, naming the clinical question, the patient population, the user, and the decision the output supports?
  2. Does every one of your promotional materials, including founder social media, match that intended purpose statement word for word in its claims?
  3. Have you walked through Annex VIII Rule 11 with your intended purpose in hand and written down the class you land in?
  4. If your output is a probability, a score, or an alert targeted at an individual patient, have you accepted that you are building a medical device?
  5. Do you have an EN 62304-compliant software lifecycle plan, or are you still running training scripts in notebooks with no version control?
  6. Have you built a risk management file under EN ISO 14971 that explicitly addresses model failure modes: false negatives, false positives, distribution drift, miscalibration, automation bias?
  7. If a market surveillance inspector showed up tomorrow, could you defend your classification with documented reasoning?
  8. Have you separated any non-medical analytics product into a clearly distinct offering with its own intended purpose and its own UI?

Frequently Asked Questions

If a clinician always reviews the prediction, is the software still a medical device? Yes. Rule 11 explicitly covers software that provides information used to take decisions. The presence of a human in the loop does not remove the regulation. It only affects the risk analysis and the usability engineering under EN 62366-1.

Can I call my product "analytics" and avoid MDR? Only if the intended purpose genuinely has no individual-patient medical claim. The label does not drive the classification — the claimed use does. Article 2(12) makes all promotional material part of the intended purpose.

Our model is retrospective research. When does that become a device? The moment you claim it is intended to be used on current patients for a medical purpose, or you place it on the EU market with that intent. Research tools are outside MDR. Products are inside.

Is population-level prediction (hospital, ward) a medical device? Generally no, when the output is not attributable to and not used for individual patients. But the moment you add a per-patient drill-down or alerting, the qualification changes.

Does the model have to be AI or ML to fall under Rule 11? No. A regression model, a rule-based scoring system, or even a decision tree counts. Rule 11 is about software intended purpose, not algorithm type.

Can I start Class IIa and move to Class III later? You can change classification, but that means a new conformity assessment, potentially a new notified body procedure, and a gap analysis on the technical file. Starting with the honest class saves time.

Sources

  1. Regulation (EU) 2017/745 on medical devices, consolidated text. Article 2(1), Article 2(12), Annex VIII Rule 11.
  2. MDCG 2019-11 Rev.1 (June 2025) — Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745.
  3. EN 62304:2006+A1:2015 — Medical device software — Software life cycle processes.
  4. EN ISO 14971:2019+A11:2021 — Medical devices — Application of risk management to medical devices.