An AI-driven remote patient monitoring platform is a medical device under MDR the moment it interprets patient data to produce alerts, triage scores, or clinical recommendations. Under Annex VIII Rules 10 and 11, RPM with AI triage typically lands in Class IIa or higher, with alarm reliability and data drift as the defining lifecycle obligations.
By Tibor Zechmeister and Felix Lenhard.
TL;DR
- AI-enabled RPM that produces clinical alerts, scores, or recommendations is a medical device under MDR Article 2(1).
- Classification follows Annex VIII Rule 11 for software, and Rule 10 for active devices intended for diagnosis and monitoring where the hardware sensor side is in scope.
- Monitoring vital physiological parameters where variations could result in immediate danger is Class IIb; general monitoring software is Class IIa.
- Alarm design and notification logic are safety-critical. A missed alarm is a serious incident under MDR Article 87.
- Data drift, model degradation, and alert-threshold changes are post-market obligations, not nice-to-haves, and must be in the PMS plan under Annex III.
- The lean path is a tightly scoped intended purpose, a small patient population, and a documented lifecycle that survives the first unannounced audit.
Why this matters
A team builds an RPM platform that ingests data from a wearable, runs an AI model to score deterioration risk, and pings a care team when the score crosses a threshold. The founders think of themselves as a software company. The first hospital pilot goes well. Then a patient has a cardiac event overnight. The alert did not fire. The care team asks for the technical file. The team realises they never had one.
RPM is where three regulatory worlds collide: the sensor hardware, the software analytics, and the clinical workflow. Each one has its own rules. The AI layer adds lifecycle obligations that did not exist when RPM was just passive dashboards. This is the category where founders most often underestimate the gap between "working demo" and "CE-marked product."
The reality is that AI RPM is one of the higher-risk software categories under MDR, and the regulation treats it that way.
What MDR actually says
Article 2(1) of Regulation (EU) 2017/745 qualifies software as a medical device when its intended purpose includes monitoring, diagnosis, prediction, prognosis, or treatment. "Monitoring" is in the definition explicitly. Any RPM product that claims to monitor a clinical condition for an individual patient is inside scope.
For classification, two Annex VIII rules come into play. Rule 10 covers active devices intended for diagnosis and monitoring. Active devices intended to allow direct diagnosis or monitoring of vital physiological processes are Class IIa, except when "they are intended for monitoring of vital physiological parameters, where the nature of variations of those parameters is such that it could result in immediate danger to the patient," in which case they are Class IIb. The wearable sensor side of an RPM platform is often assessed under Rule 10 when the hardware is placed on the market by the same manufacturer.
Rule 11 covers software itself. Software intended to provide information used to take decisions with diagnostic or therapeutic purposes is Class IIa; Class IIb if those decisions may cause a serious deterioration of health or a surgical intervention; Class III if they may cause death or an irreversible deterioration of health. Software intended to monitor physiological processes is Class IIa, and Class IIb when it monitors vital physiological parameters where variations could result in immediate danger to the patient.
MDCG 2019-11 Rev.1 clarifies how Rule 11 applies to RPM software. Software that simply stores and transmits sensor data without interpretation may not qualify as a device. Software that interprets that data to produce alerts, risk scores, or clinical recommendations does, and the class follows the severity of the decisions the information supports.
Annex I of the MDR contains the General Safety and Performance Requirements that every RPM device must meet. Section 14 covers devices with measuring function. Section 17 covers electronic programmable systems and software. Section 23 covers information supplied with the device, including alarm information. GSPR 14.1 requires devices with a measuring function to be designed so that they provide sufficient accuracy, precision, and stability for their intended purpose — a direct obligation for RPM sensor platforms.
MDR Article 83 requires a post-market surveillance system proportionate to the risk class and appropriate for the type of device. For AI RPM, PMS is not a paperwork exercise. It is the mechanism through which data drift, alarm reliability, and false-alert rates are tracked and fed back into risk management.
A worked example
Consider an RPM platform for heart failure patients. A patch measures ECG, activity, and respiration. A cloud service ingests the stream, runs an AI model, and raises a tiered alert when the model estimates elevated risk of decompensation. A care team reviews alerts within a defined window.
Under Article 2(1), this is a medical device: it monitors a clinical condition and produces information used for therapeutic decisions. Under Rule 10, the active sensor hardware is at minimum Class IIa, and because ECG-derived parameters in heart failure patients can involve variations that could result in immediate danger, a notified body will often assess the hardware side as Class IIb. Under Rule 11, the software that interprets the ECG and produces the deterioration alert is assessed for monitoring of vital physiological parameters where variations could result in immediate danger — Class IIb is the realistic landing point, and depending on the specific clinical claims, Class III has been applied.
The technical file must cover:
- the sensor measurement chain and its accuracy under GSPR 14.1,
- the EN 62304 software lifecycle for the AI component,
- the risk management file under EN ISO 14971,
- the usability engineering file under EN 62366-1 for the alert UI,
- the clinical evaluation linking the AI output to clinical benefit,
- the PMS plan with specific drift and alarm metrics, and
- the cybersecurity posture under EN IEC 81001-5-1:2022, because the platform is internet-connected.
Alarm design is treated as safety-critical. A missed alert for a patient who subsequently deteriorates is, in regulatory terms, a potential serious incident under MDR Article 87 and must be investigated, documented, and reported within the MDR timelines if the criteria are met. "The model's recall dropped because the population shifted" is not a defence. It is a finding.
The Subtract to Ship playbook
RPM wants to grow. More parameters, more conditions, more populations, more alerts. Subtract to Ship says the opposite: pick the smallest useful scope and make it ironclad before adding anything.
Step 1: Define a single condition, a single population, a single clinical pathway. "Adult heart failure patients in outpatient follow-up, NYHA II–III, monitored for decompensation risk, alerts routed to the treating cardiology team within 15 minutes." Specific beats broad. A specific intended purpose gives you a tractable clinical evaluation and a realistic Rule 11 decision.
Step 2: Decide classification honestly. Walk through Rule 10 for the hardware and Rule 11 for the software. If your alerts drive decisions that could cause serious harm when missed, accept Class IIb. Do not argue for IIa on the basis of "the clinician decides." The clinician decides on information your device produced.
Step 3: Build the alarm system as a named software item. Under EN 62304, the alerting subsystem — threshold logic, delivery path, acknowledgement handling, escalation — is a software item with its own requirements, architecture, verification, and integration tests. Document it as safety-critical. The failure modes are missed alerts, delayed alerts, duplicated alerts, and spurious alerts. Each is a named hazard in the risk file.
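As a minimal sketch of what "the alerting subsystem is its own software item" means in practice, the escalation logic below models raised, acknowledged, and escalated states with an acknowledgement window. The tier values, state names, and the 15-minute window are illustrative assumptions (the window echoes the example intended purpose in Step 1), not requirements from EN 62304 or MDR.

```python
from dataclasses import dataclass
from enum import Enum

class AlertState(Enum):
    RAISED = "raised"
    ACKNOWLEDGED = "acknowledged"
    ESCALATED = "escalated"

@dataclass
class Alert:
    patient_id: str
    tier: int            # hypothetical tiers: 1 = advisory, 2 = urgent
    raised_at: float     # epoch seconds
    state: AlertState = AlertState.RAISED

# Illustrative acknowledgement window; the real value comes from the
# intended purpose and the use-related risk analysis.
ACK_WINDOW_S = 15 * 60

def escalate_unacknowledged(alerts, now):
    """Escalate any raised alert whose acknowledgement window has elapsed.

    Escalation is itself a traceable requirement of the alerting software
    item: missed, delayed, duplicated, and spurious alerts are each named
    hazards, and this function is one of the controls that must be verified.
    """
    escalated = []
    for alert in alerts:
        if alert.state is AlertState.RAISED and now - alert.raised_at > ACK_WINDOW_S:
            alert.state = AlertState.ESCALATED
            escalated.append(alert)
    return escalated
```

The point of the sketch is structural: each state transition is a requirement with its own verification test, which is what makes the subsystem auditable as a named software item.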
Step 4: Design for data drift from day one. Document the training data population, the deployment population, and the expected drift over time. Define the metrics you will monitor post-market: model calibration, alert rate, positive predictive value, false-negative signals from incident data. Set thresholds that trigger investigation. Under MDR Article 83, these are PMS obligations.
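The drift metrics named above can be computed mechanically once they are defined. The sketch below is one illustrative way to do it, using the Brier score as a simple calibration proxy; the metric names, baseline structure, and thresholds are assumptions for the example, not values prescribed by MDR or any guidance.

```python
def drift_signals(scores, alerts_fired, confirmed_events, baseline):
    """Compare post-market metrics against pre-specified baselines.

    scores: model risk scores in [0, 1] per monitoring episode.
    alerts_fired / confirmed_events: 1/0 per episode.
    baseline: the thresholds fixed in the PMS plan before deployment.
    Returns the list of metrics that breached their threshold.
    """
    n = len(alerts_fired)
    alert_rate = sum(alerts_fired) / n
    true_pos = sum(1 for a, e in zip(alerts_fired, confirmed_events) if a and e)
    ppv = true_pos / max(sum(alerts_fired), 1)
    # Brier score: mean squared gap between score and outcome,
    # used here as a crude calibration indicator.
    brier = sum((s - e) ** 2 for s, e in zip(scores, confirmed_events)) / n

    findings = []
    if abs(alert_rate - baseline["alert_rate"]) > baseline["alert_rate_tol"]:
        findings.append("alert_rate")
    if ppv < baseline["ppv_floor"]:
        findings.append("ppv")
    if brier > baseline["brier_ceiling"]:
        findings.append("calibration")
    return findings
```

What matters for the regulator is not the specific statistic but that the thresholds are fixed in advance and a breach triggers a documented investigation rather than a silent retrain.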
Step 5: Make the PMS plan specific to AI. The PMS plan under Annex III should name the drift metrics, the data sources used to compute them, the review cadence, and the decision rules for when retraining or a change notification is required. Generic PMS plans fail for AI devices because the regulators now know the specific failure modes and expect specific monitoring.
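"Decision rules" in the PMS plan can be concrete enough to encode. The fragment below sketches one way to make drift findings route to pre-specified actions; the rule names, cadences, and action texts are invented for illustration, and a real plan must state its own under MDR Annex III.

```python
# Illustrative decision rules a PMS plan might encode. The metric names,
# review cadences, and actions here are placeholders, not regulatory text.
PMS_RULES = {
    "alert_rate": {"cadence_days": 7, "action": "clinical review of alert thresholds"},
    "ppv": {"cadence_days": 30, "action": "open CAPA investigation"},
    "calibration": {"cadence_days": 30, "action": "evaluate retraining; assess change significance"},
}

def actions_for(findings):
    """Map breached drift metrics to the actions pre-specified in the plan."""
    return [PMS_RULES[f]["action"] for f in findings if f in PMS_RULES]
```

Encoding the rules this way forces the plan to be specific: every monitored metric has an owner action and a cadence, which is exactly what a generic PMS template lacks.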
Step 6: Treat model updates as changes. Any change to training data, model weights, or threshold logic is a software change under EN 62304 and may be a significant change under MDR. A pre-authorised change control plan, where allowed by your notified body and aligned with current MDCG guidance on AI, lets you update within defined limits without triggering a new conformity assessment. Without such a plan, expect to notify the notified body.
Step 7: Build connectivity security under EN IEC 81001-5-1:2022. RPM is internet-connected. Cybersecurity is a GSPR obligation under Annex I §17.2 and §17.4, and MDCG 2019-16 Rev.1 sets the interpretation. A weak cloud path is a clinical safety issue, not an IT problem.
Step 8: Run realistic usability testing. EN 62366-1 use-related risk analysis for RPM must cover alert fatigue, handover between care-team members, off-hours response, and the "patient is on holiday" edge case. Missed alerts in real deployments usually trace back to workflow gaps the manufacturer did not test.
Reality Check
- Is your RPM intended purpose scoped to a specific condition, population, and care pathway, in writing?
- Have you walked Rule 10 and Rule 11 line by line and documented your classification reasoning?
- Is the alarm subsystem a named software item under EN 62304 with its own requirements, verification, and risk controls?
- Does your risk management file name missed alerts, delayed alerts, and spurious alerts as hazards with documented controls and residual risk assessment?
- Do you have a data drift monitoring plan with specific metrics, cadence, thresholds, and escalation rules?
- Is your PMS plan specific to AI and compliant with MDR Annex III requirements?
- Have you designed a change-control path for model updates that aligns with your notified body and current MDCG guidance on AI change management?
- Have you tested your alert workflow in realistic conditions including off-hours, handovers, and connectivity failures?
- Is your sensor hardware characterised for accuracy and stability under GSPR 14.1, with verification evidence that matches the claims you make to clinicians?
- Does your cybersecurity file under EN IEC 81001-5-1:2022 address the full data path from sensor to clinician UI?
Frequently Asked Questions
Does passive data transmission (no interpretation) count as a medical device? If the software genuinely only stores and transmits data without modifying or interpreting it, and the manufacturer claims nothing more, it may sit outside Rule 11 qualification. The moment it visualises the data, applies thresholds, aggregates it clinically, or raises alerts, it is a device.
If the AI only scores risk and a human decides, are we off the hook? No. Rule 11 explicitly covers software that provides information used to take clinical decisions. Human review is a risk control, not a regulatory exemption.
How do we handle model updates after CE marking? Under EN 62304 and MDR, model updates are software changes. Significant changes require notified body notification. A pre-authorised change control plan — aligned with current MDCG AI guidance — can define allowable automated updates within bounds, but must be agreed with the notified body in advance.
Is a missed alert a serious incident? It can be, under MDR Article 87, if it contributed to or could have contributed to serious deterioration or death. Missed alerts must be investigated even when no harm occurred, because they indicate a potential systemic issue in a safety-critical subsystem.
Can we start with wellness and move to medical later? Only with a clean break. The wellness product must genuinely have no individual-patient medical claim. Switching intended purpose to medical later requires a full conformity assessment and a new technical file.
What about cross-border deployment — does every country add requirements? The CE mark covers the EU, but national language requirements, vigilance reporting routes, and registration obligations vary. Plan the first three markets explicitly rather than assuming "EU" is a single deployment.
Related reading
- MDR Classification Rule 11 for Software — Rule 11 full text and worked examples
- MDCG 2019-11 Software Guidance — software qualification and classification guidance
- Drift Detection for AI Medical Devices — specific techniques and regulatory expectations
- Post-Market Surveillance for AI Devices — PMS plans tailored to AI failure modes
- Wearable Medical Devices Under MDR — sensor-side requirements for wearable hardware
Sources
- Regulation (EU) 2017/745 on medical devices, consolidated text. Article 2(1), Article 83, Article 87, Annex I, Annex III, Annex VIII Rules 10 and 11.
- MDCG 2019-11 Rev.1 (June 2025) — Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745.
- MDCG 2019-16 Rev.1 (July 2020) — Guidance on Cybersecurity for medical devices.
- EN 62304:2006+A1:2015 — Medical device software lifecycle processes.
- EN ISO 14971:2019+A11:2021 — Application of risk management to medical devices.
- EN 62366-1:2015+A1:2020 — Application of usability engineering to medical devices.
- EN IEC 81001-5-1:2022 — Health software and health IT systems safety, effectiveness and security.