Post-market surveillance for Software as a Medical Device has to satisfy the same obligations under MDR Articles 83 to 86 as any other device, but the operational content is different because SaMD fails in different ways. A classical PMS system built around complaint intake will not catch a version-specific regression, a telemetry anomaly, or a cybersecurity incident that never produces a user complaint. A PMS system designed for SaMD treats telemetry as a primary data source, tracks performance per deployed version, integrates cybersecurity events as safety signals, and ties everything back to the software maintenance process under EN 62304:2006+A1:2015. Under MDCG 2025-10 (December 2025), "appropriate for the type of device" means exactly this for software.

By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.


TL;DR

  • MDR Articles 83 to 86 apply identically to SaMD and hardware devices. The PMS system must be proactive, proportionate to the risk class, and appropriate for the type of device. For SaMD, "appropriate" means telemetry, version performance, and cybersecurity events are inside the plan.
  • SaMD fails in ways classical PMS does not watch for: a regression tied to one deployed version, a silent error rate shift after a backend change, a vulnerability disclosure that affects patient safety without producing a complaint.
  • Telemetry is the primary data source for SaMD PMS when it can be collected lawfully and proportionately. The PMS plan has to name the telemetry signals, the collection method, the analysis cadence, and the escalation thresholds.
  • Version performance has to be tracked per released version of the software, not only in aggregate. A bug introduced in v2.3 is invisible if the PMS system only sees totals across all versions.
  • Cybersecurity events — incidents, vulnerabilities, and relevant disclosures — are PMS signals under MDCG 2019-16 Rev.1, not a separate track. EN IEC 81001-5-1:2022 specifies the security activities across the product lifecycle, including post-market.
  • The PMS findings feed EN 62304:2006+A1:2015 software maintenance, EN ISO 14971:2019+A11:2021 risk management, and the clinical evaluation. A plan that does not close these loops is a plan that will not pass a serious Notified Body review.

Why SaMD PMS is different

Classical PMS is built around events a user notices. A sterilisation failure. A mechanical defect. A skin irritation from an arm strap. The pillar post on post-market surveillance under MDR walks through the baseline Articles 83 to 86 framework that applies to every device class, and the arm-strap story there is the canonical example of complaint-driven PMS doing its job.

SaMD fails differently. A regression slips into v2.3 of a diagnostic algorithm and the false-negative rate rises by two percentage points. No user complains, because no individual user sees enough cases to notice a statistical shift. A backend change to a third-party dependency alters how timestamps are parsed, and a small fraction of records are now misaligned. No user complains, because the misalignment is silent until someone audits the data. A cybersecurity vulnerability is disclosed in an upstream library the software depends on, and the risk to patients is real but no incident has yet occurred. No user complains, because the risk is latent.

Every one of these is a PMS signal the Regulation expects a SaMD manufacturer to surface. Article 83 requires the PMS system to be "appropriate for the type of device." A complaint inbox is not appropriate for a type of device whose primary failure modes are invisible to individual users. The SaMD PMS plan has to compensate for the gap by instrumenting the product itself and treating the instrumentation as the primary sensor.

This is the defining operational difference for SaMD PMS. The signal comes from the running software, not only from the users. For background on what SaMD is and how it is classified, see what is Software as a Medical Device under MDR.

Telemetry as a data source

Telemetry is the structured collection of operational signals from a running SaMD deployment. Used correctly, it is the most powerful PMS data source available for software. Used carelessly, it creates a data-protection and proportionality problem that can undo the regulatory benefit.

The PMS plan under Annex III has to name, for each telemetry signal, what is collected, how it is collected, where it is stored, how it is analysed, on what cadence, and what threshold triggers action. "We log errors" is a placeholder. "The frontend emits a structured event for every prediction request with anonymised identifiers, model version, input validation status, response latency, and outcome class; events are aggregated daily, metric thresholds trigger a CAPA entry when breached" is a specification.
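
What that specification looks like at the code level can be made concrete with a short sketch. The field names below are illustrative, not a prescribed schema; the point is that a pseudonymised identifier, the software version, the validation status, the latency, and the outcome class travel with every event.

```python
import json
import time
import uuid

def prediction_event(software_version: str, validation_ok: bool,
                     latency_ms: float, outcome_class: str) -> str:
    """Build one structured, pseudonymised telemetry event for a prediction request."""
    return json.dumps({
        "event_id": str(uuid.uuid4()),          # random identifier, no personal data
        "emitted_at": time.time(),              # when the event was generated
        "software_version": software_version,   # per-version tracking starts here
        "input_validation": "pass" if validation_ok else "fail",
        "latency_ms": latency_ms,
        "outcome_class": outcome_class,         # predicted class only, no patient data
    })
```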

The useful telemetry categories for most SaMD products fall into a short list:

  • Error and exception rates, broken down by module and by deployed version.
  • Response latency distributions, because latency regressions can degrade clinical workflow without producing obvious failures.
  • Input validation failures, because a rise in malformed inputs usually means something upstream has changed.
  • Output distribution statistics, because a shift in predicted class frequencies is a real-world signal even if no user flags it.
  • Feature usage metrics, because features that stop being used are features the workflow has routed around, which is itself a safety-relevant signal.
  • Authentication and access anomalies, because those cross into cybersecurity.
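
Named signals only become auditable when the thresholds are numeric. Here is a minimal sketch of a signal registry; the signal names, thresholds, and windows are hypothetical placeholders, and a real plan derives the numbers from the risk analysis rather than from this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TelemetrySignal:
    """One monitored signal with a numeric escalation threshold."""
    name: str
    threshold: float   # a number, not "a significant rise"
    window: str        # aggregation window the threshold applies to
    action: str        # what a breach triggers

# Hypothetical registry; real values are justified against the risk file.
SIGNALS = [
    TelemetrySignal("prediction_error_rate", 0.02, "daily", "open CAPA entry"),
    TelemetrySignal("input_validation_failure_rate", 0.05, "daily", "open CAPA entry"),
    TelemetrySignal("p95_latency_ms", 1500.0, "daily", "investigate and log in PMS record"),
]

def breached(observed: dict) -> list:
    """Return every signal whose observed value for the window exceeds its threshold."""
    return [s for s in SIGNALS if observed.get(s.name, 0.0) > s.threshold]
```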

The constraints are real. Telemetry collection has to respect GDPR and the device's data-protection commitments. The plan cannot collect personal data it does not need, cannot retain data beyond what the purpose justifies, and cannot hide collection from users. The proportionality test runs in parallel with the PMS rigour test: collect enough to catch the failure modes, not more. For on-premises SaMD where telemetry cannot be sent to the manufacturer, the plan needs an alternative — structured error reports, periodic summary exports, or contractual data-sharing with customer sites. No telemetry at all is a gap the PMS plan has to address with other data sources, not a gap the plan is allowed to ignore.

For the related discussion on what Articles 83 to 86 require structurally, see MDR Articles 83 to 86 — the PMS framework explained.

Version performance tracking

A SaMD product is never a single thing in the field. It is a set of versions deployed across customers, some of them on the latest release, some several releases behind, some on a customer-pinned build. A PMS system that reports "we saw 14 issues last quarter" without tying each issue to a version is reporting noise. Real SaMD PMS tracks performance per deployed version.

The PMS plan has to specify how version information enters the data. Each telemetry event carries the software version. Each complaint intake form captures the version the user is running. Each issue assessment traces the failure to a specific version range. The PMS Report under Article 85 for Class I devices, or the PSUR under Article 86 for Class IIa, IIb, and III devices, reflects version-specific findings — not only aggregates.

The point of version tracking is to make regressions visible. If error rates for module X were stable for six months and rose sharply the week after v2.3 shipped, v2.3 introduced a regression. That finding has to flow into change management and the software maintenance process under EN 62304:2006+A1:2015, and the risk file has to be reassessed under EN ISO 14971:2019+A11:2021 if the regression changes the hazard profile. Without per-version data, the signal is invisible and the system learns nothing from the release.
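
The comparison itself is small once per-version aggregation exists. Here is a sketch, assuming telemetry events already carry the software version; the fixed two-percentage-point margin echoes the example earlier in the post and is illustrative, since a real plan would justify the threshold and use a proper statistical test.

```python
from collections import defaultdict

def error_rates_by_version(events):
    """events: iterable of (software_version, is_error) pairs from telemetry."""
    totals, errors = defaultdict(int), defaultdict(int)
    for version, is_error in events:
        totals[version] += 1
        errors[version] += int(is_error)
    return {v: errors[v] / totals[v] for v in totals}

def regression_candidates(rates, baseline_version, margin=0.02):
    """Flag versions whose error rate exceeds the baseline by more than `margin`."""
    baseline = rates[baseline_version]
    return [v for v, rate in rates.items()
            if v != baseline_version and rate - baseline > margin]
```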

Version tracking also matters for the fleet. If 40% of customers are still on a version with a known defect, the PMS system has to know. The response might be an update push, a field safety notice, a change to the IFU, or in serious cases a field safety corrective action that triggers vigilance reporting under Articles 87 to 92. Fleet-level visibility is a PMS responsibility.
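
Fleet exposure is a one-function computation once a deployment registry exists, and the registry is the part most teams are missing. A sketch, assuming PEP 440-style version strings and the third-party packaging library; the registry contents, the specifier, and the 40% threshold are all illustrative.

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

def affected_share(deployments: dict, affected_range: str) -> float:
    """deployments: site -> deployed version; affected_range: e.g. '>=2.3,<2.5'."""
    spec = SpecifierSet(affected_range)
    hits = sum(1 for v in deployments.values() if Version(v) in spec)
    return hits / len(deployments)

# Illustrative registry and threshold; the response options are in the text above.
fleet = {"site-a": "2.3.1", "site-b": "2.2.0", "site-c": "2.4.0", "site-d": "2.3.0"}
if affected_share(fleet, ">=2.3,<2.5") >= 0.4:
    print("Known-defect exposure at or above 40% of sites: escalate per the PMS plan")
```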

For the companion discussion on how SaMD classification under Annex VIII Rule 11 shapes these obligations, see what is Software as a Medical Device under MDR.

Cybersecurity events as PMS signals

MDCG 2019-16 Rev.1 on cybersecurity for medical devices maps MDR Annex I Sections 14, 17, and 18 to cybersecurity topics, and the post-market chapter of that guidance is clear: cybersecurity events are PMS signals. Incidents, vulnerability disclosures, and relevant threat intelligence all feed into the same Article 83 PMS system, not into a parallel track that reports to a different owner.

EN IEC 81001-5-1:2022 specifies security activities across the product lifecycle for health software and health IT systems. The post-market activities include vulnerability monitoring, incident handling, and communication with users and competent authorities where required. For a SaMD manufacturer, these activities are not optional and they are not adjacent to PMS — they are part of it.

The implications for the PMS plan are concrete. Vulnerabilities disclosed in software components the product depends on — direct dependencies, transitive dependencies, runtime components — have to be monitored, with a defined cadence, a defined source list, and a defined assessment process when a vulnerability is flagged. Security incidents on the deployed product are logged, assessed for patient-safety impact, and escalated into vigilance reporting where the thresholds under Articles 87 to 92 are met. Relevant threat intelligence — information about attack patterns that affect similar products — feeds into the risk file and into change-management decisions.
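
What a defined cadence and a defined source list can look like in code: a sketch that queries the public OSV.dev vulnerability database for each pinned dependency. The SBOM entries are hypothetical, OSV is one possible source among several, and the assessment step that follows a hit is the part the PMS plan has to own.

```python
import json
import urllib.request

# Hypothetical SBOM excerpt: (ecosystem, package, pinned version).
SBOM = [("PyPI", "numpy", "1.26.4"), ("PyPI", "pillow", "10.2.0")]

def osv_vulns(ecosystem: str, package: str, version: str) -> list:
    """Query the OSV.dev API for known vulnerabilities affecting one pinned dependency."""
    body = json.dumps({
        "package": {"name": package, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

# Run on the cadence the PMS plan names; every hit enters the assessment process.
for eco, pkg, ver in SBOM:
    for vuln in osv_vulns(eco, pkg, ver):
        print(pkg, ver, vuln["id"])  # log into the PMS record, assess safety impact
```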

The failure mode is compartmentalisation. Engineering owns the CVE feed. Security owns the incident response. Regulatory owns the PMS plan. The three never talk. A vulnerability is disclosed, patched in engineering, and never enters the regulatory record. Then the Notified Body asks during a surveillance audit how the PMS system handles cybersecurity, and the manufacturer cannot produce the traceability. The fix is to wire the cybersecurity pipeline into the PMS plan explicitly, with a single traceability record.

For the deeper cybersecurity treatment, see MDCG 2019-16 and cybersecurity PMS for medical devices.

User feedback channels

Telemetry does not replace users. It complements them. A SaMD PMS plan still needs structured channels for users — clinicians, patients where relevant, and administrators — to report problems, ask questions, and flag concerns. The channels have to be obvious, reachable without friction, and monitored on a defined cadence.

For most SaMD products, the useful channels include an in-product feedback control that carries the relevant version and context automatically, a support email address that routes into the complaint intake system, a customer success contact for enterprise deployments where problems often surface through relationship channels, and a documented route for clinicians to report suspected incidents. Each channel has a named owner, a defined response time, and a logging process that routes into the same complaint database as the telemetry-derived events.

The mistake to avoid is treating user feedback as a separate silo from telemetry. A telemetry anomaly and a user complaint about the same behaviour are the same signal from two angles. The PMS plan should correlate them automatically — when a user reports an issue on v2.3, the analysis includes the telemetry for v2.3 around the same period — and the findings should be assessed together. Running two separate tracks that never meet is how regressions get missed.
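
The correlation needs no heavy machinery. A sketch, assuming complaint records and telemetry events both carry the software version and a timestamp, with the seven-day window as an arbitrary illustrative choice:

```python
from datetime import timedelta

def related_telemetry(events, complaint, window_days=7):
    """Return the telemetry slice for the complaint's version around the complaint date.
    `events`: iterable of dicts with 'software_version' and 'timestamp' (datetime);
    `complaint`: a dict with the same two fields, captured at intake."""
    lo = complaint["timestamp"] - timedelta(days=window_days)
    hi = complaint["timestamp"] + timedelta(days=window_days)
    return [e for e in events
            if e["software_version"] == complaint["software_version"]
            and lo <= e["timestamp"] <= hi]
```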

Integration with EN 62304 maintenance

EN 62304:2006+A1:2015 defines the software life-cycle processes for medical device software, including the software maintenance process. Maintenance is explicitly a life-cycle activity, not a post-market afterthought, and the standard specifies how problem reports are handled, how change requests are managed, how risk analysis is updated for changes, and how verification and validation are performed for modifications.

The integration point with PMS is precise. A problem report from the PMS system — a complaint, a telemetry anomaly, a cybersecurity event — enters the EN 62304:2006+A1:2015 problem resolution process. The problem is classified, the risk is assessed against the risk management file maintained under EN ISO 14971:2019+A11:2021, a change request is raised if a fix is needed, the change is implemented under configuration management, verification and validation are performed at the appropriate software safety class, and the fix is released. The PMS Report under Article 85 or the PSUR under Article 86 reflects the cycle: problem found, analysed, acted on, verified.
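
The cycle becomes auditable when a single record links the stages. A minimal traceability sketch; the field names are illustrative, not mandated by EN 62304, and a real QMS would hold this in its issue-tracking or eQMS tooling rather than in code.

```python
from dataclasses import dataclass

@dataclass
class ProblemTrace:
    """One PMS finding traced through the software problem resolution cycle."""
    finding_id: str                # PMS record: complaint, telemetry anomaly, security event
    source: str                    # where the signal came from
    risk_assessment_ref: str = ""  # risk file entry under EN ISO 14971, once assessed
    change_request_id: str = ""    # EN 62304 change request, if a fix is raised
    verification_ref: str = ""     # V&V evidence for the fix
    released_in: str = ""          # version that shipped the fix

    def open_stages(self) -> list:
        """Stages still missing evidence: exactly what an auditor will ask about."""
        gaps = {"risk_assessment_ref": self.risk_assessment_ref,
                "change_request_id": self.change_request_id,
                "verification_ref": self.verification_ref,
                "released_in": self.released_in}
        return [name for name, value in gaps.items() if not value]
```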

A PMS plan that does not name the EN 62304:2006+A1:2015 maintenance process as the handling pathway for software problem reports is a PMS plan that has not connected its outputs to the engineering reality of the product. This is the single most common integration gap in SaMD PMS files we review. Problems accumulate in the PMS database, never convert into EN 62304:2006+A1:2015 change requests, and the risk file drifts out of sync with the code. Notified Body auditors notice.

The feedback loop also runs forward. Changes raised through the maintenance process are assessed for regulatory impact. Significant changes trigger change notification to the Notified Body. Non-significant changes are logged under the QMS change-management process and reflected in the next PMS Report or PSUR cycle. The loop is closed by making sure every PMS finding has a deterministic path into maintenance, and every maintenance change has a deterministic path into the PMS record.

Common mistakes

The patterns we see most often when reviewing SaMD PMS plans:

  • No telemetry and no substitute. The plan assumes users will report what goes wrong, which is not how SaMD fails.
  • Telemetry without analysis cadence. The data is collected, stored, and never reviewed.
  • Aggregate-only metrics with no version breakdown. Regressions tied to specific releases are invisible.
  • Cybersecurity events tracked in a separate system that never enters the PMS record.
  • No link between PMS findings and the EN 62304:2006+A1:2015 maintenance process. Problems and changes run on different rails.
  • Thresholds described with adjectives instead of numbers. "A significant rise in errors" is not a threshold.
  • Complaint intake that does not capture the software version the user was running.
  • PMCF treated as inapplicable because "it is software," without the documented justification required by Annex III.

The Subtract to Ship angle

The Subtract to Ship framework for MDR applied to SaMD PMS produces the same discipline it produces everywhere else: build the smallest system that satisfies every Article 83 to 86 obligation for the device class AND catches the failure modes this specific software is exposed to. Everything beyond is waste. Everything less is a non-conformity.

For a Class IIa SaMD at small-team scale, the lean PMS looks like this. One telemetry pipeline with a named set of signals and a daily or weekly analysis cadence owned by a specific person. One complaint intake channel that captures the software version. One version performance dashboard reviewed on a defined cadence. One cybersecurity monitoring feed tied into the same complaint system. One PMS plan that names every Annex III element with the SaMD-specific monitoring methods in each section. One PMCF plan under Annex XIV Part B, or a documented justification for why it does not apply. One PSUR under Article 86 on the class-specific cadence, reflecting real telemetry and version data from the period. One feedback loop into the EN 62304:2006+A1:2015 maintenance process and into the risk management file under EN ISO 14971:2019+A11:2021.

That is lean. It is not minimal in the sense of skimping. Every element traces to a specific MDR article, a specific MDCG 2025-10 section, or a specific software-safety control in the risk file. What it does not include: dashboards nobody opens, PMS reports that copy the previous quarter, process documents describing activities the team does not actually perform, or elaborate monitoring surfaces built to impress auditors rather than catch regressions.

Reality Check — Where do you stand?

  1. Does your PMS plan name the telemetry signals you collect, with the collection method, analysis cadence, and escalation thresholds for each?
  2. Is every signal — telemetry, complaint, cybersecurity event — tied to the software version that was running when the signal was generated?
  3. Can you produce a per-version performance view of your product on request, or do you only have aggregate totals?
  4. Are cybersecurity vulnerability disclosures and incidents flowing into the same PMS record as complaints, or do they sit in a separate engineering-owned system?
  5. Is there a deterministic, documented path from every PMS finding into the EN 62304:2006+A1:2015 software maintenance process?
  6. Does your PMS plan include a PMCF plan under Annex XIV Part B, or a documented justification for why PMCF does not apply to your software?
  7. Does your complaint intake form capture the software version the user was running when the issue occurred?
  8. If a Notified Body asked for your last PSUR under Article 86 tomorrow, would it reflect real version and telemetry data, or would it be a copy-paste of the previous cycle?
  9. Have you read MDCG 2025-10 (December 2025) and MDCG 2019-16 Rev.1 end-to-end, or only skimmed them?

Frequently Asked Questions

Does MDR require telemetry for SaMD? Not in those words. MDR Article 83 requires a PMS system proportionate to the risk class and appropriate for the type of device. For SaMD whose primary failure modes are invisible to individual users, "appropriate" points toward instrumented data collection — telemetry where it can be collected lawfully, structured error reports or summary exports where it cannot. A PMS system that cannot see the software's actual behaviour in the field is not appropriate for SaMD.

Can I rely on complaint intake alone for a SaMD PMS system? Only if your product has no realistic way to collect operational data and you can justify that constraint in the plan. For most cloud or hybrid SaMD, complaint-only PMS is insufficient because the software's most common failure modes do not produce complaints. Annex III expects the plan to describe methods appropriate to the device; "we wait for users to complain" is not an appropriate method for software.

How does cybersecurity PMS relate to PMS under Article 83? Cybersecurity events are PMS signals under MDCG 2019-16 Rev.1 and post-market activities are specified in EN IEC 81001-5-1:2022. The Article 83 PMS system is the place where cybersecurity incidents, vulnerabilities, and threat intelligence are collected, assessed, and acted on, alongside other PMS data. They are not a parallel regime.

What is the difference between PMS for SaMD and PMS for an AI medical device? A lot of the mechanics overlap — telemetry, version tracking, reference evaluation — but AI devices have an additional failure mode: silent drift when the input distribution shifts even though the model has not changed. Post-market surveillance for AI devices covers the AI-specific drift detection layer that extends the SaMD PMS framework described here.

How does version tracking interact with change control? Every new version is a change, and significant changes trigger notification to the Notified Body under the change-management framework. The PMS system tracks the performance of each version in the field; change control decides when a change warrants notification. The two are different processes that share a common data source — the version record — and they have to stay consistent with each other.

Is PMCF required for SaMD? PMCF under Annex XIV Part B is required unless the manufacturer provides a documented justification for why it does not apply. For most SaMD at Class IIa and above, a clinical follow-up plan is expected, because clinical performance in the intended use population has to be confirmed in real-world use — not only at certification. "It is software" is not a justification on its own.

How often should the SaMD PMS findings be reviewed? The cadence depends on the risk class and the failure-mode profile. Monthly telemetry review is common for active SaMD products. The PMS Report or PSUR is updated on the class-specific cadence set by Articles 85 and 86. The plan specifies the review cadence, the risk file justifies it, and the team actually executes it.

Sources

  1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices — Article 83 (post-market surveillance system), Article 84 (post-market surveillance plan), Article 85 (PMS Report for Class I), Article 86 (PSUR for Class IIa, IIb, III), Annex I Section 17 (software lifecycle requirements), Annex III (technical documentation on post-market surveillance), Annex XIV Part B (post-market clinical follow-up). Official Journal L 117, 5.5.2017.
  2. MDCG 2025-10 — Guidance on post-market surveillance of medical devices and in vitro diagnostic medical devices. Medical Device Coordination Group, December 2025.
  3. MDCG 2019-11 Rev.1 — Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 — MDR and Regulation (EU) 2017/746 — IVDR. First publication October 2019; Revision 1, June 2025.
  4. MDCG 2019-16 Rev.1 — Guidance on Cybersecurity for medical devices. Medical Device Coordination Group, December 2019; Revision 1, July 2020.
  5. EN 62304:2006+A1:2015 — Medical device software — Software life-cycle processes (IEC 62304:2006 + IEC 62304:2006/A1:2015).
  6. EN ISO 14971:2019+A11:2021 — Medical devices — Application of risk management to medical devices.
  7. EN IEC 81001-5-1:2022 — Health software and health IT systems safety, effectiveness and security — Part 5-1: Security — Activities in the product life cycle.

This post is part of the Post-Market Surveillance and Vigilance series in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. SaMD is where the gap between a generic PMS template and a PMS system that actually catches software failures is widest, and it is exactly where a sparring partner who has walked other SaMD founders through the same design earns their keep.