Reactive post-market surveillance waits for something to happen — a complaint, a return, a serious incident — and then responds. Proactive post-market surveillance goes looking for signals before anyone calls you: literature searches, registries, user surveys, similar-device monitoring, and post-market clinical follow-up. Article 83(2) of Regulation (EU) 2017/745 (the MDR) requires manufacturers to "actively and systematically" gather, record, and analyse data on the quality, performance, and safety of the device throughout its entire lifetime. That word "actively" is what makes both halves non-negotiable. A PMS system that only runs the reactive half is not an MDR-compliant PMS system — it is a complaint desk with a label on it.

By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.


TL;DR

  • Reactive PMS responds to signals that arrive on their own: complaints, returns, incidents, service records, field safety corrective actions on your own device.
  • Proactive PMS generates signals that would not arrive otherwise: literature monitoring, registries, user surveys, similar-device monitoring, and PMCF under Annex XIV Part B.
  • MDR Article 83(2) requires both. The word "actively" in the Regulation is what rules out a complaint-only PMS system.
  • Annex III, Section 1.1 of Regulation (EU) 2017/745 names proactive sources directly, including information from similar devices on the market and publicly available state-of-the-art evaluations.
  • MDCG 2025-10 (December 2025) is the current operational guidance and describes how proactive and reactive methods work together in the PMS plan.
  • A small team can run a credible proactive programme. What it cannot do is run an elaborate one. Pick two or three proactive methods, run them on a real cadence, and document the runs.

The arm strap, revisited — why reactive alone was not enough

In the PMS pillar post we described a sleep-monitoring device with an arm strap that passed biocompatibility under EN ISO 10993-1 before launch and then started producing skin irritations in real-world use. The complaints were the reactive signal. They arrived on their own, because users were uncomfortable and said so.

Here is the part we did not belabour in that post. The pattern would have been visible earlier if the PMS system had been running a proactive literature search on textile-polymer skin contact under prolonged wear, and a proactive similar-device review on wearables with comparable materials. Neither search was running at launch. The reactive channel caught the signal eventually, and the corrective action closed the loop — but closing a loop four months after the signal first existed in the published literature is not the same as closing it the week the signal appears.

That gap — the gap between "we are waiting for users to tell us" and "we are looking for signals wherever they live" — is the gap between reactive and proactive PMS. MDR Article 83(2) closes that gap by requiring the system to be active. This post is about how you actually run the active half without pretending you have a research department.

What "reactive" means, precisely

Reactive methods process information that arrives because something went wrong, something broke, someone was unhappy, or someone was harmed. The defining characteristic is that the signal comes to you.

There are four main reactive sources:

Complaints. User complaints, healthcare professional complaints, distributor complaints, and complaints that arrive through sales channels. An intake process logs each one, a workflow assesses it, and a decision records whether it becomes a vigilance case. For the mechanics, see complaint handling under MDR for startups.

Vigilance events. Serious incidents and field safety corrective actions that trigger the reporting obligations under Articles 87 to 92. MDCG 2023-3 Rev.2 (January 2025) is the current guidance on the terms and concepts involved. Vigilance is reactive by definition — the event happened, and the clock is running.

Returns and service records. Devices that come back for repair, replacement, or destruction produce data. Every return is a signal about how the device performed in the field, even when the return was handled under warranty and nobody filed a formal complaint. Service logs are often the most under-used reactive source at startups.

Trend analysis on reactive data. Annex III, Element D directs the plan to identify any statistically significant increase in frequency or severity of incidents under Article 88. Trend analysis runs on the reactive data that has already been collected; it is the analytical step that turns a pile of complaints into a signal. For the specifics, see trend reporting under MDR Article 88.
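To make the analytical step concrete, here is a minimal sketch of a trend check in the spirit of Article 88, assuming complaint counts are logged per quarter alongside units shipped. The Poisson upper control limit at roughly three sigma is an illustrative statistical choice, not a method prescribed by the Regulation, and the function and field names are hypothetical.

```python
# Hedged sketch: flag a statistically significant increase in complaint
# frequency by comparing the current period against a baseline rate.
# The 3-sigma Poisson control limit is an illustrative threshold choice.
import math

def complaint_rate_alarm(baseline_counts, baseline_units,
                         current_count, current_units, sigma=3.0):
    """Return (alarm, expected, upper_limit) for the period under review.

    baseline_counts / baseline_units: historical complaints and units shipped.
    current_count / current_units: the period being checked for a trend.
    """
    baseline_rate = sum(baseline_counts) / sum(baseline_units)  # complaints per unit
    expected = baseline_rate * current_units                    # expected complaints this period
    # Normal approximation to the Poisson upper control limit
    upper_limit = expected + sigma * math.sqrt(expected)
    return current_count > upper_limit, expected, upper_limit

# Example: four quarters of stable baseline, then a spike in the fifth
alarm, expected, limit = complaint_rate_alarm(
    baseline_counts=[3, 2, 4, 3],
    baseline_units=[1000, 1100, 1050, 1150],
    current_count=12,
    current_units=1200,
)
```

The point of the sketch is the structure, not the statistics: a trend analysis needs a defined baseline, a defined threshold, and a defined period — whichever statistical method the plan names.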

A PMS system that only runs these four activities is running the reactive half. Competent, necessary — and insufficient on its own.

What "proactive" means, precisely

Proactive methods generate information that would not arrive on its own. The defining characteristic is that you go get it.

There are five main proactive sources:

Literature surveillance. A scheduled search of scientific and medical databases for publications relevant to the device, the technology, the clinical application, and the adverse-event profile. A literature surveillance row in the PMS plan names the databases, the search strings, the cadence, and the reviewer. The output is a log of hits, the triage decisions, and any findings that fed into the risk file or the clinical evaluation. Annex III, Section 1.1 names "publicly available information about similar devices and state-of-the-art evaluations" as a required source — literature surveillance is how that source is operationalised.
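As a sketch of what one documented run could capture, the record below mirrors the plan-row elements named above — databases, search strings, reviewer, and the triage output. The field names and record structure are illustrative assumptions, not a prescribed format.

```python
# Hedged sketch of one literature-surveillance run record. Field names
# are hypothetical; the substance (exact strings, named reviewer, triage
# decisions, findings fed forward) is what makes the run repeatable and
# defensible at audit.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LiteratureRun:
    run_date: date
    reviewer: str                      # the named owner who signs the record
    databases: list[str]               # the databases named in the plan row
    search_strings: list[str]          # exact strings, so the next cycle repeats them
    hits: int = 0
    triage_decisions: dict[str, str] = field(default_factory=dict)  # citation -> decision
    findings_fed_forward: list[str] = field(default_factory=list)   # risk file / CER updates

# Example run (all values illustrative)
run = LiteratureRun(
    run_date=date(2026, 4, 1),
    reviewer="Quality Lead",
    databases=["PubMed", "Embase"],
    search_strings=['"textile polymer" AND "skin irritation" AND wearable'],
    hits=7,
    triage_decisions={"PMID:000001": "relevant - forwarded to risk file"},
    findings_fed_forward=["Occurrence re-estimate for hazard H-12"],
)
```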

Registries. Where a device-specific or disease-specific registry exists — joint registries, implant registries, cardiovascular registries, national registries in specific Member States — the manufacturer either participates directly or reviews published registry data on a defined cadence. For many startups registries do not apply, but when they do, they are among the highest-signal proactive sources available.

User surveys. Structured surveys of users, healthcare professionals, or patients, executed on a defined cadence, with a defined instrument and a defined sample. Unlike complaints, user surveys ask everyone, not only the people who were unhappy enough to call. A well-designed survey surfaces satisfaction gaps, usability frictions, and near-misses that the complaint channel never sees.

Similar-device monitoring. Regular review of FSCAs, recalls, and published vigilance data on devices with comparable technology, comparable intended purpose, or comparable patient population. Public recall databases, competent-authority bulletins, and manufacturer field notices are the inputs. The point is not to copy a competitor's corrective action — it is to learn that a hazard the risk file had rated low has appeared in a comparable device and therefore deserves re-evaluation.

PMCF. Post-market clinical follow-up under Annex XIV Part B of the MDR. PMCF is the clinical arm of proactive PMS and has its own plan and report. The PMCF plan specifies the clinical data to be collected after placing on the market, the methods (follow-up studies, registries, targeted literature reviews, user feedback on clinical aspects), and the cadence. For the startup-specific mechanics see PMCF under MDR — a guide for startups.

Annex III, Section 1.1 requires the plan to describe the processes for collecting the data from these sources — not only the reactive complaint stream. A plan that silently skips literature, similar-device, and PMCF fails Element A and fails the "actively and systematically" standard in Article 83(2).

Why MDR expects both halves

Article 83(2) says the PMS system must actively and systematically gather, record, and analyse relevant data on the quality, performance, and safety of the device throughout its entire lifetime. Two words in that sentence do the work.

"Actively" rules out a reactive-only system. A system that only processes information when a user complains is passive, not active. The Regulation uses the word deliberately. An auditor reading a PMS plan with ten reactive rows and zero proactive rows will ask how the plan satisfies the active requirement.

"Systematically" rules out an ad-hoc system. Active does not mean "we look when we feel like it." It means scheduled, cadenced, owned, documented. Proactive sources fail at audit not because they are absent but because they are sporadic — a literature search done once, not on a cadence; a similar-device review conducted because someone remembered, not because the calendar triggered it.

Between "active" and "systematic," the Regulation draws a clean line around the kind of system it expects. Reactive methods alone fail "active." Proactive methods without cadence fail "systematic." A compliant plan runs both, on schedule, with owners.

MDCG 2025-10 (December 2025) reinforces this in the operational guidance. The guidance describes the main PMS activities — data collection, assessment, and conclusions — and treats proactive and reactive methods as parallel inputs that both feed the assessment step. A system that skips one input is a system that misses the signals that input was designed to catch.

Balancing both halves in a small team

The first objection every founder raises is real: we are six people, we cannot run a literature surveillance programme and a registry review and a PMCF study and survey our users and monitor similar devices and handle complaints. The answer is not "yes you can, if you try harder." The answer is calibration.

Run a small number of proactive methods, on a real cadence. For a low-risk Class I device, two proactive methods may be enough: literature surveillance on a quarterly cadence and similar-device monitoring on a semi-annual cadence. For a Class IIa device, add user surveys at launch plus a twelve-month post-launch window. For a Class IIb or Class III device, PMCF becomes non-optional and the depth of every other method increases.

Name the owner. Each proactive method has one owner — not "the team." At a six-person startup, that owner is typically the quality lead or the regulatory lead, and sometimes the clinical lead for PMCF-adjacent activities. The owner runs the cadence and signs the record.

Write the search strings and the databases down. Literature surveillance that does not document the exact search strings and the databases used is not a credible literature surveillance programme. The documentation is what makes the next cycle repeatable and what makes the output defensible at audit.

Budget the time explicitly. Two hours a quarter for literature surveillance. Four hours a quarter for similar-device monitoring. One structured user survey per year with a defined sample. The total is measured in days per year, not weeks per month. A small team can absorb that.

Let the reactive channel run in parallel. Reactive methods still need an intake process, a logging system, a review cadence, and an escalation rule. The proactive programme does not replace the reactive programme — it sits alongside it.
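The calibration steps above reduce to a check the quality lead can run against the plan itself: every proactive row has an owner, a cadence, and a last-run date, and any row that is missing a run or overdue gets flagged. The sketch below assumes a simple row structure and two cadences; both are illustrative, not a prescribed plan format.

```python
# Hedged sketch: flag proactive PMS plan rows that are aspirational
# (never run) or overdue against their named cadence. Row structure
# and cadence values are illustrative assumptions.
from datetime import date, timedelta

CADENCES = {"quarterly": timedelta(days=92), "semi-annual": timedelta(days=183)}

def overdue_rows(plan_rows, today):
    """Return (method, reason) for every proactive row that is not operational."""
    flagged = []
    for row in plan_rows:
        last_run = row.get("last_run")
        if last_run is None:
            flagged.append((row["method"], "never run - aspirational"))
        elif today - last_run > CADENCES[row["cadence"]]:
            flagged.append((row["method"], "overdue"))
    return flagged

# Example plan with one operational row and one aspirational row
plan = [
    {"method": "literature surveillance", "owner": "QA lead",
     "cadence": "quarterly", "last_run": date(2026, 1, 15)},
    {"method": "similar-device monitoring", "owner": "RA lead",
     "cadence": "semi-annual", "last_run": None},
]
flags = overdue_rows(plan, today=date(2026, 4, 10))
```

A check like this is cheap to run before every management review, and it surfaces exactly the gap an auditor looks for: the row that the plan promises and the evidence does not show.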

The Subtract to Ship framework for MDR applied here cuts both directions. It cuts elaborate proactive programmes that the team cannot run — the twelve-database literature search that happens once and never again, the registry participation that was promised in the plan and never executed. And it cuts reactive-only plans that use "we are small" as an excuse to skip the active half. The rule is the same one the framework always uses: trace every activity to a specific MDR obligation, and run the minimum set that satisfies every one of them.

The common gaps at audit

Auditors see the same gaps on early-stage PMS systems over and over. Five of them are worth naming explicitly.

Gap 1 — No proactive sources at all. The plan lists complaints, returns, and incident reporting. That is it. Element A is half-complete and Article 83(2) "actively" is not satisfied.

Gap 2 — Proactive sources named but not executed. The plan says "literature surveillance quarterly" but there is no evidence the search has ever been run. No log, no search strings, no hits, no review notes. The plan is aspirational.

Gap 3 — No cadence. Proactive activities are described but no frequency is named. "Literature will be monitored" is not a cadence. Annex III, Element B requires assessment methods, and the cadence is part of the method.

Gap 4 — No link from proactive findings to the risk file. A literature hit that changes the occurrence estimate for a known hazard should trigger a risk-file update under EN ISO 14971:2019 + A11:2021. If the plan does not describe this loop, the proactive finding has nowhere to go.

Gap 5 — PMCF treated as optional without justification. Element I of Annex III requires a PMCF plan or a written justification for why PMCF is not applicable. Skipping both is the single most common finding on startup PMS plans. For the plan-level mechanics, see the PMS plan under MDR Annex III.

None of these are theoretical. Each of them has produced findings on real audits we have seen. Each of them is also cheap to fix in the plan before the audit happens.

The Subtract to Ship angle

Proactive PMS is where startups most often over-build and then under-execute. The fix is the subtraction test applied in both directions. For every proactive method in the plan, ask: is this required by the Regulation, is this traceable to Annex III or MDCG 2025-10, and can the team actually run it on the named cadence? If yes to all three, keep it. If the first two are yes and the third is no, either resize the cadence until the team can run it, or replace the method with a leaner one the team will actually execute. Elaborate proactive programmes that never run are worse than modest programmes that do run, because the plan says one thing and the evidence says another — and that gap is the gap the auditor walks straight into.

And for every reactive-only plan: name the proactive methods that will satisfy Article 83(2) "actively," run them on a real cadence, and document the runs. The minimum proactive programme for most startup devices is two or three methods executed on quarterly or semi-annual cycles. That is enough to defend the plan. Less is not.

Reality Check — where do you stand?

  1. Open your PMS plan. Count the data sources. How many are proactive and how many are reactive? If the proactive count is zero, Article 83(2) "actively" is not satisfied.
  2. For each proactive source in your plan, name the owner, the cadence, and the last date the activity was actually performed. Any row with no last-run date is aspirational, not operational.
  3. Do you run a literature surveillance programme with documented search strings, databases, and a review log? If not, what is your plan to start within the next quarter?
  4. Is similar-device monitoring in the plan, and when was the last time you reviewed recalls and FSCAs on comparable devices?
  5. For Element I of Annex III, do you have a PMCF plan or a written justification for why PMCF is not applicable?
  6. When a proactive finding arrives — a literature hit, a similar-device recall — does the plan describe how it feeds into the risk management file and the clinical evaluation?
  7. Have you read MDCG 2025-10 end-to-end, or only the sections that cover the reactive side?

Frequently Asked Questions

Is proactive PMS legally required under MDR?

Yes. Article 83(2) of Regulation (EU) 2017/745 requires the PMS system to "actively and systematically" gather, record, and analyse relevant data. The word "actively" rules out a reactive-only system. Annex III, Section 1.1 lists proactive sources — including publicly available information about similar devices and state-of-the-art evaluations — as required content of the PMS plan.

What is the minimum proactive programme for a Class I device?

There is no fixed minimum in the Regulation, but a credible minimum for most Class I devices is two proactive methods executed on a real cadence: literature surveillance quarterly and similar-device monitoring semi-annually, both with documented search strings and owners. The plan may also need a written PMCF justification under Element I of Annex III if PMCF is not applicable.

How is proactive PMS different from PMCF?

PMCF is one specific proactive method, focused on clinical data and governed by Annex XIV Part B of the MDR. Proactive PMS is the broader category that also includes literature surveillance, similar-device monitoring, registries, and user surveys. PMCF is the clinical arm of proactive PMS, not a substitute for the other methods.

Can a small team actually run a proactive programme?

Yes, if the programme is calibrated. Two or three proactive methods on a quarterly or semi-annual cadence is typically a few days of work per year per method. What small teams cannot do is run elaborate multi-method programmes at high cadence. The Subtract to Ship rule is to pick a small number of methods the team will actually execute and document the executions.

What happens when a proactive finding arrives?

The finding flows through the assessment method named in Element B of the plan, into the risk management file under EN ISO 14971:2019 + A11:2021, and into the clinical evaluation update cycle where relevant. The plan must describe this path explicitly. A proactive finding with no documented destination is a finding that will be lost.

Sources

  1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 83 (post-market surveillance system of the manufacturer, including Article 83(2) on active and systematic data gathering), Article 84 (post-market surveillance plan), Annex III (technical documentation on post-market surveillance, Section 1.1), and Annex XIV Part B (post-market clinical follow-up). Official Journal L 117, 5.5.2017.
  2. MDCG 2025-10 — Guidance on post-market surveillance of medical devices and in vitro diagnostic medical devices. Medical Device Coordination Group, December 2025.
  3. EN ISO 14971:2019 + A11:2021 — Medical devices — Application of risk management to medical devices.

This post is a deep dive in the Post-Market Surveillance & Vigilance series of the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. A PMS system that only runs the reactive half is a complaint desk with a label on it. The active half is where the signals live that nobody has called about yet — and catching those signals before they become complaints is the whole point of the post-market obligation.