A PMS plan under MDR Annex III is the document that describes how a manufacturer's post-market surveillance system actually runs for a specific device. Section 1.1 of Annex III to Regulation (EU) 2017/745 fixes the mandatory content: data sources, assessment methods, trend-reporting protocols under Article 88, communication procedures, corrective-action triggers, device-traceability tools, and a PMCF plan or a justification for why PMCF is not applicable. The plan is part of the technical documentation, it is required for every class of device, and it must be proportionate to the risk class. The Regulation does not prescribe a template. It prescribes content, and the auditor checks the content.

By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.


TL;DR

  • The PMS plan is required by MDR Article 84 and its mandatory content is fixed in Annex III, Section 1.1 of Regulation (EU) 2017/745.
  • Annex III, Section 1.1 lists the elements the plan must address: data sources, assessment methods, trend-reporting under Article 88, communication channels, CAPA triggers, traceability tools, and a PMCF plan or justification.
  • The plan must cover both proactive sources (literature, similar-device monitoring, PMCF, user feedback) and reactive sources (complaints, incidents, FSCAs, returns, service records).
  • The plan must scale to the risk class: a Class I reusable instrument does not run the same architecture as a Class III implantable, but both plans address every Annex III element.
  • MDCG 2025-10 (December 2025) is the current operational guidance that notified bodies apply when reviewing a PMS plan.
  • The plan is not a standalone document. It is linked directly into the risk management file under EN ISO 14971:2019 + A11:2021 and into the clinical evaluation update cycle.

Why this document matters more than the template you copied

Every consultancy in Europe has a PMS plan template. Every first-time founder downloads one. Almost all of those templates were built for a Class III implantable, then lightly edited for a Class IIa or Class I device, and most of the edits were cosmetic. The result is a twenty-page document that looks thorough, that the team cannot realistically execute, and that still manages to miss one or two Annex III elements because the template predates MDCG 2025-10.

The arm-strap sleep-monitoring device we wrote about in the PMS pillar post is the context. The skin-irritation pattern was caught because the PMS plan for that device named the right data sources, ran the right assessment cadence, and connected the findings to the risk file. If the plan had been the downloaded template nobody read, the pattern would have surfaced months later, after more patients, more complaints, and a notified body asking why the trend was missed. The plan is not paperwork. The plan is the document that decides whether your PMS system catches the signal or misses it.

This post walks Annex III, Section 1.1 element by element. It tells you what each element is, what a lean version looks like, and what auditors check. It assumes you have read MDR Articles 83 to 86: The PMS framework explained. If you have not, read that first; it gives the four-article backbone this plan sits inside.

Step 1. Walk Annex III, Section 1.1 element by element

Annex III of Regulation (EU) 2017/745 is the section of the technical documentation that covers post-market surveillance. Section 1.1 is the PMS plan specification. It names the elements the plan must address. These are the elements an auditor will look for directly, in this order, when they open the plan.

Element A. Data collection sources. The plan describes the processes for collecting and using the available information, in particular complaints and reports from healthcare professionals, patients, and users on their experience with the device; information from similar devices on the market; publicly available information and state-of-the-art evaluations; and data on serious incidents, including information from Periodic Safety Update Reports and field safety corrective actions.

Element B. Assessment methods. The plan sets out effective and appropriate methods and processes to assess the collected data. This is not "we will review complaints." It is "we will classify each complaint against a defined taxonomy, compare it against the pre-market hazard analysis, and apply the following analytical method."

Element C. Tools to investigate complaints and analyse field experience. The plan describes the tools actually used. The intake system, the investigation workflow, the trending approach, the documentation.

Element D. Trend-reporting protocols under Article 88. The plan specifies the methods and protocols used to manage events subject to trend reporting under Article 88, including the methods and protocols to identify any statistically significant increase in the frequency or severity of incidents, as well as the observation period.

Element E. Communication channels. The plan describes the methods and protocols for communicating effectively with competent authorities, notified bodies, economic operators, and users.

Element F. Reference to the procedures that fulfil Articles 83, 84, and 86 obligations. The plan points explicitly to the procedures that implement the manufacturer's obligations laid down in Articles 83, 84, and 86.

Element G. Systematic procedures for corrective actions. The plan sets out systematic procedures to identify and initiate appropriate measures, including corrective actions.

Element H. Traceability tools. The plan describes effective tools to trace and identify devices for which corrective actions might be necessary.

Element I. PMCF plan or justification. The plan includes a PMCF plan referred to in Part B of Annex XIV, or a justification as to why a PMCF is not applicable.

Nine elements. The audit checklist is nine items long. A PMS plan that addresses eight out of nine and silently skips the ninth is a PMS plan that generates a finding at audit. The most consistently skipped element at startups is Element I, either because the team assumed PMCF was not applicable without writing the justification down, or because they wrote the PMCF plan as a separate document and forgot to reference it from the PMS plan.

Step 2. Write each element as a table row, not a chapter

The Regulation prescribes content, not format. The leanest plans we see at startups are tables. One row per Annex III element. Each row names the data source or the method, the frequency, the owner, and the document reference that proves the activity actually runs. The table is three to eight pages. Everything else is supporting SOPs, linked from the plan rather than reproduced inside it.

A table structure forces discipline. When you try to fill in "frequency" for Element D (trend reporting), you are forced to name an observation period. When you try to fill in "owner" for Element C (complaint investigation tools), you are forced to name a person with a role. When you try to fill in "document reference" for Element I (PMCF plan), you are forced to either point to the PMCF plan or write the justification. Templates that lay out Annex III as prose paragraphs let the writer skate past the gaps. Tables do not.

This is not a style preference. It is the fastest path to a plan that passes Annex III without bloating the tech file with content nobody will maintain. For the broader logic, see the Subtract to Ship framework for MDR compliance.
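The row discipline from Step 2 can even be checked mechanically. A minimal sketch, assuming a dict-per-row representation of the plan table — the field names here are illustrative, not prescribed by Annex III:

```python
# Hypothetical lint for a table-structured PMS plan: every row must name
# the fields an auditor will ask for. Field names are our own convention.
REQUIRED_FIELDS = ("element", "activity", "frequency", "owner", "doc_ref")

def missing_fields(row: dict) -> list:
    """Return the required fields a plan row leaves blank or absent."""
    return [f for f in REQUIRED_FIELDS if not row.get(f)]

plan_rows = [
    {"element": "D", "activity": "Trend analysis under Article 88",
     "frequency": "per 90-day window", "owner": "Regulatory lead",
     "doc_ref": "SOP-012"},
    {"element": "I", "activity": "PMCF survey",
     "frequency": "", "owner": "", "doc_ref": "PMCF-PLAN-v2"},
]

for row in plan_rows:
    gaps = missing_fields(row)
    if gaps:
        # the blank cells a prose plan would let the writer skate past
        print(f"Element {row['element']}: missing {', '.join(gaps)}")
```

Run against the sample rows above, the check surfaces exactly the gap the table format exists to expose: the Element I row has no owner and no cadence yet.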

Step 3. Split data sources into proactive and reactive

The single most common drafting mistake on Annex III Element A is lumping every data source into one bucket called "complaints and feedback." Annex III itself distinguishes between several kinds of information, and the distinction matters because proactive sources behave differently from reactive sources in how they are run, who owns them, and when they produce findings.

Reactive sources are the ones that arrive because something happened. Complaints from users and healthcare professionals. Returns. Service records. Incidents and serious incidents that trigger vigilance reporting. Field safety corrective actions on other devices. These sources are managed by an intake process, a logging system, and an investigation workflow. They are the sources startups think of first because they are the most visible.

Proactive sources are the ones that only exist because the plan says to look. Literature monitoring against a defined search string on a defined cadence. Similar-device monitoring. Reviewing public information, recalls, and FSCAs on devices comparable to yours. Registry data where available. User surveys executed on a schedule. PMCF activities under Annex XIV Part B. State-of-the-art reviews. These sources do not arrive; they are actively pulled. A PMS plan that only names reactive sources is a PMS plan that fails the "actively and systematically gathering" test in Article 83(2).

The table structure from Step 2 makes this split visible immediately. Every row is tagged proactive or reactive. If your table has ten reactive rows and one proactive row, you have a reactive-only system dressed as a PMS plan. The Regulation wants both.
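The balance check described above can be sketched in a few lines — a hypothetical example with illustrative source names, not a prescribed taxonomy:

```python
# Hypothetical tagging of plan data sources as proactive or reactive,
# with the Step 3 balance check: a plan whose sources are nearly all
# reactive fails the Article 83(2) "actively and systematically" test.
sources = {
    "complaints": "reactive",
    "serious incidents and vigilance reports": "reactive",
    "returns and service records": "reactive",
    "literature monitoring": "proactive",
    "similar-device and FSCA monitoring": "proactive",
    "PMCF survey": "proactive",
}

counts = {"proactive": 0, "reactive": 0}
for tag in sources.values():
    counts[tag] += 1

proactive_share = counts["proactive"] / len(sources)
print(f"proactive share: {proactive_share:.0%}")  # 50% for this sample
if proactive_share < 1 / 3:
    print("reactive-only system dressed as a PMS plan")
```

The one-third threshold mirrors the rule of thumb in the Reality Check below; it is a sanity bound, not a figure taken from the Regulation.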

Step 4. Proportion the plan to the risk class

Article 83 requires the PMS system to be proportionate to the risk class and appropriate for the type of device. Proportionality applies to the plan, not just the system. A proportionate plan scales four things.

Cadence. The observation periods and review frequencies in the plan scale with the class. A Class I reusable instrument with a ten-year post-market history can run quarterly reviews of a small complaint stream. A Class III active implantable runs continuous monitoring and monthly or shorter analysis cycles. The plan states the cadence explicitly; "periodically" is not a cadence.

Depth of data sources. A Class I non-invasive device may rely on complaints, general literature monitoring, and user feedback. A Class IIb device typically adds similar-device monitoring, targeted literature searches, and structured PMCF. A Class III implantable adds registries, structured follow-up studies, device-specific databases, and usually a formal PMCF study. The plan names the sources appropriate to the class, not a generic list copied from a template.

Analytical rigour. Element B requires assessment methods. For a Class I device with low complaint volume, the method can be a documented classification and manual trending. For a Class III device with meaningful complaint volume, the method typically involves statistical trending rules with defined thresholds. The plan states the method; it does not wave at "statistical analysis" without naming the approach.

PMCF scope. Element I requires a PMCF plan or a justification for why PMCF is not applicable. For a Class III or implantable device, "not applicable" is rarely the right answer and the plan typically describes a substantive PMCF activity. For a Class I device, a documented justification may be appropriate. But the justification itself must exist in writing.

Proportionality is a calibration rule, not an escape hatch. Every Annex III element is addressed for every class. What changes is how much depth, what cadence, and what analytical method. For the class-by-class mechanics of what the plan outputs, see the PMS Report for Class I devices under Article 85 and PSUR for Class IIa, IIb, and III devices under MDR.

Step 5. Link the plan to the risk management file and the clinical evaluation

A PMS plan that lives in isolation from the risk management file and the clinical evaluation is a PMS plan that will fail the loop test at audit. Two links must be explicit in the plan itself.

Link to the risk management file. EN ISO 14971:2019 + A11:2021 establishes risk management as a lifecycle activity. The PMS plan names the feedback loop into the risk file: which findings trigger a risk review, who performs the review, how the risk estimates are updated, how new control measures are evaluated, and how the updated risk file flows back into the design, the IFU, or the labelling. Element B (assessment methods) is where this link lives. If a trending signal crosses a threshold, the plan says what happens in the risk file.

This is not decorative. The arm-strap story is this loop running correctly. A skin-irritation signal surfaced through Element A (complaints), was assessed through Element B (classification against the hazard analysis), triggered a risk-file update under EN ISO 14971:2019 + A11:2021, produced a new control measure, and flowed back into the material specification. A PMS plan that does not describe this path is a PMS plan that will catch the signal and then drop it.

Link to the clinical evaluation. Annex XIV Part B requires PMCF data to feed the clinical evaluation update cycle. The PMS plan either contains the PMCF plan or references it by name and version. Element I is the anchor. When PMCF data arrives, the plan says how the clinical evaluation is updated, who owns the update, and what the cadence is.

Both links must be traceable on the page. An auditor who reads the PMS plan and cannot find the sentence that names the risk-file loop will ask where it is. "It is implicit in our QMS" is not an answer that survives the question.

Step 6. Write the trend-reporting protocol under Article 88 explicitly

Element D of Annex III points directly at Article 88, which governs trend reporting. Article 88 requires manufacturers to report any statistically significant increase in the frequency or severity of incidents that are not serious incidents or that are expected undesirable side-effects. The PMS plan must state how that significance is identified.

The three questions the plan must answer on Element D are: what counts as an incident in scope of the trend-reporting rule, what is the observation period over which trends are measured, and what is the statistical rule that defines a significant increase. The Regulation does not mandate a specific statistical method, but the plan must name one. "We will review for trends periodically" fails Element D. "We will compute a running rate per 1000 devices per 90-day window and flag any increase exceeding X per cent against the historical baseline for review by the risk team" is the shape of a compliant answer.
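The shape of that compliant answer can be made concrete. A minimal sketch of a running-rate rule, assuming a 90-day observation period and a 25 per cent threshold — both illustrative values the plan would have to set and justify itself:

```python
# Hypothetical Element D trend rule: complaint rate per 1000 devices over
# 90-day windows, flagged when the latest window exceeds the mean of the
# prior windows by more than a fixed percentage. Window length and
# threshold are illustrative, not regulatory values.
from datetime import date, timedelta

WINDOW_DAYS = 90       # observation period the plan must name
THRESHOLD_PCT = 25.0   # % increase over baseline that triggers review

def rate_per_1000(complaints: list, start: date, devices_in_field: int) -> float:
    """Complaints per 1000 devices inside one observation window."""
    end = start + timedelta(days=WINDOW_DAYS)
    n = sum(1 for c in complaints if start <= c < end)
    return 1000.0 * n / devices_in_field

def trend_flag(complaints: list, window_starts: list, devices_in_field: int) -> bool:
    """True when the latest window's rate exceeds the historical baseline
    by more than THRESHOLD_PCT — the signal Element D routes to the risk
    team for review."""
    rates = [rate_per_1000(complaints, w, devices_in_field) for w in window_starts]
    baseline = sum(rates[:-1]) / len(rates[:-1])
    return baseline > 0 and rates[-1] > baseline * (1 + THRESHOLD_PCT / 100)
```

A real plan would also state how the baseline is seeded for a newly launched device with no history, and how severity (not just frequency) is trended; this sketch covers frequency only.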

For the full article-level mechanics of trend reporting, see trend reporting under MDR Article 88.

Step 7. Write the PMCF plan or the justification. One of the two must exist

Element I of Annex III is the element startups most often fail to handle cleanly. The plan must include a PMCF plan (the one specified in Annex XIV Part B), or it must include a documented justification for why PMCF is not applicable.

There are only three acceptable states: (1) the PMS plan contains a PMCF plan directly, (2) the PMS plan references a separate PMCF plan document by name and version, or (3) the PMS plan contains a written justification for why PMCF is not applicable, reasoned against the characteristics of the device and the nature of the clinical evaluation. Anything else fails Element I.

"We will decide later" is not a valid state. "The notified body said we do not need one" is not a valid state unless the reasoning is also written into the plan. "Our device is Class I so PMCF does not apply" is not automatically valid. Class I devices can still require PMCF depending on the clinical claims and the state of the art. For the PMCF specifics, see PMCF under MDR. A guide for startups.

Common drafting mistakes that create findings

A handful of drafting mistakes come up repeatedly at early-stage startups. Each one is directly traceable to a specific Annex III element.

Mistake 1. Reactive-only data sources. The plan names complaints and incidents but no literature monitoring, no similar-device review, and no PMCF. This fails Element A and the Article 83(2) "actively and systematically" test.

Mistake 2. Unowned activities. Every row of the plan should name an owner. Plans that say "the team will review" without a named role fail at the first audit question.

Mistake 3. No cadence. "Regularly" and "periodically" are not cadences. Every activity needs a frequency the team can be measured against.

Mistake 4. Missing trend-reporting statistical rule. Element D requires an explicit method for identifying statistically significant increases. Most first-draft plans leave this blank or wave at it.

Mistake 5. No PMCF justification in writing. The team believes PMCF is not applicable but never writes the reasoning down. Element I requires the justification to exist in the plan, not in someone's head.

Mistake 6. No link to the risk file. The plan describes data collection and analysis but does not describe what happens to the risk file when findings arrive. The feedback loop under Article 83(3) is implied rather than documented.

Mistake 7. Template language that does not match the device. The plan describes a Class III PMCF architecture for a Class I reusable instrument, or vice versa. Copy-paste from a consultancy template without calibrating to the device almost always produces this mismatch.

Mistake 8. The plan never names MDCG 2025-10. MDCG 2025-10 (December 2025) is the current operational guidance. Plans that still reference older guidance or no guidance at all signal to the auditor that the author has not updated the document since the new MDCG was published.

The Subtract to Ship angle

The Subtract to Ship framework for MDR applied to the PMS plan produces one test. For every paragraph in the plan, name the specific Annex III element, Article, or MDCG 2025-10 section it addresses. Every paragraph that cannot be traced is waste. Every Annex III element that cannot be located is a gap.

Run this test on a first-draft plan from a consultancy template and you will typically cut 30 to 50 per cent of the volume without losing a single regulatory obligation. The plan that survives is shorter, more executable, and easier to keep current. The cuts are not savings on quality. They are savings on maintenance cost that would otherwise erode the plan every quarter as the team abandons paragraphs nobody owns.

What survives is a plan where every row of the table maps to an Annex III element, every activity has an owner and a cadence, every proactive source is named alongside the reactive ones, the trend-reporting rule is explicit, the risk-file and clinical-evaluation loops are drawn on the page, and Element I is handled either with a PMCF plan or a written justification. Everything else is subtraction bait.

Reality Check. Where do you stand?

  1. Open your PMS plan. For each of the nine elements A through I in the walkthrough above, can you point to the paragraph or row that addresses it? If any element is missing, that is a finding waiting to happen.
  2. Is the plan structured as a table or as prose? If prose, run the paragraphs against the Annex III elements and see what happens. Most prose plans have silent gaps.
  3. Count the data sources named in the plan. How many are proactive and how many are reactive? If fewer than a third are proactive, Article 83(2) is not satisfied.
  4. For Element D, does the plan name a specific statistical rule and observation period for trend reporting, or does it wave at "regular trending"?
  5. For Element I, does the plan contain either a PMCF plan or a written justification for why PMCF is not applicable? "We will decide later" is not one of the options.
  6. Does the plan link explicitly to the risk management file and describe what happens when findings arrive? If the feedback loop is implicit, it is not documented.
  7. Has the plan been updated since MDCG 2025-10 was published in December 2025? Plans that do not reference the current guidance signal that they are behind.
  8. Does every row of the plan have a named owner, a specific cadence, and a document reference to the SOP or record that implements it?
  9. If an auditor opened the plan tomorrow and asked "trace one finding from this cycle through the plan, the risk file, and into a design update," could you do it on the page?

Frequently Asked Questions

Does MDR Annex III require a specific PMS plan template?

No. Annex III, Section 1.1 of Regulation (EU) 2017/745 specifies the content the PMS plan must address, not the format. Manufacturers are free to structure the plan as prose, as a table, or as a hybrid. What matters is that every listed element is addressed. A table format with one row per Annex III element is the leanest structure we see survive audits at startups.

Is a PMS plan required for Class I devices?

Yes. MDR Article 84 requires a PMS plan for every class of device. The content requirements in Annex III apply to every class. What changes by class is the proportionality. The cadence, the depth of data sources, and the analytical rigour scale with the risk class. The existence of the plan does not.

Can the PMS plan and the PMCF plan be the same document?

Yes. Annex III, Section 1.1, explicitly allows the PMCF plan to sit inside the PMS plan. The PMCF plan can also be a separate document referenced from the PMS plan by name and version. Either structure is acceptable as long as the PMCF content specified in Annex XIV Part B is present somewhere traceable from the PMS plan.

How often does the PMS plan itself need to be updated?

The Regulation does not set a calendar cadence for updating the plan itself. The plan is updated whenever the PMS system changes. New data sources, new assessment methods, new trigger criteria. Or whenever PMS findings reveal that the existing plan does not adequately cover the real-world risks. In practice, startups review the plan annually as part of the QMS management review cycle, with interim updates driven by findings.

What is the minimum length of a compliant PMS plan?

There is no minimum length in the Regulation. The shortest compliant plans we have seen at startups are three to five pages, organised as a table with one row per Annex III element. The volume is not the compliance signal. The coverage of every element is. A three-page plan that addresses all nine elements beats a twenty-page plan that skips two.

Does the PMS plan need to reference MDCG 2025-10?

Referencing MDCG 2025-10 is not legally required. The legal obligations live in the MDR itself. But MDCG 2025-10 (December 2025) is the current operational guidance that notified bodies apply when assessing PMS systems. Plans that align with the guidance have an easier audit. Plans that reference the guidance explicitly signal to the auditor that the author has worked with the current interpretation.

Who should own the PMS plan inside a small startup?

The PMS plan sits inside the QMS, so formal ownership typically lives with the Person Responsible for Regulatory Compliance (PRRC) or the quality lead. Operational responsibility for specific rows can be distributed. For example, the clinical lead owns the PMCF row, the engineering lead owns the complaint-investigation row, the regulatory lead owns the trend-reporting row. The plan names the owners row by row so the distribution is explicit.

Sources

  1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 83 (post-market surveillance system of the manufacturer), Article 84 (post-market surveillance plan), Article 88 (trend reporting), Annex III (technical documentation on post-market surveillance, Section 1.1), and Annex XIV Part B (post-market clinical follow-up). Official Journal L 117, 5.5.2017.
  2. MDCG 2025-10. Guidance on post-market surveillance of medical devices and in vitro diagnostic medical devices. Medical Device Coordination Group, December 2025.
  3. EN ISO 14971:2019 + A11:2021. Medical devices. Application of risk management to medical devices.

This post is a deep dive in the Post-Market Surveillance & Vigilance series of the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. A PMS plan that addresses every element of Annex III element by element is the document that decides whether a PMS system catches a real-world signal or misses it. And the document an auditor will read first.