MDR Annex III, Section 1.1 lists the PMS data sources every manufacturer must consider: complaints from healthcare professionals, patients, and users; information from similar devices on the market; publicly available information and state-of-the-art evaluations; and data on serious incidents, including PSURs and field safety corrective actions. For a startup with a small installed base, the job is not to collect more data than you have — it is to name every source the Regulation expects, run the proactive ones on a documented cadence, and prove to an auditor that the absence of data is the result of a small footprint, not an absent system.

By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.


TL;DR

  • Annex III, Section 1.1 of Regulation (EU) 2017/745 fixes the categories of PMS data sources a manufacturer must consider. The list is not optional.
  • Internal sources include complaints, CAPA records, service and return data, training records, and sales and usage data. These are the sources a startup controls directly.
  • External sources include scientific literature, similar-device monitoring, competent authority databases, registries where available, and field safety corrective actions on comparable devices.
  • Proactive sources (literature, similar-device review, PMCF) must be run on a documented cadence regardless of whether data arrives. Reactive sources (complaints, incidents, returns) are run when events trigger them.
  • A startup with fifty devices in the field is not excused from naming every source — it is expected to document why the volume is low and to run the proactive sources anyway.
  • MDCG 2025-10 (December 2025) is the current operational guidance on how PMS data collection should run in practice.

Why the data-source list is the part startups get wrong first

When a first-time founder sits down to draft a PMS plan, the instinct is to name complaints and stop. Complaints feel like post-market data. Complaints come in through a channel the team controls. Complaints have a timestamp and a name and a story. Everything else — literature searches, similar-device reviews, registry data — feels like work for a large manufacturer with a regulatory department and budget to burn.

The arm-strap sleep-monitoring device from the PMS pillar post is the story that breaks this instinct. The skin-irritation pattern was caught because the complaint channel was running — that part is true. But the same pattern could have been anticipated earlier if the similar-device monitoring had flagged that textile-polymer interfaces on long-wear sensors were already producing complaints across the category. Post-market signal is not only what your own users say. Post-market signal is also what the rest of the market has already learned. A PMS system that only looks at its own complaint inbox is a system that relearns every lesson the hard way.

This post walks the Annex III data-source categories and translates each one into what it actually looks like for a startup with a small installed base. It assumes you have read what post-market surveillance is under MDR and the PMS plan under MDR Annex III. Those two posts establish the framework this one fills in.

The Annex III data-source list — what the Regulation names

Annex III, Section 1.1 of Regulation (EU) 2017/745 specifies the information the PMS plan must process. In paraphrased form, the categories are: complaints and reports from healthcare professionals, patients, and users on their experience with the device; information concerning similar devices on the market; publicly available information about similar devices and state-of-the-art evaluations; and data on serious incidents, including information from Periodic Safety Update Reports and field safety corrective actions.

Four categories named in one sentence. Each one expands into a different practical workstream. The confusion startups run into is twofold. The categories overlap: a field safety corrective action on a competitor device is both "similar devices" and "publicly available information". And the list itself leaves out sources that are obvious once you think about them, like internal service records and sales data, because those sources are assumed to be part of the QMS already.

MDCG 2025-10 (December 2025) is the guidance document that expands the list into operational practice. The notified body auditor assessing your plan is reading both the Regulation and the guidance side by side. A plan that names only complaints is a plan that fails against both.

Internal sources — complaints, CAPA, service, sales, and training

Internal sources are the ones a startup controls directly. They are generated by the company's own operations and sit inside the QMS. These are the easiest sources to run cleanly and the most common place where startup plans are reasonably well-built.

Complaints. The complaint channel is Element A of Annex III in its most visible form. It requires a named intake route — email address, form, phone number, or all three — that everyone in the company, every distributor, and every end user can use. It requires a logging system with timestamp, complainant, device identification, description, and owner. It requires an assessment workflow that classifies each complaint against the pre-market hazard analysis and decides whether the event rises to a reportable incident under Articles 87 to 92. For the complaint workflow itself, see complaint handling under MDR for startups.
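
For teams that keep the log in a lightweight tool rather than a dedicated eQMS, those logging fields translate into something as simple as the sketch below. The field names are illustrative, not prescribed by the Regulation.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ComplaintRecord:
    """One row in the complaint log. Field names are illustrative only."""
    received_at: datetime              # timestamp of intake
    complainant: str                   # who reported: HCP, patient, user, distributor
    device_id: str                     # UDI-DI, serial, or lot number
    description: str                   # the event in the reporter's words
    owner: str                         # person responsible for the assessment
    hazard_ref: Optional[str] = None   # link to the pre-market hazard analysis entry
    reportable: Optional[bool] = None  # Articles 87 to 92 assessment, once made
```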

CAPA records. Corrective and preventive action records are a post-market data source in their own right. Every CAPA captures a problem, a root cause, and an action. Trending CAPAs by category and root cause produces signals that no single complaint would surface. The PMS plan names CAPA data as a source and sets a cadence for reviewing CAPA trends against the risk file.

Service and return data. For devices that are serviced or returned, the service and return stream is a rich post-market signal. What breaks, after how long, under what conditions, with what failure mode — this data lives in the service ledger and the returns log. The plan names these streams and sets a cadence for review. A device that never reaches the service bench still generates a signal: the absence of service events is itself data, and the plan should capture that.

Sales and usage volume. Annex III and the PSUR requirements under Article 86 both expect manufacturers to know the volume of devices in the field and, where practicable, the usage frequency. Sales data is the denominator in every trend-reporting calculation: a complaint rate without a device count is meaningless. For a lean startup, this is one spreadsheet column away from being a usable source.
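
A worked example makes the point. The numbers are invented, but the arithmetic shows why the sales column has to exist before any complaint rate or Article 88 trend statement can be computed.

```python
# Invented numbers: the arithmetic only works because the device count exists.
devices_in_field = 50          # cumulative units placed on the market
complaints_this_quarter = 2    # entries in the complaint log for the quarter

rate_per_100_devices = 100 * complaints_this_quarter / devices_in_field
print(f"Complaint rate this quarter: {rate_per_100_devices:.1f} per 100 devices")
# Without devices_in_field, "2 complaints" supports no trend statement at all.
```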

Training records and user feedback from training. If the device is accompanied by user training, the training sessions produce feedback that is a post-market data source. What users struggle with in training is often a leading indicator of what they will complain about in use. The plan can name training feedback as a source without adding significant work.

Five internal sources. Each one traces to an Element of Annex III Section 1.1 or to the broader Article 83 obligation to actively and systematically gather data.

External sources — literature, registries, competent authority data

External sources are the ones that exist outside the company. They are the hardest to run at startup scale because they require actively looking — the data does not arrive, it must be pulled.

Scientific literature monitoring. Annex III names "publicly available information" and "state-of-the-art evaluations" explicitly. In practice, this means a scheduled literature search against a defined search string in a defined set of databases, with results screened and relevant hits logged. PubMed is the starting database. Google Scholar is a supplement. For a startup with a narrow device, the search string is specific and the hit volume is manageable. The plan names the search string, the databases, the cadence (quarterly is common at startup scale), and the owner. The search is documented even when it produces zero hits — the zero-hit log is itself evidence that the source is running.
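
One way to make the zero-hit log concrete is to treat every executed search as one log entry, hits or not. The sketch below is illustrative only; the search string, date, and owner are invented placeholders.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LiteratureSearchLog:
    """One executed search is one log entry, even when nothing is found."""
    run_on: date
    databases: list[str]
    search_string: str
    hits_screened: int
    relevant_hits: list[str] = field(default_factory=list)  # citations kept for review
    owner: str = ""

# A zero-hit quarter is still evidence that the source is running.
q1_search = LiteratureSearchLog(
    run_on=date(2026, 3, 31),
    databases=["PubMed", "Google Scholar"],
    search_string='("arm-worn sleep sensor") AND (adverse OR "skin irritation")',
    hits_screened=14,
    relevant_hits=[],          # screened, none relevant: logged, not discarded
    owner="QA/RA lead",
)
```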

Similar-device monitoring. This is the source most startups skip and notified bodies most often flag. The plan names a set of devices that are similar to yours — same intended purpose, same technology family, or same clinical use — and monitors public information about those devices on a defined cadence. Sources include manufacturer recall announcements, notified body safety notices, MedSun for US data, and open databases for European data as they come online through Eudamed. The point is not to audit the competition. The point is to learn what signals the rest of the market is already seeing.

Competent authority data. Competent authorities publish safety information, including FSCAs and recall notices for devices on the European market. The plan names the competent authority sources it monitors — in practice, the national agencies of the member states where the device is placed — and sets a cadence for review.

Registry data. For some device categories, clinical registries exist and publish data on device performance. If a registry exists that covers your device category, the plan names it and sets a review cadence. If no registry exists, the plan states that and moves on — the absence of a registry is a valid data-source finding.

PSURs from comparable devices. Where manufacturers publish PSUR conclusions or summaries, those conclusions are a post-market data source for similar devices. This is a newer source and the availability varies by category. The plan names it when it is available.

A lean startup will typically run three or four of these external sources actively: literature monitoring, similar-device monitoring, and competent authority data at a minimum. A plan that runs zero external sources is a plan that fails the Article 83(2) "actively and systematically" test.

Social media and forums — what to watch, what to ignore

Social media and user forums are a legitimate PMS data source when the device has user communities that discuss it online. The category is not in Annex III explicitly, but it fits inside "information from similar devices" and "publicly available information" — and MDCG 2025-10 acknowledges the relevance of online user-generated content where it is meaningful for the device.

The practical guidance at startup scale is narrow. Social media monitoring is worth running when three conditions hold: there is an active user community discussing the device category, the community is searchable, and the content produced is substantive enough to extract signal from noise. For a consumer-facing wearable or a home-use device, this is often true. For a deep-tissue surgical instrument, it usually is not.

When social media is a named source in the plan, the plan specifies the platforms, the search terms, the cadence, and the owner. It also specifies what is in scope — clinical claims, adverse experiences, usability observations — and what is out of scope, so the owner is not drowning in irrelevant content. And it specifies the documentation standard: hits reviewed, relevance logged, actionable items escalated.

The trap is running social media as a token activity. Naming "Twitter and Reddit" in the plan without a search strategy, a cadence, or an output is worse than not naming social media at all — it is a row the auditor will ask about and find empty.

User feedback as a structured collection activity

Beyond complaints that arrive unsolicited, the Regulation expects manufacturers to actively solicit feedback from users. This is the proactive side of the user-feedback stream, and it is distinct from the complaint channel.

Structured user feedback takes several forms. Post-sale surveys are the simplest — a questionnaire sent to users a defined interval after acquisition. Structured interviews with key users are richer but lower volume. Training session feedback forms, as mentioned earlier, are a low-cost option. For devices with regular service or calibration cycles, the service touch-point is a natural place to collect structured feedback. For Class IIa devices and above, Annex XIV Part B may require this kind of active collection as part of PMCF.

The plan names the structured feedback activity, the cadence, the owner, and the analysis method. The volume does not need to be large at startup scale. What the auditor looks for is whether the activity is running and whether the findings are flowing into the risk file, the clinical evaluation, and the next PMS Report or PSUR.

What is feasible at startup scale

A startup with fifty devices in the field does not generate the data volume of a manufacturer with fifty thousand. That is a fact about the installed base, not a fact about the PMS system. The Regulation does not reduce the list of data sources when the installed base is small — it expects proportionality in depth and cadence, not in coverage.

The feasible startup posture looks like this. Every Annex III category is named in the plan. Every internal source is running by default because the QMS is running. Two to four external sources are running on a documented cadence with specific search terms, databases, and owners. Structured user feedback is running on a cadence appropriate to the volume — annual for a small installed base is defensible. Social media monitoring is included only when the user community actually exists. Every activity has an owner, a cadence, and a document reference. The plan states explicitly that the data volume is proportionate to the installed base and that the absence of data in a category is documented, not assumed.
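
Written down, one row of that posture needs nothing more than a source, an owner, a cadence, and a document reference. The rows below are invented placeholders; the check at the end simply refuses any row that is missing one of the three.

```python
# Invented placeholder rows. The point is that no row is missing an owner,
# a cadence, or a document reference.
data_source_inventory = [
    {"source": "Complaint channel",     "owner": "QA/RA lead", "cadence": "continuous", "doc_ref": "SOP-PMS-01"},
    {"source": "Literature search",     "owner": "QA/RA lead", "cadence": "quarterly",  "doc_ref": "SOP-PMS-02"},
    {"source": "Similar-device review", "owner": "QA/RA lead", "cadence": "quarterly",  "doc_ref": "SOP-PMS-03"},
    {"source": "Post-sale user survey", "owner": "Clinical",   "cadence": "annual",     "doc_ref": "SOP-PMCF-01"},
]

incomplete = [row["source"] for row in data_source_inventory
              if not all(row.get(key) for key in ("owner", "cadence", "doc_ref"))]
assert not incomplete, f"Rows missing owner, cadence, or doc_ref: {incomplete}"
```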

What is not feasible is running every possible source at maximum depth. Cutting the depth is legitimate. Cutting the coverage is not. This is where the Subtract to Ship framework for MDR applies directly: every data source in the plan traces to an Annex III category, and every data source that does not trace comes out. What remains is the smallest set that covers every category the Regulation expects.

For the output side — how this data flows into the class-specific reports — see the PMS Report for Class I devices under Article 85 and PSUR for Class IIa, IIb, and III devices under MDR.

Common mistakes

Mistake 1 — Complaint-only data source list. The plan names complaints and stops. Annex III names four categories and the QMS implicitly adds several more. A complaint-only list fails the Article 83(2) test.

Mistake 2 — Naming literature monitoring without a search string. The plan says "we will monitor scientific literature" and provides no search terms, no databases, no cadence. The auditor asks for the last search log and there is nothing to show.

Mistake 3 — Treating zero-hit logs as failures. A literature search that finds nothing is a successful search. The plan should require the zero-hit log to be saved exactly like a positive-hit log. Discarding the zero-hit log makes the activity invisible at audit.

Mistake 4 — Skipping similar-device monitoring. Startups assume they have no competitors or that their device is unique enough to make similar-device review meaningless. The category is almost never empty in practice, and the Regulation names it explicitly.

Mistake 5 — Using installed base size as an excuse for zero data. "We only have fifty devices in the field, so PMS cannot run" is not a defensible position. Proportionality reduces depth, not coverage.

Mistake 6 — Social media as a token row. Naming social media without a strategy produces an auditable row with nothing behind it. Either run it properly or leave it out.

Mistake 7 — No feedback loop from data sources into the risk file and clinical evaluation. Sources are named but the path from finding to update is missing. Data collection without analysis is not PMS.

The Subtract to Ship angle

The Subtract to Ship framework applied to data sources produces two tests. First, does every data source in the plan trace to a specific Annex III category or to the Article 83(2) obligation to gather data actively and systematically? If not, cut it. Second, is every Annex III category covered by at least one source in the plan? If not, add one.
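
Both tests are mechanical enough to sketch. The category names and inventory rows below are invented for illustration; the logic is the point, not the format.

```python
# Invented categories and inventory: a sketch of the two tests, not a required format.
ANNEX_III_CATEGORIES = {
    "complaints_and_user_reports",
    "similar_devices",
    "public_information_state_of_the_art",
    "serious_incidents_psur_fsca",
}

inventory = {
    "complaint channel":           {"complaints_and_user_reports"},
    "CAPA trend review":           {"complaints_and_user_reports"},
    "literature search":           {"public_information_state_of_the_art"},
    "similar-device review":       {"similar_devices", "public_information_state_of_the_art"},
    "competent authority notices": {"serious_incidents_psur_fsca", "similar_devices"},
}

# Test 1: a source that traces to no category comes out of the plan.
untraceable = [name for name, cats in inventory.items() if not cats & ANNEX_III_CATEGORIES]

# Test 2: a category covered by no source needs one added.
uncovered = ANNEX_III_CATEGORIES - set().union(*inventory.values())

print("Cut these sources:", untraceable or "none")
print("Add a source for:", sorted(uncovered) or "none")
```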

The outcome is a data-source inventory that is small, traceable, and complete. Small because every source earned its place. Traceable because every source names a specific obligation. Complete because every category the Regulation expects has at least one entry. This is the posture that survives a notified body audit and that does not collapse when the team is too small to maintain a thirty-source list.

Reality Check — where do you stand?

  1. List every data source currently named in your PMS plan. Does the list cover all four Annex III Section 1.1 categories explicitly?
  2. For each internal source, can you point to the SOP or record that implements it? Complaints, CAPA, service, sales, training — all traceable, or some implicit?
  3. For external sources, have you named at least literature monitoring, similar-device monitoring, and competent authority monitoring? If any of these is missing, what is the documented justification?
  4. For literature monitoring, is there a specific search string, a defined list of databases, and a documented cadence? Or is the row empty past the label?
  5. When was the last literature search actually executed and logged, including zero-hit logs?
  6. For similar-device monitoring, which devices are in scope and how often are they reviewed? Can you show the last review record?
  7. Is structured user feedback collected as a proactive activity, separate from unsolicited complaints?
  8. If social media is in the plan, is there a search strategy and an owner, or is it a token row?
  9. For each data source, can you trace a path from "signal arrives" to "risk file and clinical evaluation are updated"?
  10. If your installed base is small, does the plan state the volume explicitly and explain that low data volume is a consequence of footprint, not absence of system?

Frequently Asked Questions

What are the mandatory PMS data sources under MDR?

Annex III, Section 1.1 of Regulation (EU) 2017/745 requires manufacturers to consider complaints and reports from healthcare professionals, patients, and users; information concerning similar devices on the market; publicly available information about similar devices and state-of-the-art evaluations; and data on serious incidents, including information from PSURs and field safety corrective actions. MDCG 2025-10 (December 2025) expands these categories into operational practice. Every category must be addressed in the PMS plan.

Is a startup with a small installed base allowed to skip literature monitoring?

No. The Article 83(2) obligation to actively and systematically gather data applies regardless of installed base. Literature monitoring is a proactive source that must run on a documented cadence even when the startup's own device has zero complaints. A small installed base reduces the depth and frequency the Regulation expects, but it does not reduce the coverage.

Does social media count as a valid PMS data source?

Yes, when the device has an active user community discussing it online and the monitoring is run with a defined strategy, search terms, cadence, and owner. MDCG 2025-10 acknowledges online user-generated content as a relevant source where meaningful. For devices without an active online community, naming social media as a source adds no value and may create an auditable row with no content behind it.

How often should literature searches be executed?

The Regulation does not set a mandatory cadence. In practice, quarterly searches are common for startups with narrow, specific search strings. Higher-risk devices and broader categories may require monthly or continuous monitoring. The cadence must be stated in the plan, run on schedule, and logged — including zero-hit searches.

What is the difference between complaint data and vigilance data in the PMS plan?

Complaint data is every report received through the complaint channel, regardless of severity. Vigilance data is the subset of complaints and incidents that meet the serious-incident thresholds under Articles 87 to 92 and trigger reporting to the competent authority. The PMS system collects both — complaints as a general data source and serious incidents as a specific reportable subset. The plan names both and the escalation rule that connects them.

Are sales and usage data required as a PMS source?

For Class IIa, IIb, and III devices, Article 86 explicitly requires the PSUR to include the volume of sales and an estimate of the size and other characteristics of the population using the device. That means sales and usage data must be a source in the plan. For Class I devices, the PMS Report under Article 85 is less prescriptive, but trend reporting under Article 88 still depends on a denominator — which means sales data is effectively a required source for any device with non-trivial volume.

Does the PMS plan need to list specific databases and search terms?

Yes, for every source that depends on active searching. Literature monitoring, similar-device monitoring, and competent authority data all require specific databases, search terms or device identifiers, cadence, and owners. Generic language like "we will monitor relevant sources" fails the auditability test. The plan states the sources specifically enough that a new team member could run the search on day one.

Sources

  1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 83 (post-market surveillance system of the manufacturer), Article 84 (post-market surveillance plan), and Annex III (technical documentation on post-market surveillance, Section 1.1). Official Journal L 117, 5.5.2017.
  2. MDCG 2025-10 — Guidance on post-market surveillance of medical devices and in vitro diagnostic medical devices. Medical Device Coordination Group, December 2025.
  3. EN ISO 14971:2019 + A11:2021 — Medical devices — Application of risk management to medical devices.

This post is a deep dive in the Post-Market Surveillance & Vigilance series of the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. A PMS data-source inventory that covers every Annex III category on a documented cadence is the difference between a PMS system that catches real-world signals and one that waits for a complaint to arrive at the door.