PMCF surveys and registries are two of the lowest-cost methods a startup can use to satisfy the post-market clinical follow-up obligations of MDR Annex XIV Part B. A structured user or clinician survey asks pre-specified questions about clinical performance, adverse events, usability, and off-label use of a CE-marked device, and produces qualitative and semi-quantitative clinical signals that feed the PMCF evaluation report. A registry systematically collects pre-defined data fields on the device or the condition it treats, across many centres, over long time periods, and produces longitudinal clinical data strong enough to support CER updates under Article 61(11). Each fits a different question. Neither is a substitute for a prospective study when the research question demands one, but together they cover a large share of the PMCF questions a resource-constrained startup actually faces.
By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.
TL;DR
- Structured user and clinician surveys are a feasible PMCF method for startups. Annex XIV Part B explicitly names "feedback from users" as a general PMCF method, and a disciplined survey programme can produce audit-ready clinical signals on a small budget.
- Registries are systematic, longitudinal data collection structures. Annex XIV Part B explicitly names "evaluation of suitable registers" as a specific PMCF method, and for some device categories joining an existing registry is the cheapest path to strong clinical data.
- Surveys are strong for qualitative performance, usability, and off-label-use signals. They are weak for rare events, quantitative effect sizes, and longitudinal durability questions. Match the method to the question.
- Registries are strong for long-term performance, rare events, and device-lifetime questions. They are weak for rapid feedback, usability signals, and questions that require pre-specified endpoints the registry does not collect.
- Data quality is the discriminator at audit. A survey with a 9 percent response rate and no analysis plan is not PMCF evidence. A registry contribution with no data-quality controls is not PMCF evidence either. The design decisions in this post are what move both methods from paper exercises to defensible evidence.
- MDCG 2025-10 (December 2025) is the current operational guidance on how PMCF sits inside the PMS system, and it is the document to read alongside Annex XIV Part B when designing either method.
Where this post sits in the PMCF cluster
Post 181 is the pillar on post-market clinical follow-up under MDR. Post 182 walks through how to write the PMCF plan under Annex XIV Part B. Post 184 covers prospective PMCF study design. This post narrows to the two methods that most startups reach for when a prospective study is out of budget: structured surveys and registries. The framing is practical: when each one fits, how to design each one well enough to survive audit, and how to integrate both into the PMCF evaluation report that feeds the clinical evaluation update cycle under Article 61(11).
When surveys fit
A structured PMCF survey is a disciplined questionnaire delivered to a defined population of users (clinicians, patients, or both), with pre-specified questions that map to specific Annex XIV Part B objectives. The survey is not a marketing instrument. It is not a satisfaction poll. It is a clinical data collection activity with a written protocol, a defined sample frame, a fixed schedule, and an analysis plan written before the data arrives.
Surveys fit when the research question is qualitative or semi-quantitative and the signal of interest is something a clinician or patient can report from experience. They are useful for confirming that the device performs as expected in real-world use, for detecting patterns of off-label or unintended use, for surfacing usability problems that did not show up in pre-market evaluation, and for catching early signals of adverse effects that are visible to users but not captured by complaint channels.
Surveys do not fit when the research question is about rare events, precise effect sizes, long-term durability, or endpoints that require instrument-grade measurement. A survey cannot detect a one-in-ten-thousand adverse event at the scale most startups can deploy, and a survey asking clinicians to estimate a physiological parameter produces recall-biased noise. Match the method to the question, not the question to the method.
The Subtract to Ship rule applies. If the open clinical question from the CER can genuinely be answered by a survey, build the survey. If it cannot, a survey running alongside a better-fitted method is still useful for the qualitative dimensions, but it is not a substitute.
Designing a structured user survey
A PMCF survey that will survive notified body review has seven design elements. Each one is a decision that has to be made before the first question goes out, and each one belongs in the survey protocol referenced from the PMCF plan.
Research question. State in one sentence what the survey is designed to answer, and tie the sentence to one or more Annex XIV Part B objectives. "Does the device continue to deliver the pre-specified clinical performance in routine use, and are clinicians observing any off-label patterns?" is a research question. "Collect user feedback" is not.
Population and sample frame. Define who is eligible to respond. For a clinician survey, this is typically users in sites where the device is deployed, with a minimum usage threshold. For a patient survey, this is patients who have used the device for a defined minimum duration and consented to contact. The sample frame is the list of people from which respondents are drawn. A sample frame with 30 clinicians and a target response of 20 is defensible. A sample frame that cannot be described is not.
Question design. Questions should be closed where possible (rating scales, yes/no, frequency categories), because closed questions are analysable. Open-text questions have a role for capturing unanticipated signals, but they are slow to analyse and cannot support quantitative statements. Every question should trace to one of the Annex XIV Part B objectives or to a specific clinical claim in the CER. Questions that do not trace to either should be cut.
Delivery and response collection. Specify how the survey reaches respondents (email, in-site paper form, secure web portal) and what the expected response window is. Document the reminder schedule. Document the data handling process under GDPR, including whether the responses are pseudonymised or anonymised, and who has access.
Response-rate target and interpretation rules. Set a minimum response rate below which the results will be flagged as non-representative. A 9 percent response rate on a clinician survey does not support confident clinical conclusions and should not be reported in the PMCF evaluation report as if it did. The protocol says in advance what will be done if the response rate misses the target: extend, reissue, or report as qualitative signal only.
Analysis plan. Write the analysis plan before the data arrives. Specify which questions will be tabulated, which statistics will be computed, how subgroup analyses will be handled, and how open-text responses will be coded. Decision rules that are invented after the data arrives are not credible.
Acceptance criteria. State in advance what kinds of findings will trigger a CER update, a risk file update, a labelling change, or a corrective action. Pre-specified criteria are the difference between a survey that closes the PMCF loop and a survey that produces a narrative nobody acts on.
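The response-rate element above can be made concrete as a pre-specified decision rule. The sketch below is purely illustrative; the function name, the 50 percent target, and the fallback wording are hypothetical examples, not values the Regulation prescribes. The point is that the rule exists in code-like precision before any response arrives.

```python
# Illustrative sketch: a pre-specified response-rate decision rule,
# written into the protocol before the data arrives. The target rate
# and fallback actions are hypothetical examples.

def response_rate_decision(responses: int, sample_frame: int,
                           target_rate: float = 0.50) -> dict:
    """Apply the protocol's pre-specified interpretation rule."""
    if sample_frame <= 0:
        raise ValueError("sample frame must be defined and non-empty")
    rate = responses / sample_frame
    if rate >= target_rate:
        action = "analyse per analysis plan"
    else:
        # Pre-specified fallback: extend the window, reissue, or
        # downgrade the findings to a qualitative signal.
        action = "report as qualitative signal only; extend response window"
    return {"response_rate": round(rate, 3),
            "target": target_rate,
            "action": action}

# 22 of 30 clinicians responded: above target, analyse as planned.
print(response_rate_decision(responses=22, sample_frame=30))
```

Writing the rule this way forces the two decisions the protocol must contain: the threshold itself and what happens when it is missed.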
When registries fit
A registry is a structured, ongoing data collection system that captures pre-defined fields on devices, patients, or conditions across many centres over long periods. Annex XIV Part B explicitly names the evaluation of suitable registers as a specific PMCF method, and for some device categories (orthopaedic implants, cardiovascular devices, long-term monitoring devices), participation in an established registry is the cheapest and strongest way to satisfy a large share of the PMCF objectives.
Registries fit when the research question is longitudinal. If the question is "how does the device perform at 3, 5, and 10 years of real-world use?" a registry is almost certainly the right instrument. Registries also fit when the question concerns rare events that no startup could detect in its own cohort. The pooled sample across many centres makes the detection statistically possible. And registries fit when the device category is one where registry participation is already the regulatory expectation; for some implantables, notified bodies expect registry data as a baseline.
Registries do not fit when the research question is about acute performance, usability, or signals that emerge within weeks of use. They also do not fit when no suitable registry exists and the manufacturer does not have the resources to create one. The economics of building a registry from scratch are outside the reach of most startups and belong, if at all, in later phases of company growth.
Joining versus creating a registry
For startups, the practical question is almost always joining an existing registry, not creating one. Creating a registry requires governance, a scientific board, data standards, a hosting platform, ethics approvals across participating sites, long-term funding, and a sustained organisational commitment measured in years. These conditions rarely exist in a five-person company running on a seed round.
Joining an existing registry means identifying registries relevant to the device category, evaluating whether the registry collects the fields needed to answer the PMCF questions, negotiating access terms with the registry operator, and integrating the registry data flow into the PMCF evaluation process. The cost is a participation fee plus the internal effort of preparing data submissions in the registry's format and consuming the aggregated outputs for the PMCF evaluation report.
Evaluating a candidate registry means asking specific questions. Does the registry cover the device, the condition, or the patient population of interest? Does it collect the endpoints that map to the open clinical questions in the CER? Is the data quality documented, with audit trails and completeness reporting? What is the expected follow-up duration for patients in the registry? What are the data-access terms: does the manufacturer receive patient-level data, aggregated summaries, or reports on demand? How are adverse events captured and linked to devices? Is the registry recognised by notified bodies in the jurisdictions where the device is marketed?
A registry that cannot answer these questions with documentation is not a PMCF-grade source. Joining a weak registry and citing it in the PMCF plan is worse than not citing a registry at all, because the gap between the claim and the reality is visible at the first surveillance audit.
The data quality bar. What auditors actually look for
Data quality is where PMCF surveys and registries are judged at audit. Both methods can produce defensible clinical evidence or useless noise, depending on the discipline of the design.
For surveys, the data quality bar has five elements. A defined sample frame. A minimum response rate threshold that was set in advance and is actually met. A pre-specified analysis plan. Documented handling of missing data and partial responses. An audit trail linking raw responses to the analysed dataset and the conclusions in the PMCF evaluation report. Surveys that fail on any of these elements are flagged at audit as weak evidence, and conclusions built on them in the CER will be challenged.
For registries, the data quality bar has four elements. Documented data quality controls at the registry level (completeness reporting, audit mechanisms, validation rules). Documented linkage between the manufacturer's device and the registry records, so the data actually corresponds to the device under evaluation. Follow-up duration matched to the clinical questions; a registry that only captures six-month outcomes cannot answer a ten-year durability question. And documented contribution of the registry data to the PMCF evaluation report, with the specific fields used, the period analysed, and the conclusions drawn.
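The registry-fit checks above can be sketched as a simple gap audit run before a registry is cited in the PMCF plan. Everything in this sketch is a hypothetical example: the required field names, the five-year follow-up floor, and the 90 percent completeness threshold all stand in for whatever the actual PMCF questions demand.

```python
# Illustrative sketch: auditing a candidate registry against the PMCF
# plan's requirements before citing it as evidence. All field names
# and thresholds are hypothetical placeholders.

REQUIRED_FIELDS = {"device_id", "implant_date", "outcome_score", "adverse_event"}
MIN_FOLLOWUP_YEARS = 5  # set by the clinical question, not the registry default

def registry_fit(collected_fields: set, followup_years: float,
                 completeness: dict, min_completeness: float = 0.9) -> list:
    """Return a list of documented gaps; an empty list means the registry fits."""
    gaps = []
    missing = REQUIRED_FIELDS - collected_fields
    if missing:
        gaps.append(f"missing fields: {sorted(missing)}")
    if followup_years < MIN_FOLLOWUP_YEARS:
        gaps.append(f"follow-up {followup_years}y < required {MIN_FOLLOWUP_YEARS}y")
    for field, rate in completeness.items():
        if rate < min_completeness:
            gaps.append(f"{field} completeness {rate:.0%} below {min_completeness:.0%}")
    return gaps

# A registry with all required fields, 10-year follow-up, and high
# completeness produces no gaps; a thin registry produces a gap list
# that belongs in the registry evaluation record.
print(registry_fit({"device_id"}, 2.0, {"outcome_score": 0.5}))
```

The output of a check like this is exactly the documentation an auditor asks for: the gaps were identified, recorded, and either resolved or justified before the registry entered the PMCF plan.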
Where either method qualifies as a clinical investigation under MDR (for instance, a survey-based study with additional procedures, or a registry-embedded study with prospective protocol-defined data collection), EN ISO 14155:2020 + A11:2024 applies to the relevant aspects, and the data quality bar rises accordingly. See PMCF study design: prospective studies for post-market data collection for the full study-grade treatment.
Integration into the PMCF report
Surveys and registries do not produce PMCF evidence by existing. They produce PMCF evidence by being analysed and integrated into the PMCF evaluation report that closes the loop under Article 61(11).
For a survey, integration means summarising the protocol, the response rate achieved, the analysis performed, the findings against the pre-specified questions, and the conclusions against the acceptance criteria. The PMCF evaluation report states whether the survey findings confirm the clinical performance claims in the CER, whether any previously unknown signals emerged, whether the findings trigger any updates, and what the manufacturer has done in response. A survey that runs and is not integrated into the report has not produced PMCF evidence. It has produced data in a drawer.
For a registry, integration means summarising the registry's relevance to the device, the period of data used, the fields analysed, the findings, and the conclusions. Where the registry produces periodic reports, those reports are cited and their relevant content extracted into the PMCF evaluation report narrative. Where the registry provides patient-level data, the analysis performed by the manufacturer is described and the results reported.
In both cases, the PMCF evaluation report feeds the CER update, the risk file update if applicable, and the PSUR for Class IIa and above. For the cadence and mechanics, see post 181 on the PMCF pillar and post 187 on the PMCF evaluation report.
Common mistakes
Six mistakes recur across startup PMCF survey and registry programmes. Avoid them in the design phase and the execution burden drops.
Running a survey with no response-rate target. Without a pre-specified target, any response rate becomes acceptable by default, and the resulting evidence is indefensible. Set the target in the protocol.
Writing survey questions that do not trace to Annex XIV Part B. Every question should tie to a specific objective or to a specific CER claim. Questions that do not trace are filler and should be cut.
Claiming registry participation without documentation. A PMCF plan that names a registry without evidence of a participation agreement, a data flow, or a contribution record will be challenged at audit. Document the relationship.
Joining a registry that does not collect the right fields. A registry that does not capture the endpoints the PMCF questions require cannot answer those questions regardless of its size or reputation. Evaluate fit before committing.
Skipping the analysis plan on surveys. An analysis plan invented after the data arrives is not credible. Write it before the data arrives.
Treating registry output as sufficient without manufacturer-side analysis. Registry reports are inputs, not substitutes, for the PMCF evaluation report. The manufacturer must analyse the registry data against the PMCF plan objectives and the CER claims, not simply cite the registry's own conclusions.
The Subtract to Ship angle. Lean low-cost PMCF that closes the loop
The Subtract to Ship framework for MDR applied to surveys and registries produces a short rule. For each survey question and for each registry field, trace it to a specific Annex XIV Part B objective and to a specific CER claim or risk file entry. If the trace does not exist, cut the question or drop the field from the analysis. For each open clinical question the PMCF plan must answer, choose the lightest method that can credibly answer it: survey, registry, literature surveillance, or a combination.
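The tracing rule is mechanical enough to sketch in a few lines. The question texts and objective labels below are invented examples; the only real content is the rule itself: no trace, no question.

```python
# Illustrative sketch of the Subtract to Ship tracing rule: every survey
# question must map to an Annex XIV Part B objective or a CER claim,
# or it is cut. Question texts and trace labels are invented examples.

questions = {
    "Q1": {"text": "Did the device perform as expected in routine use?",
           "trace": "Annex XIV B: confirm clinical performance"},
    "Q2": {"text": "Have you observed any off-label use patterns?",
           "trace": "Annex XIV B: identify systematic misuse or off-label use"},
    "Q3": {"text": "How satisfied are you with our sales support?",
           "trace": None},  # no clinical trace: cut before the survey ships
}

kept = {qid: q for qid, q in questions.items() if q["trace"]}
cut = [qid for qid, q in questions.items() if not q["trace"]]
print(f"kept: {sorted(kept)}, cut: {cut}")  # → kept: ['Q1', 'Q2'], cut: ['Q3']
```

A trace table like this, kept alongside the survey protocol, is also the fastest answer to the auditor's question of why each question exists.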
A lean low-cost PMCF programme for a Class IIa startup device might combine a structured quarterly clinician survey with a pre-specified protocol and response-rate target, participation in a disease registry with documented data flow and annual contribution, structured literature surveillance on a defined cadence, and an annual PMCF evaluation report that integrates all three and feeds the CER update. That programme is runnable on a small budget. Every element traces to Annex XIV Part B. The file that survives is the file that runs.
Reality Check. Where do you stand?
- For each survey in your PMCF plan, can you name the research question and the specific Annex XIV Part B objective each question addresses?
- Does the survey protocol state a response-rate target and a decision rule for what happens if the target is missed?
- Is the survey analysis plan written before the data arrives, or will it be improvised afterwards?
- For each registry in your PMCF plan, do you have a documented participation agreement and a defined data flow?
- Does the registry collect the endpoints your PMCF questions require, with follow-up duration matched to the clinical questions?
- Is there a written linkage between the manufacturer's device and the registry records, so the data corresponds to the device under evaluation?
- Does your PMCF evaluation report integrate survey and registry findings against the pre-specified acceptance criteria, or simply restate what the data shows?
- When a survey finding or a registry signal triggers an action, is there a documented path into the CER, the risk file, and the PSUR?
Frequently Asked Questions
Can a clinician survey alone satisfy PMCF obligations for a Class IIa device?
For some low-risk Class IIa devices with narrow clinical claims and well-characterised risk profiles, a structured clinician survey combined with literature surveillance and similar-device monitoring can be sufficient PMCF evidence. The judgement depends on the specific open clinical questions in the CER and the residual uncertainty. For novel devices or devices with broader clinical claims, a survey alone is usually not enough.
Is participation in a non-European registry acceptable as PMCF evidence?
It can be, if the registry captures data relevant to the European device population and the PMCF plan justifies the relevance. Notified bodies will scrutinise the argument that non-European data is representative of the European use context. The registry data has to be evaluated for population, clinical practice patterns, and regulatory context to confirm its usefulness for the PMCF questions.
What response rate is acceptable for a PMCF clinician survey?
There is no fixed threshold in the Regulation. The response rate has to be set in the protocol against the research question and the sample frame. For a small clinician population, a response rate below 50 percent is typically hard to defend as representative; for very large populations, lower rates can be acceptable if the analysis addresses non-response bias. The rule is to set the target in advance and report honestly against it.
Do PMCF surveys require informed consent?
Surveys that collect identifiable patient data require informed consent and compliance with GDPR. Clinician surveys that do not collect patient data usually do not require patient consent, but the clinician's participation is voluntary and should be documented. Where the survey qualifies as a clinical investigation (for example, because it collects patient data linked to device use), the EN ISO 14155:2020 + A11:2024 informed consent requirements apply.
Can a manufacturer create its own registry for its own device?
Technically yes, but the evidentiary weight of a single-manufacturer registry is lower than that of an independent multi-manufacturer registry, because the independence and the cross-device comparison are part of what makes registry data valuable. Single-manufacturer data collection is usually better described as post-market data collection or real-world evidence collection, not as a registry, and the PMCF plan should use the accurate term.
How do survey and registry findings feed the PSUR?
The PMCF evaluation report is the document that integrates survey and registry findings against the PMCF plan objectives. For Class IIa, IIb, and III devices, the PMCF evaluation report is one of the inputs that populates the Periodic Safety Update Report. The PSUR summarises the main PMCF findings, and the underlying survey and registry analyses sit in the PMCF evaluation report and the technical documentation behind it.
Related reading
- Post-market surveillance under MDR. What startups actually need to do – the broader PMS system the PMCF methods sit inside.
- Post-market clinical follow-up (PMCF) under MDR. A guide for startups – the pillar post on PMCF as a whole.
- How to write a PMCF plan under MDR Annex XIV Part B – the plan that a survey or registry is one method inside.
- PMCF methods for startups – the methods-by-class breakdown for choosing among study, registry, surveys, literature, and real-world data.
- PMCF study design: prospective studies for post-market data collection – the heavier design when a survey or registry is not enough.
- When is PMCF not required under MDR – the non-applicability decision framework.
- The PMCF evaluation report under MDR – the step-by-step guide to the report that closes the loop.
- PMCF for software as a medical device – when real-world data from the software replaces surveys and registries.
- Minimum viable PMCF for Class IIa startups – the lean PMCF implementation walkthrough.
- The Subtract to Ship framework for MDR compliance – the methodology applied across every MDR chapter, including low-cost PMCF methods.
Sources
- Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 61(11) (PMCF update of clinical evaluation) and Annex XIV Part B (post-market clinical follow-up). Official Journal L 117, 5.5.2017.
- MDCG 2025-10. Guidance on post-market surveillance of medical devices and in vitro diagnostic medical devices. Medical Device Coordination Group, December 2025.
- EN ISO 14155:2020 + A11:2024. Clinical investigation of medical devices for human subjects. Good clinical practice.
This post is part of the Post-Market Surveillance & Vigilance series in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. Surveys and registries are where lean PMCF lives for most startups, and the discipline of the design is what separates an audit-ready low-cost programme from a paper exercise that collapses on first contact with a notified body.