"Sufficient clinical evidence" is the phrase that haunts every MedTech regulatory professional. MDR uses the word "sufficient" deliberately — it does not prescribe a specific number of patients, a specific study design, or a specific volume of literature. "Sufficient" is context-dependent, device-dependent, and risk-dependent.

This flexibility is intentional. MDR covers everything from simple bandages to AI-driven implantable devices. A one-size-fits-all evidence threshold would be either absurdly high for low-risk devices or dangerously low for high-risk ones.

But for startups, "sufficient" can feel like a moving target. How much is enough? What does the Notified Body actually expect? When will they say "this is not sufficient" and what does that mean for your timeline?

Tibor brings a unique perspective to this question. As a Notified Body lead auditor, he has reviewed clinical evidence packages for dozens of devices. He has said "sufficient" and he has said "not sufficient." The patterns behind those judgments are learnable, and that is what this post is about.

The Framework for "Sufficient"

Sufficiency is not a gut feeling. It is assessed against specific criteria:

1. Does the Evidence Cover All Claims?

Every claim you make about your device — its safety, its performance, its clinical benefits — must be supported by clinical evidence. "Sufficient" starts here: if you claim your diagnostic device has 95% sensitivity, there must be clinical data supporting that number. If you claim your therapeutic device reduces pain scores, there must be clinical data demonstrating that reduction.

The GSPR checklist maps your claims to requirements. The clinical evaluation maps evidence to claims. If any claim is unsupported, the evidence is not sufficient — regardless of how strong the rest of the evidence is.
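The claim-coverage test above can be sketched as a toy traceability check. Every claim text and citation below is invented for illustration; the real mapping lives in your GSPR checklist and CER, not in code:

```python
# Toy sketch of criterion 1: every claim needs at least one evidence source.
# All claim texts and citations below are hypothetical examples.

claims_to_evidence = {
    "sensitivity >= 95%": ["hypothetical prospective study, 2021"],
    "reduces pain scores": ["hypothetical RCT, 2020",
                            "hypothetical registry analysis, 2022"],
    "safe for 30-day continuous wear": [],  # no supporting data yet
}

unsupported = [claim for claim, sources in claims_to_evidence.items()
               if not sources]

# One unsupported claim is enough to make the evidence "not sufficient",
# regardless of how strong the rest of the package is.
evidence_sufficient = not unsupported
print(unsupported)          # ['safe for 30-day continuous wear']
print(evidence_sufficient)  # False
```

The point of the sketch is the all-or-nothing logic: sufficiency fails on the weakest claim, not on the average.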

2. Is the Evidence Quality Adequate?

Not all clinical data is created equal. A randomized controlled trial published in a peer-reviewed journal carries more weight than a case series published in a conference abstract. A prospective clinical investigation with predefined endpoints is stronger than a retrospective chart review.

The clinical evaluation must appraise each piece of evidence for its methodological quality, relevance, and potential for bias. High-quality evidence in small quantities can be more persuasive than large volumes of low-quality evidence.

Tibor's observation: "I have seen CERs that cite 200 papers but when you look at the actual quality and relevance of those papers, only 15-20 are directly relevant and of sufficient methodological quality to support the device's claims. The other 180 are padding. A good CER identifies and focuses on the evidence that matters."

3. Does the Evidence Address the Target Population and Intended Use?

Clinical evidence must be relevant to your specific device, your specific intended purpose, and your specific target patient population. Evidence generated with a different patient population, a different clinical condition, or a different device may be informative but is not directly sufficient.

For example, if your device is intended for pediatric use, clinical evidence exclusively from adult populations is insufficient for the pediatric claim. You need pediatric-specific data — or a well-reasoned justification for why adult data can be extrapolated, supported by literature.

4. Is the Benefit-Risk Profile Explicitly Demonstrated?

Sufficiency is ultimately about the benefit-risk conclusion. The clinical evidence must be enough to determine that the benefits of the device outweigh the risks under the intended conditions of use.

This requires:

- Quantified (or at least characterized) clinical benefits
- Identified and quantified (or characterized) clinical risks
- An explicit comparison showing that benefits outweigh risks
- Comparison to the state of the art (alternative treatments/devices)

A collection of data without this synthesis is not "sufficient" because it has not answered the fundamental question.

5. Are the Gaps Identified and Addressed?

Sufficient does not mean perfect. It is acceptable — even expected — for some gaps to remain in the pre-market clinical evidence. What matters is that these gaps are:

- Explicitly identified in the CER
- Assessed for their impact on the benefit-risk conclusion
- Addressed through a post-market clinical follow-up (PMCF) plan

A CER that claims to have no gaps is either for a very well-established device or is dishonest. Notified Body auditors know this and will probe for gaps that the CER does not acknowledge.

What the Auditor Actually Does

When a Notified Body auditor reviews your clinical evidence, they follow a structured assessment:

Step 1: Review the Clinical Evaluation Plan

The auditor starts with your Clinical Evaluation Plan (CEP). Is the scope defined? Are the clinical safety and performance endpoints clear? Is the search strategy for literature adequate? Is the equivalence rationale (if applicable) sound? Is the data appraisal methodology defined?

If the plan is weak, the auditor already expects the execution to be weak. A strong CEP signals a competent team.

Step 2: Examine the Literature Search

The auditor checks:

- Were appropriate databases searched (PubMed/MEDLINE, Embase, Cochrane, etc.)?
- Were the search terms comprehensive and relevant?
- Were inclusion and exclusion criteria predefined and reasonable?
- Was the search documented and reproducible?
- Were enough relevant sources identified?

A search that only uses PubMed with basic keywords may miss relevant evidence. A search that casts too wide a net may include irrelevant data that dilutes the analysis.
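As a rough illustration of what "documented and reproducible" means in practice, a search protocol can be captured as structured data rather than prose. The databases, query, criteria, and hit counts below are all invented:

```python
# Hedged sketch: recording a literature search so a reviewer can re-run it.
# Every value here is illustrative, not a real search protocol.

search_record = {
    "databases": ["PubMed/MEDLINE", "Embase", "Cochrane"],
    "query": '("pulse oximetry" OR "SpO2 monitoring") AND (accuracy OR safety)',
    "date_run": "2024-05-01",
    "inclusion": ["human subjects", "English language", "published 2009-2024"],
    "exclusion": ["case reports with n < 5", "animal studies"],
    "hits_per_database": {"PubMed/MEDLINE": 412, "Embase": 389, "Cochrane": 27},
}

# A reviewer can re-run the same query over the same date range and compare
# counts: that is the practical test of "documented and reproducible".
total_hits = sum(search_record["hits_per_database"].values())
print(total_hits)  # 828
```

The predefined inclusion/exclusion criteria matter as much as the query itself: they are what prevents post-hoc cherry-picking of favorable papers.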

Step 3: Evaluate Data Appraisal

For each piece of clinical data included in the CER, the auditor checks:

- Was the methodological quality assessed?
- Was the relevance to the device assessed?
- Were potential biases identified?
- Was each data source weighted appropriately in the analysis?

Auditors are trained to spot CERs that uncritically accept all published data without quality assessment. This is a red flag.

Step 4: Scrutinize the Equivalence Claim (If Applicable)

If you claim equivalence to another device, the auditor will rigorously assess all three dimensions (technical, biological, clinical). They will look for:

- Detailed comparison tables showing feature-by-feature equivalence
- Justification for any differences and their clinical significance
- For implantable and Class III devices, evidence of access to the equivalent device's data, typically a contract with its manufacturer under Article 61(5)

Weak equivalence claims are one of the most common reasons for clinical evidence rejection.

Step 5: Review the Benefit-Risk Analysis

The auditor checks whether the clinical evidence actually supports the benefit-risk conclusion. Is the conclusion traceable to the data? Are the benefits and risks based on evidence rather than assertion? Is the comparison to alternatives fair and complete?

Step 6: Assess the PMCF Plan

Is the PMCF plan designed to address the specific gaps and uncertainties identified in the CER? Or is it a generic plan that could apply to any device? Auditors want to see a PMCF plan that is directly linked to the clinical evaluation findings.

Common Reasons for "Insufficient Clinical Evidence" Findings

Based on Tibor's experience, here are the most common reasons Notified Bodies find clinical evidence insufficient:

1. No Clinical Evaluation at All

Yes, this happens. Startups submit technical documentation without a clinical evaluation, assuming that bench testing and verification data are sufficient. They are not. Article 61 is explicit — clinical data is required.

2. Literature Search Too Narrow

A literature search that uses only one database, applies overly restrictive search terms, and identifies only a handful of papers. The auditor cannot be confident that the evidence base has been adequately explored.

3. Weak or Unsupported Equivalence Claim

Claiming equivalence based on surface-level similarity ("both devices are pulse oximeters") without demonstrating detailed technical, biological, and clinical equivalence. The auditor rejects the equivalence, and the borrowed clinical data becomes inadmissible.

4. Claims Not Supported by Evidence

The device's IFU or marketing claims include performance characteristics (accuracy, sensitivity, specificity, clinical outcomes) that are not supported by the cited clinical data. The evidence addresses safety but not performance, or vice versa.

5. Benefit-Risk Conclusion Not Supported

The CER concludes that benefits outweigh risks, but the evidence does not actually demonstrate the claimed benefits quantitatively. Or the risk analysis is incomplete — known risks are not addressed.

6. No PMCF Plan or Generic PMCF Plan

A missing or generic PMCF plan signals that the manufacturer has not thought critically about what remains unknown about their device's clinical performance.

7. Outdated Evidence

Clinical evidence that does not reflect the current state of the art. If your CER relies on literature that is 15 years old and newer evidence exists, the auditor will question whether the evidence reflects current clinical practice.

How to Build Evidence That Passes

Start with the Clinical Evaluation Plan

Write your CEP before you write your CER. The plan defines the strategy. A well-written CEP with clear endpoints, defined search methods, and explicit equivalence criteria (if applicable) makes the CER easier to write and easier for the auditor to assess.

Be Systematic in Literature Searching

Use multiple databases. Use comprehensive search terms. Define inclusion/exclusion criteria upfront. Document everything so the search is reproducible. This is not optional — MEDDEV 2.7/1 rev 4 provides detailed guidance on literature search methodology, and MDCG 2020-13 shows how Notified Body assessors will evaluate it.

Appraise Evidence Honestly

Do not include weak evidence just to inflate numbers. Assess each source critically. Exclude irrelevant or low-quality sources with documented justification. The strength of your CER is determined by the quality of its evidence, not the quantity.

Make the Benefit-Risk Explicit

Do not bury the benefit-risk conclusion in a paragraph of text. Make it explicit — ideally in a structured table or framework. State the benefits, state the risks, state the overall conclusion, and trace each element to the supporting evidence.
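A minimal sketch of such a structure, with every entry invented for illustration (a real benefit-risk determination requires clinical judgment, not code):

```python
# Hedged sketch of an explicit, evidence-traced benefit-risk summary.
# All benefits, risks, and citations are hypothetical examples.

benefits = [
    {"benefit": "pain reduction of 2 points (VAS)",
     "evidence": "hypothetical RCT, 2020"},
    {"benefit": "shorter recovery time",
     "evidence": "hypothetical registry analysis, 2022"},
]
risks = [
    {"risk": "transient skin irritation in ~3% of patients",
     "evidence": "hypothetical RCT, 2020"},
]

# The structural rule: every row traces to a named evidence source.
# No assertion without data.
assert all(row["evidence"] for row in benefits + risks)
print(len(benefits), len(risks))  # 2 1
```

Whether you use a table in the CER or a structured appendix, the principle is the same: benefits, risks, and the overall conclusion each point back to a specific data source.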

Plan Your PMCF Proactively

Your PMCF plan should be a natural extension of your CER. Every gap, every uncertainty, every unanswered question in the CER should map to a specific PMCF activity designed to address it.

Engage Clinical Expertise

Writing a CER that passes Notified Body review requires clinical knowledge, regulatory knowledge, and scientific writing skills. If this expertise is not on your team, engage a medical writer or clinical evaluation specialist with experience in MDR CERs.

The Auditor's Mindset

Understanding how auditors think helps you prepare:

Auditors are trained to identify risk. Their primary concern is patient safety. Evidence that demonstrates safety comprehensively will always be viewed more favorably than evidence that focuses only on performance.

Auditors value honesty over completeness. A CER that honestly identifies its gaps and proposes a plan to address them is viewed more favorably than a CER that claims to have no gaps.

Auditors follow the guidance. MDCG guidance documents define the expectations. MDCG 2020-13 is the template Notified Body assessors use to document their review of a clinical evaluation; a CER written with that assessment framework in mind meets the auditor on their own terms. Deviations from the guidance need justification.

Auditors compare across companies. An experienced auditor has reviewed dozens or hundreds of CERs. They know what good looks like. A well-structured, well-evidenced CER stands out — and so does a poorly prepared one.

Tibor summarizes it: "Let me be honest — when I open a CER and I see a clear structure, a systematic approach, honest identification of gaps, and a logical conclusion tied to the data, I know I am working with a competent team. When I see a disorganized collection of literature citations with a vague conclusion that benefits outweigh risks, I know I am going to find problems. The clinical evaluation is the window into a company's regulatory maturity."

The Bottom Line

"Sufficient clinical evidence" under MDR is not a fixed threshold — it is a judgment that depends on your device, its risk class, its novelty, and the quality of the evidence you present. But the judgment is not arbitrary. It follows a structured assessment that Notified Body auditors are trained to perform.

For startups, the path to sufficient clinical evidence starts with a strong Clinical Evaluation Plan, continues through rigorous literature search and data appraisal, and culminates in an honest benefit-risk conclusion with a clear PMCF plan for remaining gaps.

The companies that treat clinical evaluation as a core competency — not an afterthought — are the ones that get through the Notified Body review smoothly.