The ten clinical evaluation mistakes below show up in almost every MDR startup audit we see: a missing or weak clinical evaluation plan, a literature search run without a protocol, an equivalence claim that ignores the three pillars required by Annex XIV, no data appraisal, state of the art not addressed, an unsupported benefit-risk conclusion, a missing PMCF plan, Article 61(10) proportionality ignored, clinical data that does not match the intended purpose, and a CER that never gets updated when post-market signals come in. Each one is anchored to the specific MDR article or annex it violates, and each one has a concrete fix.
By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.
TL;DR
- The most common clinical evaluation mistake is not a bad CER. It is the absence of a clinical evaluation plan (CEP) before the work starts. Annex XIV Part A, Section 1(a) makes the plan mandatory.
- Equivalence under MDR is not what it was under the MDD. MDCG 2020-5 requires technical, biological, and clinical equivalence to be demonstrated together, with sufficient access to the comparator device data per Article 61(5) and MDCG 2023-7.
- A literature search without a documented protocol is an automatic finding. The protocol, the databases, the search strings, and the appraisal criteria all belong in the CEP before any search is run.
- Clinical evaluation and post-market surveillance are one loop, not two departments. A CER that is not updated when Article 83 PMS signals come in is non-compliant under Article 61(11).
- The Graz device that saved EUR 400–500K in clinical investigation costs did so by checking for existing standards and literature first. Most startups skip that check and pay the price.
Why clinical evaluation is where startups bleed
Clinical evaluation is the single largest source of cost overruns and Notified Body findings for MedTech startups. It is also the area where founders most often believe they are doing the right thing while quietly violating the Regulation. Unlike a missing signature on a declaration of conformity, clinical evaluation mistakes are rarely visible until the Notified Body reviewer opens the CER, at which point the founder has already spent months and tens of thousands of euros on work that does not hold up.
Tibor has reviewed hundreds of clinical evaluation reports as a Notified Body Lead Auditor. The same ten mistakes appear again and again. They are not exotic. They are not edge cases. They are the baseline pattern. If you are preparing a CER for the first time, reading this list before you start will save you more money than any other single thing you can do this week.
Each mistake below is named, anchored to the article or annex it violates, and paired with the fix.
Mistake 1: No clinical evaluation plan before the work starts
What happens: The startup jumps straight to the clinical evaluation report. There is no upstream plan defining scope, questions, methods, appraisal criteria, or acceptance criteria. The report is written around whatever evidence the team happened to find.
What MDR says: Annex XIV Part A, Section 1(a) requires the manufacturer to establish and update a clinical evaluation plan. The plan must define the intended purpose, intended target groups, intended clinical benefits with specific measurable outcomes, methods for the assessment of qualitative and quantitative aspects of clinical safety, the parameters to determine the acceptability of the benefit-risk ratio, and how benefit-risk issues relating to specific components will be addressed. MDR Article 61(3) makes the plan mandatory for all devices.
The fix: Write the clinical evaluation plan first. Before any literature search, before any equivalence analysis, before any draft CER. The plan is the contract the evaluator signs with themselves. Without it, everything downstream is unauditable.
Mistake 2: Literature search run without a documented protocol
What happens: Someone runs a PubMed search, exports the results, reads the abstracts, picks the ones that look relevant, and writes them into the CER. No documented databases, no search strings, no inclusion or exclusion criteria, no PRISMA-style flow, no appraisal.
What MDR says: Annex XIV Part A, Section 1(c) requires the manufacturer to identify available clinical data relevant to the device and its intended purpose through a systematic scientific literature review. Section 1(d) requires appraisal of all relevant clinical data by evaluating their suitability. A search without a documented protocol cannot be systematic, and what cannot be systematic cannot be appraised.
The fix: The literature search protocol is a section of the CEP, not a paragraph in the CER. It specifies the databases searched (Medline, Embase, Cochrane, others), the search strings by device characteristic and clinical claim, the date range, language filters, inclusion criteria, exclusion criteria, the appraisal rubric, and the flow from total hits to included studies. Run the protocol, document each step, keep the raw exports.
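To make the idea concrete, the flow from total hits to included studies can be tracked as structured data rather than left implicit. This is an illustrative sketch only: the field names, database list, and counts below are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class SearchProtocol:
    """Hypothetical record of a literature search protocol (belongs in the CEP)."""
    databases: list          # e.g. ["Medline", "Embase", "Cochrane"]
    search_strings: list     # one per device characteristic / clinical claim
    date_range: tuple        # (start_year, end_year)
    inclusion_criteria: list
    exclusion_criteria: list

def prisma_flow(total_hits: int, duplicates: int,
                excluded_on_abstract: int, excluded_on_full_text: int) -> dict:
    """Reproduce the PRISMA-style flow from raw hits to included studies,
    documenting every step, not just the final count."""
    screened = total_hits - duplicates
    full_text = screened - excluded_on_abstract
    included = full_text - excluded_on_full_text
    return {"identified": total_hits, "screened": screened,
            "full_text_assessed": full_text, "included": included}

# Hypothetical numbers for illustration only
flow = prisma_flow(total_hits=412, duplicates=37,
                   excluded_on_abstract=301, excluded_on_full_text=59)
print(flow)
```

The point is reproducibility: with the protocol fields and the flow counts recorded, a reviewer (or you, a year later) can rerun the search and arrive at the same included set.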
Mistake 3: Equivalence claimed without the three pillars
What happens: The startup finds a similar device on the market, maps a few features across, and claims clinical equivalence in the CER. The comparator's clinical literature becomes the startup's clinical evidence.
What MDR says: Article 61(4)–(6) and Annex XIV Part A, Section 3 require equivalence to be demonstrated across three characteristics simultaneously: technical, biological, and clinical. MDCG 2020-5 (April 2020) is the definitive guidance and spells out what each pillar requires. For example, identical principles of operation and similar specifications on the technical side, same materials or substances in contact with the same tissues on the biological side, same clinical condition with same intended purpose at the same site in similar populations on the clinical side. For implantable and Class III devices, Article 61(5) additionally requires the manufacturer to have a contract in place giving sufficient access to the technical documentation of the equivalent device on an ongoing basis. MDCG 2023-7 (December 2023) clarifies what "sufficient levels of access" means.
The fix: Do not start with equivalence. Start with the question of whether your device even qualifies for an equivalence claim under MDCG 2020-5. Most do not. If it does, demonstrate all three pillars with evidence, not assertion, and for implantable or Class III, secure the Article 61(5) contract before you write the CER.
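The gating logic can be sketched as a simple decision check. This is an illustration of the structure of the claim, not a substitute for the MDCG 2020-5 analysis itself; the parameter names are hypothetical.

```python
def equivalence_claim_admissible(technical: bool, biological: bool, clinical: bool,
                                 device_class: str, has_art_61_5_contract: bool) -> bool:
    """All three pillars must hold together; a partial match is no match.
    For implantable or Class III devices, the Article 61(5) access
    contract is an additional precondition."""
    if not (technical and biological and clinical):
        return False
    if device_class in ("implantable", "III") and not has_art_61_5_contract:
        return False
    return True

# Two of three pillars is not "mostly equivalent" -- it is not equivalent.
assert not equivalence_claim_admissible(True, True, False, "IIa", False)
assert equivalence_claim_admissible(True, True, True, "III", True)
```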
Mistake 4: No data appraisal. All sources treated as equal
What happens: Fifteen papers go into the CER. A peer-reviewed RCT and a vendor-sponsored case series and a trade-journal opinion piece all carry the same weight in the conclusion.
What MDR says: Annex XIV Part A, Section 1(d) requires the manufacturer to appraise all relevant clinical data by evaluating their suitability for establishing the safety and performance of the device. Appraisal means weighting by methodological quality, relevance to the intended purpose, and scientific validity. Without appraisal, the CER's conclusion is not traceable to the underlying evidence.
The fix: Use an appraisal rubric in the CEP. For example, the framework from MEDDEV 2.7/1 Rev. 4, which remains a widely accepted methodological reference. Score each included paper on methodological quality and relevance, and let the weighted evidence, not the raw count of hits, drive the conclusion.
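The weighting idea can be sketched in a few lines. The scoring scale and field names here are hypothetical; the actual rubric belongs in the CEP and should follow an accepted framework such as MEDDEV 2.7/1 Rev. 4.

```python
def weighted_evidence(papers: list) -> float:
    """Weight each included paper by methodological quality and relevance
    (scored 0-3 each under a rubric fixed in the CEP), so the conclusion
    is driven by appraised weight, not by how many papers were found."""
    return sum(p["quality"] * p["relevance"] for p in papers)

# Hypothetical appraisal scores for three sources of unequal quality
rct = {"id": "RCT-1", "quality": 3, "relevance": 3}          # peer-reviewed RCT
case_series = {"id": "CS-1", "quality": 1, "relevance": 2}   # vendor case series
opinion = {"id": "OP-1", "quality": 0, "relevance": 1}       # trade-journal opinion

# One strong, relevant paper outweighs several weak ones under the rubric.
assert weighted_evidence([rct]) > weighted_evidence([case_series, opinion])
```

The exact formula matters less than the discipline: every included source carries a documented, differentiated weight, and the CER conclusion is traceable to those weights.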
Mistake 5: State of the art not addressed
What happens: The CER evaluates the device in isolation, compared only to itself. There is no discussion of alternative treatments, the current clinical standard of care, or the available therapeutic options the device is supposed to improve upon or replace.
What MDR says: Annex XIV Part A, Section 1(a) requires the clinical evaluation plan to include an indication of how current knowledge and the state of the art will be addressed. Article 61(1) requires that the clinical evaluation confirms conformity with the relevant general safety and performance requirements "under the normal conditions of the intended use of the device" and evaluates undesirable side effects and the acceptability of the benefit-risk ratio. And those determinations cannot be made in a vacuum. The benefit of a device is meaningful only against the alternative.
The fix: Dedicate a section of the CEP and the CER to state of the art: what the current clinical pathway looks like, what alternatives exist, how they perform, and where your device fits. Pull it from guidelines (ESC, NICE, equivalent), systematic reviews, and consensus statements. Keep it current. State of the art moves.
Mistake 6: Benefit-risk conclusion unsupported by the data
What happens: The CER's final paragraph asserts a positive benefit-risk ratio. The sentences above it do not actually demonstrate one. The quantified benefits are vague, the residual risks are underweighted, and the comparison to the state of the art is missing.
What MDR says: Article 61(1) requires the clinical evaluation to confirm the acceptability of the benefit-risk ratio referred to in Sections 1 and 8 of Annex I. Annex I General Safety and Performance Requirement 1 requires that any risks which may be associated with the use of the device constitute acceptable risks when weighed against the benefits to the patient, and that the overall residual risk is acceptable. Annex XIV Part A, Section 1(a) requires the plan to set measurable clinical benefits with specific outcomes and acceptability parameters for the benefit-risk ratio. A conclusion that does not quantify benefit against residual risk in those terms is unsupported.
The fix: Define measurable clinical benefits in the CEP (not at the end). Define acceptance criteria for the benefit-risk ratio before the evaluation runs. Tie the risk side to the risk management file per EN ISO 14971:2019+A11:2021. The final paragraph of the CER should then write itself because the work is already done.
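The acceptance-criteria logic is simple enough to state explicitly. The thresholds below are hypothetical placeholders; the real values come from the CEP and the risk management file.

```python
def benefit_risk_acceptable(measured_benefit: float, residual_risk: float,
                            min_benefit: float, max_risk: float) -> bool:
    """Compare measured outcomes against acceptance criteria fixed in the
    CEP *before* the evaluation ran. Both thresholds must hold: a large
    benefit does not excuse an out-of-bounds residual risk."""
    return measured_benefit >= min_benefit and residual_risk <= max_risk

# Hypothetical criteria, defined up front in the CEP
MIN_BENEFIT = 0.15   # e.g. minimum responder-rate improvement vs state of the art
MAX_RISK = 0.02      # e.g. maximum acceptable serious-adverse-event rate

assert benefit_risk_acceptable(0.22, 0.01, MIN_BENEFIT, MAX_RISK)
assert not benefit_risk_acceptable(0.30, 0.05, MIN_BENEFIT, MAX_RISK)  # risk out of bounds
```

Note the second case: benefit above threshold, risk above threshold, conclusion negative. That asymmetry is exactly what an unsupported final paragraph glosses over.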
Mistake 7: No PMCF plan. Or a PMCF plan that is a placeholder
What happens: The CER exists. The post-market clinical follow-up plan is either missing or reduced to a one-paragraph "we will monitor complaints and update this document annually."
What MDR says: Article 61(11) requires the clinical evaluation and its documentation to be updated throughout the life cycle of the device concerned with clinical data obtained from the implementation of the manufacturer's PMCF plan referred to in Part B of Annex XIV and the post-market surveillance plan referred to in Article 84. Annex XIV Part B sets out the PMCF plan requirements in detail: the general methods and procedures of PMCF, the specific methods and procedures of PMCF, a rationale for their appropriateness, a reference to relevant parts of the CER and risk management, the specific objectives, an evaluation of clinical data from similar devices, and reference to any relevant Common Specifications and harmonised standards. A one-paragraph PMCF plan does not meet this.
The fix: Treat the PMCF plan as a first-class deliverable, written in parallel with the CEP. Use Annex XIV Part B as the literal checklist. If you cannot justify why a specific PMCF method is or is not appropriate for your device, you do not have a PMCF plan yet.
Mistake 8: Article 61(10) proportionality claimed but not justified
What happens: The startup believes clinical data are not appropriate for the device, usually because it is low-risk or has a long track record in another form, and relies on Article 61(10). The CER references Article 61(10) in a sentence. There is no documented justification.
What MDR says: Article 61(10) permits, where demonstration of conformity with general safety and performance requirements based on clinical data is not deemed appropriate, any such exception to be based on the result of the manufacturer's risk management and on consideration of the specifics of the interaction between the device and the human body, the clinical performance intended and the claims of the manufacturer. The adequacy of demonstration of conformity based on non-clinical methods alone must be duly substantiated in the technical documentation. Article 61(10) does not apply to implantable devices and Class III devices except under the conditions of Article 61(4)–(6). Invoking 61(10) without documented substantiation is a finding waiting to happen.
The fix: If you are relying on Article 61(10), write the substantiation as a standalone section of the technical documentation, grounded in the risk management file and the non-clinical evidence. Be explicit about why clinical data are not appropriate. Be explicit about how the non-clinical methods cover the GSPRs. Expect the Notified Body to probe every sentence.
Mistake 9: Clinical data mismatched to the intended purpose
What happens: The CER pulls in a body of literature that is close to, but not matched to, the intended purpose. The comparator population differs. The indication differs. The measurement site differs. The conclusion is then transferred across the gap without acknowledging it.
What MDR says: Article 2(44) defines clinical evaluation as a systematic and planned process to continuously generate, collect, analyse and assess the clinical data pertaining to a device in order to verify the safety and performance, including clinical benefits, of the device when used as intended by the manufacturer. "When used as intended by the manufacturer" is the binding phrase. Clinical data that do not match the intended purpose are not clinical data for the purposes of Article 61. Annex XIV Part A, Section 1(c) requires identification of clinical data relevant to the device and its intended purpose.
The fix: Before any literature is included, write the intended purpose statement precisely: target population, clinical condition, site of use, user, duration. Match every included paper against that statement. Exclude what does not match, even if it is tempting. A smaller body of genuinely matched data is worth more than a large body of adjacent data.
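The matching step can be sketched as a strict filter against the intended purpose statement. The dimensions and example values below are hypothetical; the real statement comes from your CEP.

```python
# Hypothetical intended purpose statement, written precisely in the CEP
INTENDED_PURPOSE = {
    "population": "adults",
    "condition": "chronic wound",
    "site": "lower limb",
    "user": "healthcare professional",
    "duration": "short-term",
}

def matches_intended_purpose(paper: dict) -> bool:
    """Include a paper only if every dimension of the intended purpose
    matches; 'adjacent' data on a different population or site is excluded."""
    return all(paper.get(k) == v for k, v in INTENDED_PURPOSE.items())

near_miss = dict(INTENDED_PURPOSE, site="upper limb")  # close, but not matched
assert not matches_intended_purpose(near_miss)
assert matches_intended_purpose(dict(INTENDED_PURPOSE))
```

A real appraisal allows documented, justified deviations rather than a binary filter, but the default posture should be the strict one: mismatches are excluded unless you can argue otherwise in writing.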
Mistake 10: CER not updated on PMS signals
What happens: The CER is written, signed, filed. Six months later, the post-market surveillance system starts producing complaints, trends, or PMCF data. The CER sits untouched until the next scheduled review.
What MDR says: Article 61(11) requires the clinical evaluation and its documentation to be updated throughout the device life cycle with data from PMCF and PMS. Article 83 requires the manufacturer's PMS system to feed the clinical evaluation. For Class III and implantable devices, Article 61(11) specifies that the CER and, as applicable, the summary of safety and clinical performance referred to in Article 32 shall be updated at least annually with data from PMCF and PMS. For other classes, the update frequency is defined in the PMS plan and shall be proportionate. "Proportionate" does not mean "never."
The fix: Define CER update triggers in the PMS plan. Not just time-based, but signal-based. New complaint trends, PMCF interim results, vigilance events, changes to state of the art, new harmonised standard versions. Each trigger should route to a defined CER update procedure with a defined owner.
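A trigger table makes the routing auditable. The signal names and owner assignments below are hypothetical examples of what a PMS plan might define, not a prescribed list.

```python
# Hypothetical signal-based trigger table: each signal routes to the CER
# update procedure with a named owner, independent of the time-based schedule.
CER_UPDATE_TRIGGERS = {
    "complaint_trend": "clinical evaluator",
    "pmcf_interim_result": "clinical evaluator",
    "vigilance_event": "PRRC",
    "state_of_the_art_change": "clinical evaluator",
    "harmonised_standard_update": "regulatory affairs",
}

def route_signal(signal_type: str) -> str:
    """Return the owner responsible for initiating the CER update, or fail
    loudly if the PMS plan has no defined route for this signal type."""
    try:
        return CER_UPDATE_TRIGGERS[signal_type]
    except KeyError:
        raise ValueError(f"No CER update route defined for signal: {signal_type}")

assert route_signal("vigilance_event") == "PRRC"
```

The failure branch is deliberate: a signal with no defined route is itself a gap in the PMS plan, and it should surface as an error, not disappear silently.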
The Graz flip side: when existing standards save the investigation
The most expensive clinical evaluation mistake is not one of the ten above. It is the one that happens before any of them: not checking whether the clinical evidence problem has already been solved by an existing harmonised standard or a well-established body of literature.
A Graz company Tibor worked with was budgeting for a full clinical investigation under Articles 62–82 and EN ISO 14155:2020+A11:2024. The estimate landed between EUR 400,000 and EUR 500,000. Real money for a startup, and realistic given what clinical investigations cost under the updated 14155 framework. Before committing, they took one week to do what most founders skip: a structured check of whether the relevant GSPRs could be demonstrated through existing published clinical data, existing harmonised standard conformity, and a tightly scoped PMCF plan instead of a new prospective investigation.
The answer was yes. A clinical evaluation built on well-appraised existing literature, risk management per EN ISO 14971:2019+A11:2021, and a rigorous PMCF plan met Annex XIV Part A without a new clinical investigation. The savings were the EUR 400–500K they would otherwise have spent. Not because they cut corners. Because they checked the map before walking into the forest.
The mirror-image mistake is the one Tibor sees constantly: founders who commit to a clinical investigation because they never checked whether existing standards and literature already cover what they need. The fix is a one-week upfront check, run by someone who knows what they are looking at.
The Subtract to Ship approach to clinical evaluation
The subtract-first question for clinical evaluation is not "what evidence should we generate?" It is "what is the minimum clinical evidence stack that satisfies Article 61 and Annex XIV for this specific intended purpose, and what can we safely remove?" The answer almost always starts with three moves:
- Tighten the intended purpose until it is precise enough to match a defensible body of existing clinical data. Wide intended purposes multiply the evidence burden.
- Check existing harmonised standards and published literature before scoping any new clinical investigation. EN ISO 14155:2020+A11:2024 applies if an investigation is needed. And it is the floor, not the ceiling, of cost.
- Build the CEP, the CER, and the PMCF plan as one integrated document family, owned by one person, updated together. Fragmenting them across teams or consultants is how the ten mistakes above happen.
None of this is corner-cutting. All of it is the MDR applied with discipline. Annex XIV rewards discipline. It punishes improvisation.
Reality Check. Where do you stand?
- Do you have a written clinical evaluation plan that predates your clinical evaluation report, with acceptance criteria for the benefit-risk ratio?
- Is your literature search protocol documented with databases, search strings, and an appraisal rubric, and can you reproduce the search today?
- If you are claiming equivalence, can you produce evidence for all three pillars (technical, biological, clinical) and, if implantable or Class III, the Article 61(5) contract?
- Does your CER contain an explicit state-of-the-art section tied to current clinical guidelines?
- Does your benefit-risk conclusion quantify benefit against residual risk using the acceptance criteria from the CEP?
- Is your PMCF plan built against Annex XIV Part B as a checklist, or is it a placeholder paragraph?
- If you are relying on Article 61(10), is the substantiation a standalone documented section of the technical documentation?
- Does every included paper in the CER match your precise intended purpose: population, condition, site, user, duration?
- Does your PMS plan define signal-based triggers that update the CER, not just a time schedule?
- Did you check for existing standards and literature before scoping any new clinical investigation?
If you answered "no" or "not sure" to three or more, your CER is unlikely to survive Notified Body review in its current form. Fix the plan before you fix the report.
Frequently Asked Questions
What is the single most common clinical evaluation mistake? The absence of a clinical evaluation plan before the CER is written. Annex XIV Part A, Section 1(a) makes the plan mandatory, and its absence makes everything downstream unauditable. This mistake shows up in the majority of first-time startup CERs Tibor reviews.
Can a startup still use equivalence under MDR? Yes, but the bar is higher than under the MDD. MDCG 2020-5 requires all three pillars (technical, biological, and clinical) to be demonstrated, and for implantable and Class III devices, Article 61(5) requires a contract giving ongoing access to the comparator's technical documentation. MDCG 2023-7 clarifies what "sufficient levels of access" means. Most equivalence claims Tibor sees do not meet this bar.
Does Article 61(10) mean my low-risk device does not need clinical data? Article 61(10) allows exceptions where clinical data are not deemed appropriate, but it requires the decision to be substantiated in writing based on risk management, the device-human interaction, and the claimed clinical performance. It does not apply to implantable and Class III devices outside the conditions of Article 61(4)–(6). Invoking 61(10) without a documented substantiation is a non-conformity.
How often does the CER need to be updated? Article 61(11) requires continuous updating throughout the device life cycle. For implantable and Class III devices, the update must happen at least annually. For other classes, the frequency must be proportionate and defined in the PMS plan. But signal-based triggers (complaints, PMCF results, vigilance, state-of-the-art changes) should always prompt an update regardless of schedule.
Is MEDDEV 2.7/1 Rev. 4 still usable under MDR? The MDR text takes precedence where it diverges from MEDDEV 2.7/1 Rev. 4. That said, MEDDEV 2.7/1 Rev. 4 remains a widely accepted methodological reference for the four-stage clinical evaluation process and data appraisal rubric, and it is referenced alongside MDCG 2020-5 in day-to-day practice. Use it as a methodological guide, not as the regulatory source of truth.
Related reading
- Sufficient Clinical Evidence Under MDR – how to decide how much evidence is enough for your device (post 042).
- Do You Really Need a Clinical Investigation? – the upfront check that saved the Graz company EUR 400–500K (post 041).
- 15 MDR Myths That Waste Startup Time and Money – the myth-removal companion to this post (post 061).
- What Is Clinical Evaluation Under MDR? The Foundation – the hub post for this category (post 111).
- How to Write a Clinical Evaluation Plan Under MDR – the step-by-step CEP build (post 112).
- Equivalence Under MDR: The Three Pillars Explained – deep dive on MDCG 2020-5 (post 115).
- PMCF Plans Under MDR: Annex XIV Part B Walkthrough – how to build a real PMCF plan (post 120).
- Article 61(10) Exceptions: When Clinical Data Is Not Required – the honest take on 61(10) (post 127).
- CER Update Triggers and PMS Integration – closing the Article 61(11) loop (post 159).
- Clinical Investigations Under EN ISO 14155:2020+A11:2024 – what a real clinical investigation actually costs (post 150).
- The Subtract to Ship Framework for MDR – the methodology this post sits inside (post 065).
Sources
- Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 2(44) (clinical evaluation definition), Article 2(48) (clinical evidence), Article 61(1)–(6), Article 61(10), Article 61(11), Article 83 (post-market surveillance system), Annex XIV Part A (clinical evaluation), Annex XIV Part B (post-market clinical follow-up). Official Journal L 117, 5.5.2017.
- MDCG 2020-5. Clinical Evaluation. Equivalence: A guide for manufacturers and notified bodies, April 2020.
- MDCG 2023-7. Guidance on exemptions from the requirement to perform clinical investigations pursuant to Article 61(4)–(6) MDR and on "sufficient levels of access" to data needed to justify claims of equivalence, December 2023.
- EN ISO 14155:2020 + A11:2024. Clinical investigation of medical devices for human subjects. Good clinical practice.
- EN ISO 14971:2019 + A11:2021. Medical devices. Application of risk management to medical devices.
- MEDDEV 2.7/1 revision 4. Clinical Evaluation: A Guide for Manufacturers and Notified Bodies under Directives 93/42/EEC and 90/385/EEC, June 2016 (methodological reference only; MDR text takes precedence).
This post is part of the Clinical Evaluation & Clinical Investigations series in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. Every mistake on this list comes from real Notified Body review findings. And every fix has been walked through with founders who needed it yesterday.