When a Notified Body reviews a clinical evaluation report, a clinical reviewer applies a structured, repeatable checklist grounded in MDR Article 61 and Annex XIV Part A. The reviewer checks the scope against the intended purpose, walks the four-stage data treatment, tests equivalence claims against MDCG 2020-5, verifies any Article 61(4)-(6) exemption against MDCG 2023-7, maps each clinical claim to a specific Annex I general safety and performance requirement, and finishes with a consistency check against the risk file, the PMS plan, and the PMCF plan. A founder who understands the sequence a notified body follows when reviewing clinical evaluation reports can write a CER that anticipates every question before it is asked.

By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.


TL;DR

  • Notified Body clinical reviewers work from a structured checklist anchored in MDR Article 61 and Annex XIV Part A. The review is predictable, not mysterious.
  • The review sequence is fixed: scope and intended purpose first, then CEP traceability, then the four-stage data treatment, then equivalence under MDCG 2020-5, then GSPR mapping, then benefit-risk, then consistency with the risk file and PMS and PMCF plans.
  • Equivalence claims are tested against all three MDCG 2020-5 dimensions (technical, biological, clinical) and against the "sufficient levels of access" requirement clarified in MDCG 2023-7 for implantable and Class III devices.
  • Common clinical nonconformities cluster around five patterns: scope drift, reverse-engineered appraisal, weak equivalence, missing GSPR traceability, and inconsistency with the risk file.
  • A Graz-based company saved EUR 400,000-500,000 and 12-18 months by building the CER around the reviewer's sequence and defending each claim with existing harmonised standards rather than defaulting to new clinical investigations.

The view from the reviewer's chair

The clinical reviewer opens the file at eight in the morning. The device is new to them. The manufacturer is new to them. The intended purpose is a sentence they have never read before. They have a day, sometimes two, to decide whether the clinical evidence supports the safety and performance claims of the device. They are not hostile. They are methodical. Their job is to build a defensible assessment that will survive internal review inside the notified body, and they do it by running the same checklist they run on every file.

If you have read our post on the notified body auditor perspective (053), you already know the mental model: audits are not policing, they are a collaborative mechanism to produce safer medical devices, and the reviewer rewards traceability over volume. The clinical review of a CER is the same principle applied to the clinical file. The reviewer wants the file to hold together. A CER built around the reviewer's sequence is cheaper to write than one that has to be rebuilt after the first round of findings.

This post is the checklist, section by section, with the reasoning behind each step and the common failure patterns the reviewer looks for.

What the reviewer looks for first

The reviewer does not start with the literature search. They start with the intended purpose. The intended purpose, as it appears on the label and in the instructions for use, is the anchor for everything that follows. If the CER claims to evaluate a device for a population that the IFU does not name, or for indications the label does not state, the review is over before it begins.

The second thing the reviewer checks is the risk class and the classification rule from Annex VIII. The clinical evidence burden scales with class. A Class III implantable requires a fundamentally different body of evidence than a Class IIa accessory. If the class in the CER does not match the class in the technical documentation, the reviewer flags it immediately.

The third thing the reviewer looks for is the clinical evaluation plan. Under MDR Annex XIV Part A, the clinical evaluation must follow a defined and methodologically sound procedure, and that procedure is written down in the CEP before the data is collected. A CER without a referenced CEP is a CER without a method. The reviewer will ask for the CEP, and if it does not exist, or if it was written after the data was collected, the methodological foundation of the entire CER is compromised.

These three anchors (intended purpose, classification, CEP) take the reviewer fifteen to thirty minutes and set the mental model for the rest of the review. Everything that follows is confirmation or correction of the picture formed in this first block.
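The first-pass logic can be sketched as a few consistency checks. This is an illustrative sketch only: the record layout, field names, and dates are assumptions for the example, not part of MDR or any notified body tooling.

```python
from datetime import date

# Hypothetical extracts from the CER, label, technical documentation, and CEP.
cer = {
    "intended_purpose": "Continuous monitoring of heart rate in adults",
    "risk_class": "IIa",
    "cep_reference": "CEP-001 v2.0",
}
label = {"intended_purpose": "Continuous monitoring of heart rate in adults"}
tech_doc = {"risk_class": "IIa"}
cep = {"id": "CEP-001 v2.0", "signed": date(2025, 1, 10)}
data_collection_start = date(2025, 2, 1)

findings = []
# Anchor 1: intended purpose must match the label and IFU word for word.
if cer["intended_purpose"] != label["intended_purpose"]:
    findings.append("Intended purpose in CER does not match label/IFU")
# Anchor 2: risk class must match the technical documentation.
if cer["risk_class"] != tech_doc["risk_class"]:
    findings.append("Risk class in CER does not match technical documentation")
# Anchor 3: the CER must reference a CEP signed before data collection began.
if cer["cep_reference"] != cep["id"]:
    findings.append("CER does not reference the executed CEP version")
if cep["signed"] >= data_collection_start:
    findings.append("CEP was not signed before data collection began")

print(findings)  # an empty list means the three anchors hold
```

Any entry in that list at this stage colours everything the reviewer reads afterwards, which is why these checks come before the literature is even opened.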

The structural checks

The structural pass confirms that the CER contains the sections Annex XIV Part A requires. There is no mandatory template. MDR Annex XIV Part A specifies content, not chapter numbering. But the required elements are fixed.

The reviewer confirms the CER documents scope, the state of the art and currently available alternative treatment options, the data identified from literature, the data identified from equivalence where claimed, the data identified from clinical investigations, the appraisal of the data, the analysis against the relevant general safety and performance requirements in Annex I, the benefit-risk determination, the conclusions on safety and performance, the PMCF plan reference, and the qualifications of the evaluators. A missing section is a finding. A present-but-empty section is also a finding. The reviewer reads what is in each section, not only the headings.

The reviewer also confirms version control. The CER must have a clear version history, a clear date, and a clear link to the version of the CEP it was executed against. A CER without version control cannot be kept updated throughout the device life cycle as Article 61(11) requires, and the reviewer will raise this immediately.

The methodological checks

The methodological pass walks the four-stage data treatment: scoping, identification, appraisal, and analysis. This is the intellectual backbone of the CER and the reviewer walks it in order.

On scoping, the reviewer confirms that the scope in the CER matches the scope in the CEP. Scope drift between the CEP and the CER, where the CEP addresses one indication and the CER addresses a broader one, is the single most common methodological finding in first-cycle CERs.

On identification, the reviewer confirms that the literature search is reproducible. Databases, search strings, dates, inclusion and exclusion criteria, and the PRISMA-style flow must be present and consistent. The reviewer will not re-run the search, but they will sample it. If a sampled search string produces different results than the CER reports, the reviewer loses confidence in the identification stage entirely.
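The reproducibility test on the identification stage largely comes down to whether the PRISMA-style flow counts add up. The search log below is hypothetical (databases, string, and counts are invented for the example), but the arithmetic checks are exactly the kind of internal consistency a reviewer samples.

```python
# Hypothetical search log: databases, search string, dates, and PRISMA-style
# flow counts as they would appear in the identification section of a CER.
search_log = {
    "databases": ["PubMed", "Embase"],
    "search_string": "(device name) AND (safety OR performance)",
    "search_date": "2025-03-01",
    "records_identified": 412,
    "duplicates_removed": 37,
    "records_screened": 375,
    "records_excluded": 350,
    "full_texts_assessed": 25,
    "studies_included": 11,
}

# Flow arithmetic: any inconsistency here is the kind of mismatch that makes
# a reviewer lose confidence in the identification stage entirely.
assert search_log["records_identified"] - search_log["duplicates_removed"] \
    == search_log["records_screened"]
assert search_log["records_screened"] - search_log["records_excluded"] \
    == search_log["full_texts_assessed"]
assert search_log["studies_included"] <= search_log["full_texts_assessed"]
print("PRISMA flow counts are internally consistent")
```

A log in this shape also makes the search re-runnable by a third party, which is the actual requirement: the reviewer does not need to repeat the search, only to be able to.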

On appraisal, the reviewer confirms that every included data set is appraised against the pre-specified criteria from the CEP. The key word is pre-specified. Appraisal criteria that appear in the CER for the first time, written in a way that happens to fit the data that was found, fail the methodological test under Article 61(3) and Annex XIV Part A. The reviewer will compare the appraisal criteria in the CEP against the ones applied in the CER, and any gap between the two is a finding.

On analysis, the reviewer confirms that the synthesis is traceable from the appraised data through the claims. Conclusions that do not trace to a specific appraised data set are findings. This is where CERs that are structurally complete fall apart on methodological review.

The data appraisal checks

The data appraisal pass is where the reviewer reads the evidence itself and tests whether the appraisal is honest. Three specific patterns get close attention.

First, equivalence claims. If the CER relies on equivalence, the reviewer tests the claim against MDCG 2020-5 (April 2020). Equivalence under MDCG 2020-5 requires simultaneous demonstration across all three dimensions (technical, biological, and clinical), with no clinically significant differences on any of them. The reviewer will read the side-by-side comparison and look for hand-waving on any dimension. A CER that demonstrates technical equivalence in detail and dismisses biological and clinical equivalence in a paragraph will not pass.

The reviewer also tests the "sufficient levels of access to data" requirement. Under MDCG 2023-7 (December 2023), equivalence claims for implantable and Class III devices require documented access to the clinical data of the equivalent device, typically through a contract between manufacturers. A CER that claims equivalence to a competitor's device without any documented access arrangement will be rejected on this specific point, no matter how strong the side-by-side comparison is.
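The two equivalence tests, all three MDCG 2020-5 dimensions plus MDCG 2023-7 data access, can be sketched together. The claim record below is hypothetical; real equivalence arguments are narrative side-by-side comparisons, but the pass/fail logic the reviewer applies is this simple.

```python
# Hypothetical equivalence claim for a Class III implantable device.
claim = {
    "dimensions": {
        "technical": "detailed side-by-side comparison of design and specs",
        "biological": "same materials in contact with the same tissues",
        "clinical": "same condition, population, and site of application",
    },
    "device_class": "III",
    "implantable": True,
    "access_contract": None,  # no documented access to the other device's data
}

findings = []
# MDCG 2020-5: all three dimensions must be demonstrated, none hand-waved.
for dim in ("technical", "biological", "clinical"):
    if not claim["dimensions"].get(dim):
        findings.append(f"equivalence not demonstrated on the {dim} dimension")
# MDCG 2023-7: implantable and Class III claims need documented data access.
if (claim["device_class"] == "III" or claim["implantable"]) \
        and not claim["access_contract"]:
    findings.append("no documented sufficient levels of access (MDCG 2023-7)")

print(findings)
```

As the example shows, a fully argued three-dimensional comparison still fails on the access requirement alone, which matches how reviewers treat it: a standalone rejection point, independent of the comparison's quality.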

Second, literature quality. The reviewer samples a handful of included studies and reads them. If a study is cited as supporting a specific claim, the reviewer checks whether the study actually supports that claim. Citation mismatches, where a study is cited for a conclusion it does not reach, are a common and serious finding.

Third, clinical investigation quality. Investigations cited in the CER must be conducted under EN ISO 14155:2020+A11:2024 where the investigation falls within its scope, and the investigation plan and final report versions must be named. An investigation summarised without its plan version and final report reference looks improvised to the reviewer.

The conclusion checks

The conclusion pass covers GSPR mapping and benefit-risk. This is where the CER either earns its conclusions or does not.

The reviewer expects to see a GSPR traceability table. The relevant clinical general safety and performance requirements from Annex I are listed in one column, and the specific appraised data that supports each one is listed in the next. A CER that presents a large body of data and concludes that the device is safe without mapping the data to specific GSPRs fails on this step. The reviewer cannot see which data supports which requirement, and the reviewer is not obliged to reconstruct the mapping for you.
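The traceability table is the one artefact in the CER that is naturally machine-checkable: every relevant GSPR must map to at least one appraised data set. The GSPR labels and evidence names below are hypothetical, a sketch of the table's shape rather than real Annex I wording.

```python
# Illustrative GSPR traceability table: one column of claimed-relevant GSPRs,
# one column of the specific appraised data supporting each.
gspr_table = {
    "GSPR 1 (safety)": ["Study A (appraised, high quality)",
                        "Investigation X final report v3"],
    "GSPR 8 (benefit-risk)": ["Study A (appraised, high quality)"],
    "GSPR 6 (lifetime)": [],  # claimed relevant, but nothing mapped to it
}

# Any GSPR with an empty evidence column is a finding the reviewer will raise.
unmapped = [gspr for gspr, evidence in gspr_table.items() if not evidence]
print(unmapped)
```

Building this table before writing the conclusions, as recommended later in this post, turns the check around: an empty evidence column tells the authors, rather than the reviewer, that a conclusion cannot yet be written.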

The reviewer then reads the benefit-risk determination. The determination must be traceable to the appraised data and consistent with the risk management file under EN ISO 14971:2019+A11:2021. Residual risks identified in the risk file must be acknowledged in the CER and weighed against the clinical benefits. A benefit-risk section that does not name residual risks is a benefit-risk section that has not been done.

Finally, the reviewer reads the PMCF plan reference and checks that the PMCF plan addresses the clinical uncertainties the CER raised. A CER that concludes with no uncertainties and a PMCF plan that proposes no activities is a CER the reviewer will not believe. Every device has clinical uncertainty. The honest move is to name it and let the PMCF plan close it.

Common nonconformities

Five patterns show up again and again in failed CER reviews. Each one is avoidable if the authors anticipate the reviewer's checklist.

Scope drift. The CEP says one thing, the CER says another, the label says a third. The fix is to freeze the intended purpose in the CEP, copy the exact wording into the CER, and verify that the label and IFU match.

Reverse-engineered appraisal. Appraisal criteria back-filled after the data was read. The fix is to pre-specify the criteria in the CEP, sign and date the CEP before data collection begins, and apply the criteria unchanged in the CER.

Weak equivalence under MDCG 2020-5. Technical equivalence demonstrated in detail, biological and clinical equivalence hand-waved, sufficient-levels-of-access requirement ignored. The fix is either to build the full three-dimensional argument with documented data access per MDCG 2023-7 where applicable, or to drop the equivalence claim and rely on literature and clinical investigations.

Missing GSPR traceability. Large body of data, no mapping to specific Annex I clinical GSPRs. The fix is a traceability table that the reviewer can read in five minutes.

Inconsistency with the risk file and PMS plan. The CER reads well in isolation but contradicts the risk file or the PMS plan when read together. The fix is to build all four documents (CER, risk file, PMS plan, PMCF plan) in the same review cycle and cross-check them explicitly before the file goes to the notified body.
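The cross-check across the four documents reduces to two set comparisons: every residual risk in the risk file must be weighed in the CER's benefit-risk section, and every clinical uncertainty the CER raises must map to a PMCF activity. The risk and uncertainty names below are invented for the sketch.

```python
# Hypothetical extracts from the four documents, built in one review cycle.
risk_file_residual_risks = {"skin irritation", "measurement drift"}
cer_acknowledged_risks = {"skin irritation"}
cer_uncertainties = {"long-term performance in elderly users"}
pmcf_activities_address = {"long-term performance in elderly users"}

findings = []
# Risk file -> CER: residual risks must be named and weighed in benefit-risk.
for risk in risk_file_residual_risks - cer_acknowledged_risks:
    findings.append(f"residual risk not weighed in CER benefit-risk: {risk}")
# CER -> PMCF: every raised uncertainty needs a closing PMCF activity.
for gap in cer_uncertainties - pmcf_activities_address:
    findings.append(f"clinical uncertainty with no PMCF activity: {gap}")

print(findings)
```

Running this comparison internally, in a single session before submission, is exactly the cross-check the fix above calls for.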

How to anticipate the review

Anticipating the review is not gamesmanship. It is writing for the reader. The reader runs the checklist above. The CER that is written with the checklist in mind passes on the first cycle. The CER that is written around internal project history passes on the third cycle, or not at all.

Three concrete moves make the difference. First, write the CEP before the data collection starts, sign it, date it, and reference it in every section of the CER. Second, build the GSPR traceability table before you write the conclusions. If you cannot fill the table, you cannot write the conclusions. Third, cross-check the CER against the risk file, the PMS plan, and the PMCF plan in a single review session before the file is submitted.

One more move, the one founders find hardest: keep the CER short enough to read. A 250-page disciplined CER beats a 600-page narrative. Reviewers do not reward volume. They reward traceability, methodology, and honesty.

The Subtract to Ship angle

There is a Graz-based company we worked with that had an innovative technology using two established measurement methods. Their initial clinical evidence plan defaulted to two to three new clinical investigations. Years of work, hundreds of thousands of euros. The Evidence Pass of the Subtract to Ship framework (065) asked a different question: do existing harmonised standards cover the established measurement methods? They did. Because the measurement methods were well-established and covered by recognised standards, clinical investigations were not required for those aspects of the device. The CER was built around the reviewer's sequence, the literature and standards route was defended section by section, and the notified body accepted the approach. The company saved EUR 400,000-500,000 and 12-18 months of development time.

The lesson for CER construction is the same as the lesson for the broader regulatory plan. Default to the cheapest legitimate pathway. Literature first. Equivalence under MDCG 2020-5 where it genuinely applies and where MDCG 2023-7 access is documented. Harmonised standards where they cover the clinical question. New clinical investigation under EN ISO 14155:2020+A11:2024 only for the gaps the first three cannot close. Every section of the CER traces to a specific MDR article, annex paragraph, or MDCG provision. Everything else comes out. What remains is a CER the reviewer can read in a day and defend in an internal review.

Reality Check. Where do you stand?

  1. Does your CER reference a pre-specified CEP that was signed and dated before data collection began?
  2. Is the intended purpose in the CER identical, word for word, to the label and the IFU?
  3. Do you have a GSPR traceability table mapping each relevant Annex I clinical GSPR to the specific appraised data supporting it?
  4. If you claim equivalence, does your analysis address technical, biological, and clinical dimensions in full under MDCG 2020-5?
  5. For equivalence claims on implantable or Class III devices, do you have documented sufficient levels of access to the equivalent device's clinical data under MDCG 2023-7?
  6. Are all cited clinical investigations conducted and reported under EN ISO 14155:2020+A11:2024, with investigation plan version and final report referenced?
  7. Does the CER explicitly cross-reference the risk file under EN ISO 14971:2019+A11:2021 and the PMS and PMCF plans?
  8. Does your PMCF plan name the specific clinical uncertainties the CER raised and define activities proportionate to them?
  9. Could a notified body reviewer walk your CER in the sequence described above and trace every conclusion to a specific source without asking a question?

Frequently Asked Questions

How do notified bodies review clinical evaluation reports? Notified Body clinical reviewers follow a structured sequence anchored in MDR Article 61 and Annex XIV Part A. They start with the intended purpose, the risk class, and the clinical evaluation plan, then walk the four-stage data treatment (scoping, identification, appraisal, analysis), test equivalence claims against MDCG 2020-5 and MDCG 2023-7, verify GSPR traceability against Annex I, and finish with a consistency check against the risk file and the PMS and PMCF plans.

What is the most common reason a CER fails notified body review? In our experience, the combination of missing GSPR traceability and reverse-engineered appraisal criteria. The CER presents a body of data, concludes that the device is safe, and never shows which specific clinical GSPR each piece of data supports, and the appraisal criteria are written in a way that happens to fit the data that was found. Both are methodological failures under Article 61(3) and Annex XIV Part A.

How do reviewers test equivalence claims? Reviewers test equivalence against MDCG 2020-5, which requires simultaneous demonstration across technical, biological, and clinical dimensions with no clinically significant differences. For implantable and Class III devices, MDCG 2023-7 adds the requirement for documented sufficient levels of access to the equivalent device's clinical data, typically through a contract between manufacturers. A CER that hand-waves any of these requirements will not pass.

Do notified body reviewers re-run the literature search? Typically no, but they sample it. Reviewers test a handful of search strings against the databases cited to confirm the search is reproducible, and they sample a handful of included studies to confirm the appraisal in the CER reflects what the studies actually say. Any mismatch on either sample produces a finding and triggers deeper scrutiny of the identification and appraisal stages.

How long does a notified body clinical review take? A focused clinical reviewer typically spends one to two working days on a single CER for a mid-complexity device. A well-built, traceable CER can be reviewed cleanly in that window. A poorly structured or inconsistent CER cannot, and the review either returns findings or extends into additional cycles. The difference is not reviewer speed. It is CER structure.

Can a CER be reviewed before it goes to the notified body? Yes, and it should be. An internal mock review that applies the reviewer's sequence (scope, classification, CEP, four-stage data treatment, equivalence, GSPR mapping, benefit-risk, consistency with the risk file and the PMS and PMCF plans) will surface most of the findings a notified body would raise, at a fraction of the cost of discovering them after submission.

Sources

  1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 61 (clinical evaluation) and Annex XIV Part A (clinical evaluation). Official Journal L 117, 5.5.2017.
  2. MDCG 2020-5. Clinical Evaluation. Equivalence: A guide for manufacturers and notified bodies, April 2020.
  3. MDCG 2023-7. Guidance on exemptions from the requirement to perform clinical investigations pursuant to Article 61(4)-(6) MDR and on 'sufficient levels of access' to data needed to justify claims of equivalence, December 2023.
  4. EN ISO 14155:2020 + A11:2024. Clinical investigation of medical devices for human subjects. Good clinical practice.
  5. EN ISO 14971:2019 + A11:2021. Medical devices. Application of risk management to medical devices.

This post is part of the Clinical Evaluation & Clinical Investigations series in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. The clinical review of a CER is predictable, structured, and built around a checklist that any prepared founder can anticipate. Writing the CER around the reviewer's sequence is the single cheapest move in the entire clinical evaluation workstream.