If there is one MDR article that causes more startup pain than any other, it is Article 61. Not because it is badly written — it is actually quite clear. But because the clinical evidence requirements it establishes are significantly more demanding than what many founders expect, especially those coming from the software world where "clinical evidence" was not part of the vocabulary.

Article 61 is the MDR provision that defines what clinical evidence you need for your medical device, how you can generate it, and what happens if you fall short. Getting it right is essential. Getting it wrong means your Notified Body rejects your technical documentation, your timeline collapses, and your investors start asking uncomfortable questions.

What Article 61 Requires

Article 61(1) establishes the fundamental principle: "confirmation of conformity with the relevant General Safety and Performance Requirements under normal conditions of the intended use of the device, and the evaluation of the undesirable side-effects and of the acceptability of the benefit-risk ratio, shall be based on clinical data providing sufficient clinical evidence."

Every word matters:

  • Clinical data — not just any data. Clinical data as defined by MDR Article 2(48): safety and/or performance information generated from the use of a device, from clinical investigations, or from other sources reporting on the clinical experience with the device, or from the scientific literature.
  • Sufficient — enough to support the claims. The regulation deliberately does not quantify "sufficient" because what is sufficient depends on the device, its risk class, its intended purpose, and the available evidence base.
  • Clinical evidence — clinical data and clinical evaluation results pertaining to a device, of a sufficient amount and quality to allow a qualified assessment of whether the device is safe and achieves the intended clinical benefit(s), per MDR Article 2(51).

The Three Sources of Clinical Evidence

Article 61 recognizes three sources of clinical data:

1. Clinical Investigation of the Device Itself

A clinical investigation (clinical trial) conducted with your specific device, generating safety and performance data under controlled conditions. This is the gold standard but also the most expensive and time-consuming route.

2. Clinical Investigation or Other Studies of an Equivalent Device

Using clinical data from a device that is demonstrated to be equivalent to yours. This is the equivalence route — you borrow evidence from a device that is technically, biologically, and clinically equivalent.

3. Published Scientific Literature

Using published clinical data — journal articles, systematic reviews, registry data — that is relevant to your device, its intended purpose, and the clinical condition it addresses.

Most devices use a combination of these sources. A purely literature-based clinical evaluation may be sufficient for well-established, lower-risk device types. A novel Class III device almost certainly needs clinical investigation data.

Where Startups Go Wrong

Mistake 1: Assuming Clinical Evaluation Is Optional for Lower-Risk Devices

It is not. Article 61 applies to ALL devices, regardless of class. A Class I device needs a clinical evaluation. A Class IIa software device needs a clinical evaluation. The depth and scope differ, but the requirement is universal.

Some startups — especially in the software space — build their entire technical documentation without a clinical evaluation, assuming it is a Class III requirement. When the Notified Body (or market surveillance authority for Class I) asks for it, they are unprepared.

Mistake 2: Confusing Verification/Validation Testing with Clinical Evidence

Performance testing in a laboratory — bench testing, software verification, algorithm validation against a dataset — is not clinical evidence. Clinical evidence comes from clinical use: the device used by clinicians on patients (or patient data) in a clinical context.

For a software device, validating your algorithm against a curated dataset is verification. Having clinicians use your software in a clinical workflow and measuring diagnostic accuracy, usability, and patient outcomes is clinical evidence.

The line is clear, and Notified Bodies enforce it: "Your bench test data is excellent, but where is the clinical evidence?"

Mistake 3: Relying on Equivalence Without Understanding MDR's Strict Requirements

Under the old MDD, equivalence claims were relatively easy to make. Under MDR, Article 61(4) and (5), together with the equivalence criteria in Annex XIV Part A (elaborated in MDCG 2020-5), have significantly tightened the requirements.

MDR equivalence requires demonstration of equivalence in three dimensions:

Technical equivalence: Similar design, conditions of use, specifications, properties, and methods of use. Same materials, design, energy source, and deployment method.

Biological equivalence: Same materials in contact with the same body tissues, for the same duration and in the same conditions.

Clinical equivalence: Same intended purpose, same clinical condition, same patient population, same site in the body, similar severity and stage of disease, similar user profile.

All three dimensions must be demonstrated. And the evidentiary bar is high:

For Class III and implantable devices, Article 61(5) adds a critical requirement: the manufacturer claiming equivalence must have a contract in place with the manufacturer of the equivalent device that allows full access, on an ongoing basis, to that device's technical documentation. Without such a contract, the equivalence claim is not accepted for Class III and implantable devices.

Tibor sees this trip up startups constantly: "Under MDD, you could claim equivalence to a competitor's device based on publicly available information. Under MDR, for Class III and implantable devices, you essentially need the competitor's cooperation. If your competitor does not give you access to their technical file — and why would they? — you cannot claim equivalence. That means you need your own clinical investigation data."
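To make the gating logic concrete, here is a minimal sketch of the equivalence assessment as a checklist. All names are hypothetical, and the boolean flags grossly simplify what is in practice a detailed, evidence-backed comparison; the point is only the structure of the decision:

```python
from dataclasses import dataclass

@dataclass
class EquivalenceAssessment:
    """Illustrative checklist for an MDR equivalence claim (names are hypothetical)."""
    technical: bool        # similar design, specifications, conditions and methods of use
    biological: bool       # same materials, same tissues, same contact duration
    clinical: bool         # same intended purpose, condition, population, body site
    device_class: str      # "I", "IIa", "IIb", or "III"
    implantable: bool
    has_data_access: bool  # contractual access to the equivalent device's technical file

def equivalence_claim_viable(a: EquivalenceAssessment) -> bool:
    # All three dimensions must be demonstrated; failing any one sinks the claim.
    if not (a.technical and a.biological and a.clinical):
        return False
    # For Class III and implantable devices, contractual access to the
    # equivalent device's technical documentation is additionally required.
    if a.device_class == "III" or a.implantable:
        return a.has_data_access
    return True
```

Note that for a Class III or implantable device, three green checkmarks on equivalence still return `False` without data access, which is exactly the trap described above.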

Mistake 4: Underestimating the Clinical Evaluation Report

The Clinical Evaluation Report (CER) is not a literature review with a conclusion stapled on. It is a structured, systematic assessment, following the methodology of MEDDEV 2.7/1 rev. 4 and the applicable MDCG guidance, that:

  • Defines the scope and plan (the Clinical Evaluation Plan)
  • Identifies relevant clinical data through systematic literature searches
  • Appraises the quality and relevance of each piece of data
  • Analyzes the data in relation to the GSPRs
  • Draws conclusions about the device's clinical safety, performance, and benefit-risk profile
  • Identifies residual risks and unanswered clinical questions
  • Defines post-market clinical follow-up (PMCF) activities to address remaining gaps

A well-written CER for a Class IIa device might be 50-100 pages. For a Class III device, it can easily exceed 200 pages. This is a substantial document that requires clinical and regulatory expertise to produce.

Mistake 5: Neglecting Post-Market Clinical Follow-Up

Article 61 is not just about pre-market evidence. It is linked to the post-market clinical follow-up (PMCF) requirements in Annex XIV Part B.

PMCF is the ongoing process of collecting and evaluating clinical data after your device is on the market. It is not optional — it is a mandatory part of the clinical evidence lifecycle. Your PMCF plan must address:

  • Residual risks identified in the clinical evaluation
  • Gaps in clinical evidence that could not be filled pre-market
  • Long-term safety and performance monitoring
  • Real-world effectiveness data

For startups, PMCF planning needs to start during the clinical evaluation, not after market launch. Your Notified Body will review your PMCF plan as part of the conformity assessment.

The Clinical Evidence Decision Tree

Here is a practical decision tree for determining your clinical evidence strategy:

Question 1: Is your device a well-established type with extensive published clinical data?

  • Yes: A literature-based clinical evaluation may be sufficient. Conduct a systematic literature search and write a thorough CER.
  • No: Continue to Question 2.

Question 2: Does an equivalent device exist with published clinical data?

  • Yes: Can you demonstrate technical, biological, and clinical equivalence per MDR requirements?
      • Yes, and you have data access (or the device is not Class III or implantable): An equivalence-based clinical evaluation may work.
      • No, or you lack data access for a Class III or implantable device: Continue to Question 3.
  • No: Continue to Question 3.

Question 3: Do you need clinical investigation data?

  • Class III devices: Almost certainly yes, unless equivalence is strongly established.
  • Class IIb devices: Possibly, depending on the novelty and risk profile.
  • Class IIa devices: Rarely, but possible for novel device types with no precedent.
  • Class I devices: Very rarely.
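The three questions above can be sketched as a small function. Every name here is hypothetical and the branching is a deliberate simplification of the regulatory analysis, not a substitute for it:

```python
def clinical_evidence_route(well_established: bool,
                            equivalence_demonstrable: bool,
                            has_data_access: bool,
                            device_class: str,
                            implantable: bool = False) -> str:
    """Illustrative encoding of the three-question decision tree (simplified)."""
    # Question 1: well-established device type with extensive published data?
    if well_established:
        return "literature-based clinical evaluation"
    # Question 2: demonstrable equivalence, with data access where required?
    needs_access = device_class == "III" or implantable
    if equivalence_demonstrable and (has_data_access or not needs_access):
        return "equivalence-based clinical evaluation"
    # Question 3: expected need for a clinical investigation, by risk class
    likelihood = {"III": "almost certainly",
                  "IIb": "possibly",
                  "IIa": "rarely",
                  "I": "very rarely"}
    return f"clinical investigation ({likelihood[device_class]} needed)"
```

For example, a novel Class III device with no usable equivalence claim falls straight through to "clinical investigation (almost certainly needed)", which is the budget-and-timeline scenario discussed below.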

If a clinical investigation is needed, plan for it from the earliest stages of product development. Bolting a clinical investigation onto a late-stage programme is always more expensive and time-consuming than planning it prospectively.

The Cost and Timeline Reality

Clinical evidence costs vary enormously:

  • Literature-based CER (Class I or well-established Class IIa): EUR 15,000 - 40,000, 2-4 months
  • Literature-based CER with equivalence analysis (Class IIa/IIb): EUR 20,000 - 60,000, 3-6 months
  • Clinical investigation (small single-center study): EUR 100,000 - 300,000, 12-18 months
  • Clinical investigation (multi-center, larger population): EUR 300,000 - 1,000,000+, 18-36 months

These numbers reinforce why clinical evidence strategy must be defined early. Discovering at month 18 that you need a clinical investigation — when you budgeted for a literature review — can be a company-ending financial surprise.

What the Notified Body Looks For

During the conformity assessment, the Notified Body assesses your clinical evidence rigorously. For Class III devices, they assess it in detail and may consult the relevant expert panel per Article 54.

The NB evaluates:

  1. Is the Clinical Evaluation Plan adequate? Does it define the right scope, the right questions, and the right methods?
  2. Is the literature search systematic and reproducible? Did you use defined search terms, defined databases, defined inclusion/exclusion criteria?
  3. Is the data appraisal rigorous? Did you assess the quality and relevance of each data source, not just accept everything at face value?
  4. Is the equivalence claim justified (if applicable)? Is the technical, biological, and clinical equivalence demonstrated with sufficient evidence?
  5. Are the conclusions supported by the data? Does the evidence actually support the safety and performance claims, or are there gaps?
  6. Is the benefit-risk determination explicit and well-reasoned? Is the overall benefit-risk ratio favorable, with residual risks clearly identified?
  7. Is the PMCF plan adequate? Does it address the gaps and residual uncertainties identified in the CER?

Practical Advice for Startups

Define your clinical evidence strategy in the first month of regulatory planning. Not the third month. Not the sixth month. The first month. This strategy drives your budget, your timeline, your testing plan, and potentially your product design.

Engage a clinical evaluation expert. Writing a CER is a specialized skill that combines clinical knowledge, regulatory knowledge, and scientific writing. If you do not have this expertise in-house, engage someone who does. A poorly written CER wastes money twice — once to produce, and once to redo when the NB rejects it.

Build clinical evidence generation into your product development. If you are going to need user studies, performance data from clinical use, or usability data from clinical workflows, design your development process to generate that data naturally. Do not treat clinical evidence as something you collect after the product is finished.

Talk to your Notified Body early about clinical expectations. Pre-submission discussions should include your clinical evidence strategy. The NB can give you early signals about whether your approach is likely to be sufficient.

The Bottom Line

Article 61 is not a technicality — it is the core of MDR's evidence-based approach to device safety. Every device must be supported by sufficient clinical evidence, and what constitutes "sufficient" depends on the device's risk, novelty, and intended use.

For startups, the key lesson is this: clinical evidence is not a box to check at the end of product development. It is a strategic workstream that starts on day one and continues throughout the device's lifecycle. Plan for it, budget for it, resource it, and take it as seriously as your product engineering — because the Notified Body certainly will.