IVDR technical documentation uses the same structural logic as MDR: a device description file plus a post-market file. The substantive change is that clinical evaluation becomes performance evaluation, with a Performance Evaluation Plan and Performance Evaluation Report replacing the clinical equivalents. Roughly 80 percent of the framework is familiar to any team that has built an MDR technical file.

By Tibor Zechmeister and Felix Lenhard.

TL;DR

  • IVDR technical documentation is structured in two parts, paralleling MDR Annex II and Annex III, covering the device file and the post-market file respectively.
  • Roughly 80 percent of the framework overlaps with MDR: intended purpose, classification logic, QMS, risk management, verification and validation, labelling, PMS.
  • The single biggest substantive change: clinical evaluation under MDR Article 61 and Annex XIV becomes performance evaluation under IVDR, with a Performance Evaluation Plan (PEP) and Performance Evaluation Report (PER).
  • Performance evaluation rests on three pillars: scientific validity, analytical performance, and clinical performance. Evidence usually comes from samples rather than live human subjects.
  • Teams that have already written MDR technical documentation can reuse most of their QMS, risk file, and labelling machinery for an IVD submission. The specialised work sits in performance evaluation.
  • Startups with combined MDR and IVDR portfolios should not duplicate infrastructure. One QMS, one risk management process, one PMS process, two distinct performance or clinical evaluation tracks.

Why this matters for IVD startups

Tibor has watched at least three IVD startups fall into the same trap: they read the MDR, built a technical file for a medical device, then assumed IVDR would be substantially different. It is not. The framework overlap is roughly 80 percent. Intended purpose. Classification. Conformity assessment. Technical documentation structure. Notified body submissions. Post-market surveillance. Quality management system. All of that machinery is the same shape.

What is genuinely different is smaller than founders fear but more technical than they expect. The headline change is that clinical evaluation under MDR becomes performance evaluation under IVDR. The evidence no longer comes primarily from human subjects in clinical investigations. It comes from samples: blood, saliva, tissue, urine. The question shifts from "does the device perform safely in patients" to "does the device correctly detect, identify, or quantify the target analyte, and what does that mean clinically."

Felix has seen the opposite mistake in his startup coaching work. Founders with an IVD idea assume IVDR must be simpler than MDR because the device never touches a patient in the same way. That assumption collapses on first contact with a notified body. The performance evaluation file can be more demanding than a clinical evaluation, because diagnostic claims are precise and measurable in ways that clinical benefit claims often are not. A sensitivity and specificity claim is falsifiable down to the decimal.
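That falsifiability is literal: sensitivity and specificity are plain arithmetic on a two-by-two confusion matrix, so any claimed figure can be checked against study counts. A minimal sketch, using made-up counts purely for illustration:

```python
# Sensitivity and specificity from a 2x2 confusion matrix (illustrative counts only).
def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: correctly detected positives / all true positives."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: correctly identified negatives / all true negatives."""
    return tn / (tn + fp)

# Hypothetical performance-study counts against a reference method:
tp, fn, tn, fp = 95, 5, 190, 10
print(f"sensitivity = {sensitivity(tp, fn):.3f}")  # 0.950
print(f"specificity = {specificity(tn, fp):.3f}")  # 0.950
```

A claim of "95.0 percent sensitivity" stands or falls on those four integers, which is exactly why diagnostic claims are harder to soften than clinical benefit claims.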

What IVDR actually requires for technical documentation

IVDR technical documentation is laid out in two annexes that parallel the MDR structure. The device file covers device description, intended purpose, design and manufacturing information, general safety and performance requirements (GSPR, the IVD equivalent of MDR Annex I), benefit-risk analysis, product verification and validation, and crucially the performance evaluation documentation. The post-market file covers PMS plan, PMS report or periodic safety update report depending on class, and related vigilance processes.

At the level of headings, an MDR technical file author opening the IVDR annex will recognise most of what they see. The differences sit inside the sections.

First, the GSPR list is IVD-specific. Electrical safety for a benchtop IVD analyser typically sits in EN 61010 territory (EN 61010-2-101 covers IVD medical equipment) rather than the EN 60601-1 series used for patient-connected medical electrical equipment, and the GSPR framing itself is written for IVDs. Biocompatibility is framed for IVDs that contact samples rather than patients directly. Stability and traceability requirements for reagents are prominent in a way they simply are not in an MDR file.

Second, the device description has to capture IVD-specific elements: the analyte, the measurement principle, the sample type, the intended user (lab professional, point-of-care clinician, lay user), and the intended test environment. Intended purpose under IVDR carries real weight because it determines classification.

Third, and this is the substantive change, clinical evaluation is replaced by performance evaluation. The documentation required is a Performance Evaluation Plan (PEP) setting out how scientific validity, analytical performance, and clinical performance will be demonstrated, and a Performance Evaluation Report (PER) presenting the results against the plan.

Scientific validity asks: is there a meaningful association between the analyte and the clinical condition the device claims to address. Analytical performance asks: does the device correctly detect or measure the analyte, with what sensitivity, specificity, precision, linearity, and interference profile. Clinical performance asks: in the intended use population, does the device correctly classify patients with and without the condition.

This three-pillar structure has no direct parallel in MDR clinical evaluation. MDR clinical evaluation under Article 61 and Annex XIV runs on clinical data from investigations, equivalence, or literature. IVDR performance evaluation runs on literature for scientific validity, analytical studies on samples for analytical performance, and performance studies for clinical performance. The analytical performance work is often the largest single block of documentation that has no direct MDR counterpart.

A worked example

Consider a Class C IVD startup developing a PCR-based assay for a specific infectious disease marker.

The team has spent eighteen months building what they believe is an MDR-style technical file. They have an ISO 13485 QMS, a risk management file aligned to EN ISO 14971:2019+A11:2021, verification and validation records, and a clinical evaluation plan loosely modelled on MDR Annex XIV.

The notified body kickoff meeting surfaces the gap immediately. The QMS is fine. The risk file needs IVD-specific hazards added: sample mix-up, cross-contamination between reagents, operator exposure to biological material, interference from off-target nucleic acids. These were partially covered but not framed through an IVD lens.

The real rework sits in the clinical evaluation. It has to be restructured as a Performance Evaluation Plan with three pillars. Scientific validity is supportable from published literature: the marker is well-characterised, the association with the clinical condition is established. Analytical performance needs dedicated studies: limit of detection on reference samples, precision across operators and days, cross-reactivity against phylogenetically near-neighbour organisms, interference from common clinical matrix components. Clinical performance needs a performance study using clinical samples, with pre-specified statistical endpoints for sensitivity and specificity against a reference method.

The team's mistake was not effort. They had done serious work. They had assumed the MDR clinical evaluation vocabulary would translate. It did not. The words changed, and more importantly the underlying evidence hierarchy changed. Literature can establish scientific validity but cannot substitute for analytical performance data generated on the actual device.

Time cost of the correction: approximately four to six months. Financial cost: the analytical performance studies alone ran into six figures because of sample sourcing, characterised reference materials, and repeat runs for precision studies.

The Subtract to Ship playbook for IVDR technical documentation

Tibor's practical guidance for teams coming to IVDR from an MDR background or starting fresh looks like this.

Step 1. Reuse the MDR frame where it applies. If the team already has EN ISO 13485:2016+A11:2021 implemented, a risk management process aligned to EN ISO 14971:2019+A11:2021, labelling processes, and PMS machinery, that work carries across to IVDR with modest adaptation. The QMS does not need to be rebuilt. The risk management process does not need a new standard. Do not burn months duplicating infrastructure that the MDR work already delivered.

Step 2. Map the GSPRs explicitly against the IVDR list. Build a GSPR matrix that shows which requirement is met by which document, test, or standard. This is the same discipline as the MDR Annex I matrix. The content changes, the method does not.
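In data terms the matrix is nothing more than a traceability table from requirement to evidence. A minimal sketch with hypothetical requirement IDs and document names (the real matrix must track the current IVDR Annex I wording):

```python
# Minimal GSPR traceability matrix sketch. All IDs and document names are
# hypothetical placeholders, not real IVDR Annex I entries.
gspr_matrix = [
    {"gspr": "GSPR 1", "requirement": "Device achieves intended performance",
     "evidence": ["PER rev 2", "Analytical study AP-001"]},
    {"gspr": "GSPR 9", "requirement": "Analytical and clinical performance",
     "evidence": ["PEP rev 3", "Clinical performance study CP-001"]},
    {"gspr": "GSPR 20", "requirement": "Label and instructions for use",
     "evidence": ["IFU rev 5"]},
]

# Flag requirements with no evidence mapped: the gap a notified body finds first.
gaps = [row["gspr"] for row in gspr_matrix if not row["evidence"]]
print("unmapped requirements:", gaps or "none")
```

Whether the matrix lives in a spreadsheet or a QMS tool matters less than the discipline of keeping every requirement mapped to a current document revision.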

Step 3. Build the Performance Evaluation Plan before generating performance data. This is the single piece of advice that saves the most money. A PEP written first defines what scientific validity evidence is needed, what analytical studies will be run with what acceptance criteria, and what clinical performance study will close the loop. Teams that generate data first and write the plan afterwards are structurally at risk of rework because the studies were not scoped to answer the right questions.

Step 4. Separate analytical and clinical performance studies in the plan. These are different in design, different in sample requirements, and different in acceptance criteria. Folding them together in the documentation is a common notified body finding.

Step 5. Accept that the IVD notified body pool is smaller than the MDR pool. Tibor points out that finding an IVD notified body is harder than finding an MDR one because fewer bodies are designated for IVDR. Start the notified body conversation earlier than the team thinks is necessary.

Step 6. Do not over-engineer the post-market file. IVDR Annex III mirrors MDR Annex III in shape. A PMS plan, a vigilance process, and the right report cadence for the device class is the target. Not a 200-page document nobody reads.

Reality Check

  1. Does the team have a written Performance Evaluation Plan that distinguishes scientific validity, analytical performance, and clinical performance as separate work streams with separate acceptance criteria?
  2. Has the GSPR matrix been built against the current IVDR Annex I requirements, or is it still a copied MDR Annex I matrix?
  3. Is the risk management file framed around IVD-specific hazards including sample mix-up, cross-contamination, operator exposure, and interference, or is it a direct copy from an MDR project?
  4. Has the intended purpose been written with enough precision to drive the IVDR classification rule?
  5. Has a notified body conversation been opened, given the smaller IVD notified body pool?
  6. Is the PMS plan scaled to the device class, or has it been copied wholesale from another project?
  7. Does the team have a clear plan for the clinical performance study, including sample sourcing, reference method, and pre-specified statistical endpoints?

Frequently Asked Questions

Is the IVDR technical file structurally different from the MDR technical file? Not at the top level. The two-file structure (device file plus post-market file) follows the same logic as MDR Annex II and Annex III. The difference is inside the sections, mainly in performance evaluation.

Can clinical evaluation documents be reused for performance evaluation? Only as a starting template for scientific validity. Analytical performance and clinical performance have no direct equivalents in an MDR clinical evaluation and must be generated fresh.

Does EN ISO 13485:2016+A11:2021 cover IVDR QMS requirements? Yes, in the same way it covers MDR QMS requirements. The QMS built for an MDR project is reusable for an IVDR project with modest adaptation.

Is performance evaluation cheaper than clinical evaluation? Not necessarily. Analytical performance studies require characterised reference materials, multiple operators, multiple days, and often large sample sets. Clinical performance studies require clinical samples from the intended use population. Costs are comparable and often higher than founders expect.

Does the PEP have to be finalised before generating data? The plan must exist before the pivotal studies begin. Early feasibility work can inform the plan, but the pivotal analytical and clinical performance studies should be run against a locked plan to avoid post-hoc redesign.

How does classification differ from MDR? IVDR runs its own rule set in Annex VIII, built around what the device detects and the risk a wrong result poses to the individual patient and to public health, yielding classes A to D. The rule count is smaller than under MDR Annex VIII but the risk-based logic is the same family.

Sources

  1. Regulation (EU) 2017/746 on in vitro diagnostic medical devices (IVDR), consolidated text. Annex I (GSPR), Annex II (technical documentation), Annex III (post-market surveillance technical documentation), Annex VIII (classification rules), Annex XIII (performance evaluation and performance studies).
  2. Regulation (EU) 2017/745 on medical devices (MDR), consolidated text. Article 61, Annex I, Annex II, Annex III, Annex XIV.
  3. EN ISO 13485:2016+A11:2021. Medical devices, quality management systems.
  4. EN ISO 14971:2019+A11:2021. Medical devices, application of risk management to medical devices.