DiGA positive care effects evidence sits above MDR clinical evaluation, not beside it. MDR Article 61 asks whether the product performs as intended and whether the benefit-risk ratio is acceptable. DiGA asks whether the product delivers a quantitative and qualitative benefit to the German statutory health insurance system, measured through a medical benefit or a patient-relevant structural and procedural improvement. The same startup can satisfy MDR with a modest clinical file and still fail the BfArM evidence bar. Tibor's caution is direct: the DiGA evidence layer is where founders learn that "CE marked" and "reimbursable" are two different achievements.

By Tibor Zechmeister and Felix Lenhard.

TL;DR

  • DiGA evidence is layered on top of MDR clinical evaluation, not substituted for it.
  • The evidence must cover both quantitative outcomes and qualitative patient-relevant aspects of care in the German statutory health insurance population.
  • BfArM accepts two categories of positive care effects: medical benefit and patient-relevant structural and procedural improvement.
  • A prospective comparative study designed specifically for the DiGA claim is the typical vehicle. Literature review alone is rarely sufficient for the fast-track positive care effects dossier.
  • Tibor and Felix have seen startups assume their MDR clinical evaluation would cover DiGA. It almost never does.

Why this matters

A founder in Cologne sent Tibor a one-line message after a first meeting with an RA consultant: "They told me my MDR clinical evaluation is enough for DiGA. Is that true?" The answer, almost every time, is no.

The MDR clinical evaluation under Article 61 asks whether a device does what it says it does, safely, with an acceptable benefit-risk ratio. For many low-risk digital health products, that question can be answered with a thoughtful literature review, careful equivalence arguments, and a small amount of post-market data. The file can be defensible and fully compliant without a single prospective trial.

DiGA positive care effects ask a different question entirely. Does the product deliver a patient-relevant, measurable benefit to the German healthcare system, compared to a reasonable baseline of care? Answering that question requires a prospective study designed to the research question BfArM will actually ask. A thoughtful literature review, even a good one, will usually not close the gap.

Felix has watched this misunderstanding drain runway from startups that otherwise had credible products. The fix is not more money. The fix is designing the clinical evidence strategy from day one around the higher bar, so that the same underlying study serves both files.

What MDR actually says, and what DiGA adds

MDR Article 61(1) requires that the clinical evaluation demonstrate conformity with the relevant general safety and performance requirements of Annex I, characterise and evaluate undesirable side effects, and assess the acceptability of the benefit-risk ratio. Annex XIV Part A sets out how clinical evaluation is planned, conducted, and documented. For a Class I or Class IIa software medical device with a well-characterised intended purpose, the Article 61 bar is achievable.

The MDR does not ask the manufacturer to prove that the device saves money for a statutory insurer. It does not ask for comparative effectiveness against usual care. It does not ask for health economic analysis, adherence statistics, or structural improvements in the care pathway. Those questions belong to reimbursement law, not to MDR.

DiGA, anchored in the German Digital Healthcare Act (DVG) and the DiGAV, asks exactly those questions.

The result is a layered evidence stack. The MDR file proves the product is a lawful medical device. The DiGA file proves the product deserves statutory reimbursement. The MDR file sits underneath. The DiGA file sits on top. Both are audited separately, by different authorities, against different criteria.

The two categories of positive care effects

DiGA recognises two categories of positive care effects. Both are legitimate. Both must be evidenced.

Medical benefit.

Medical benefit means a measurable improvement in a patient-relevant health outcome attributable to the digital health application. Typical outcomes include improvements in clinical parameters relevant to the target condition, reductions in symptom burden, increases in quality of life measured by validated instruments, or reductions in disease progression rates.

Medical benefit evidence is usually generated through a prospective comparative study, often randomised, in a population that reflects the German statutory health insurance target group. The outcome measure is pre-specified. The comparator is either usual care or a placebo-equivalent. The follow-up period is long enough to capture the effect. The analysis plan is written before data collection begins.

Patient-relevant structural and procedural improvement.

Structural and procedural improvement means a patient-relevant gain in how care is organised, delivered, or experienced. Examples include better adherence to prescribed therapy, faster access to care, improved coordination between providers, reduced duplicate diagnostic testing, better patient self-management, or stronger health literacy.

This category often feels more approachable to startups because the outcome can look less clinical. It is not less rigorous. BfArM still expects prospective evidence, a reasonable comparator, pre-specified outcomes, and a credible causal argument that the application itself, rather than something else, drove the improvement. The methodological standards are set by the DiGAV, and a weak study in this category will fail just as reliably as a weak study in the medical benefit category.

Quantitative and qualitative together

One of the subtle traps in DiGA evidence planning is the assumption that quantitative data is everything and qualitative data is decoration. BfArM explicitly recognises that both matter.

Quantitative evidence is the numerical backbone: effect sizes, confidence intervals, group differences, adherence rates, patient-reported outcome scores. This is what the fast-track application lives or dies on.
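To make the numerical backbone concrete, here is a minimal sketch of the calculation behind a group difference with a confidence interval, the kind of figure a fast-track dossier leads with. All numbers are hypothetical and invented for illustration; they do not come from any real DiGA study.

```python
from statistics import NormalDist

def mean_difference_ci(m1, s1, n1, m2, s2, n2, alpha=0.05):
    """Difference in mean outcome change between two trial arms,
    with a normal-approximation confidence interval."""
    diff = m1 - m2
    se = (s1**2 / n1 + s2**2 / n2) ** 0.5          # standard error of the difference
    z = NormalDist().inv_cdf(1 - alpha / 2)        # two-sided critical value
    return diff, (diff - z * se, diff + z * se)

# Hypothetical example: change in a clinical score after twelve months.
# Intervention arm: mean change -0.8 (SD 1.1, n = 100)
# Control arm:      mean change -0.3 (SD 1.0, n = 100)
diff, (lo, hi) = mean_difference_ci(-0.8, 1.1, 100, -0.3, 1.0, 100)
print(f"difference {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# prints: difference -0.50, 95% CI [-0.79, -0.21]
```

A confidence interval that excludes zero, on a pre-specified endpoint, is the quantitative result the application is built around; the qualitative evidence below is what shows the difference matters to patients.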

Qualitative evidence is the context that makes the numerical results patient-relevant: how users experience the application, whether the benefit is meaningful in daily life, whether the workflow fits routine care, whether the structural improvement is actually felt by patients rather than merely measured by analysts.

Tibor's caution here draws on the risk management lesson from Section 1. Direct benefits are easy to quantify. Indirect benefits, such as reduced waiting time or better triage, are harder, but they are still real benefits and they still need formal documentation. DiGA applicants who present only quantitative outputs without the qualitative framing tend to get pushed back for "not demonstrating patient relevance." Applicants who present only qualitative framing without quantitative outputs tend to get pushed back for "not demonstrating effect."

The applicants who succeed present both, in a structured way that BfArM can evaluate against its own criteria.

A worked example

A Hamburg startup is building a digital programme for type 2 diabetes patients that combines education modules, structured self-monitoring of blood glucose readings entered by the patient, and behavioural coaching prompts. The device is Class IIa under Rule 11. The CE mark path is clear. The DiGA plan requires an evidence strategy.

Option A, the MDR-only path. The founder assembles a clinical evaluation under MDR Article 61 based on literature review of digital diabetes coaching interventions, a short post-market observation, and equivalence arguments to similar programmes in other countries. For the MDR file, this may well be sufficient. For DiGA, it is not.

Option B, the DiGA-aligned path. The founder designs a prospective randomised controlled trial in German type 2 diabetes patients insured under statutory health insurance. The intervention group uses the programme for twelve months; the control group receives usual care. The primary endpoint is change in HbA1c at twelve months, a well-established clinical measure of diabetes control. Secondary endpoints include self-reported quality of life using a validated instrument, adherence rates, and patient satisfaction. The sample size is powered for the primary endpoint. The analysis plan is locked before recruitment starts.
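"Powered for the primary endpoint" hides a concrete calculation. The sketch below uses the standard two-sample normal approximation; the effect size, standard deviation, alpha, and power are illustrative assumptions, not values from any real protocol, and a trial statistician would refine them.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-arm trial with a continuous
    primary endpoint, via the two-sample normal approximation."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z(power)            # target power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Hypothetical protocol assumptions: detect a 0.4 percentage-point
# difference in HbA1c change at twelve months, SD 1.0, alpha 0.05, 80% power.
print(n_per_group(delta=0.4, sd=1.0))  # prints: 99
```

Expected dropout gets added on top of this number before recruitment targets are fixed; the point is that the calculation is done, and written down, before the first patient is enrolled.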

This study, if well executed, gives the founder enough to make both claims: medical benefit through HbA1c improvement, and structural and procedural improvement through better self-management and adherence. It also serves the MDR clinical evaluation because the data speaks directly to device performance and benefit-risk.

Option C, the wasted path. The founder runs a small single-arm usability study with thirty recruited users, calls it a pilot, and assumes the DiGA fast-track reviewer will accept it as evidence of positive care effects. The application is returned with a request for prospective comparative evidence. The startup is now twelve to eighteen months behind schedule.

The Subtract to Ship playbook

Felix uses a five-step discipline with founders who want to reach the DiGA directory without running three different clinical studies.

Subtract the parallel clinical plans. Remove any plan that runs an MDR clinical evaluation and a separate DiGA study on different timelines. One underlying study, one dataset, two files.

Start from the DiGA question, not the MDR question. Ask first: what is the positive care effect claim? Medical benefit or structural and procedural improvement? What outcome would BfArM want to see? Design the study for that outcome. The MDR file can then be written on top of the same study because the MDR bar, for most Class I and Class IIa software, is lower than the DiGA bar. The reverse sequence almost never works.

Pre-specify everything. Write the hypothesis, the outcome measures, the analysis plan, the comparator, and the sample size calculation before the first patient is enrolled. Post-hoc adjustments to fit a DiGA claim after a negative result are the clearest route to rejection.
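One lightweight way to make "locked before enrolment" auditable is to serialise the analysis plan and record a checksum with a timestamp. This is an illustrative pattern, not a BfArM requirement, and every field name below is invented for the sketch.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical pre-specified analysis plan; every field is fixed
# before the first patient is enrolled.
analysis_plan = {
    "claim": "medical benefit",
    "primary_endpoint": "change in HbA1c at 12 months",
    "comparator": "usual care",
    "analysis": "ANCOVA adjusted for baseline HbA1c",
    "alpha": 0.05,
    "power": 0.80,
    "n_per_group": 99,
}

# Canonical serialisation gives a stable checksum: any later edit to
# the plan changes the hash, making post-hoc adjustments visible.
blob = json.dumps(analysis_plan, sort_keys=True).encode()
digest = hashlib.sha256(blob).hexdigest()
locked_at = datetime.now(timezone.utc).isoformat()
print(f"plan locked {locked_at}: sha256 {digest[:16]}...")
```

The same discipline can be achieved with a registry entry or a version-controlled protocol; what matters is that the pre-specified plan is provably older than the data.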

Plan both quantitative and qualitative collection from day one. Patient-reported outcomes, structured interviews, and user experience data belong in the trial protocol, not as a marketing afterthought. Budget and consent forms should reflect this from the start.

Engage BfArM early. BfArM offers preliminary scientific advice procedures for DiGA applicants. Going into the trial with an informal read on the agency's expectations reduces the risk of designing for the wrong endpoint.

Subtract to Ship for DiGA evidence means refusing the shortcut of "a small study first, a real study later." The runway does not stretch that far for most startups.

Reality Check

  1. Does the team have a written positive care effects claim, and is it stated as a medical benefit or a patient-relevant structural and procedural improvement?
  2. Is there a pre-specified primary outcome, measured with a validated instrument, in a population that reflects the German statutory health insurance target group?
  3. Has the team designed one clinical study that serves both MDR Article 61 and the DiGA fast-track evidence needs, or are two separate studies planned?
  4. Does the analysis plan include both quantitative outcomes and qualitative patient experience data?
  5. Has the team budgeted for a prospective comparative study, not only a single-arm pilot?
  6. Has the team sought preliminary scientific advice from BfArM, or is the first contact planned to be the fast-track application itself?
  7. Does the post-market surveillance plan under MDR Articles 83 to 86 continue collecting the data BfArM will want to see after listing?
  8. Does the financial plan assume the longer timeline required by prospective clinical evidence, or the shorter timeline of a literature-review-only MDR file?

Frequently Asked Questions

Is MDR clinical evaluation enough for DiGA? Almost never. MDR Article 61 asks whether the device performs safely and whether the benefit-risk ratio is acceptable. DiGA asks whether the device delivers a measurable, patient-relevant benefit to the German statutory health insurance system. The DiGA question is narrower and more specific, and it usually requires a prospective comparative study.

What is the difference between medical benefit and structural and procedural improvement? Medical benefit is a measurable improvement in a health outcome, such as reduced symptoms or improved clinical parameters. Structural and procedural improvement is a patient-relevant gain in how care is organised or delivered, such as adherence, coordination, or patient empowerment. Both are accepted categories under DiGA. Both require evidence.

Can a literature review alone establish positive care effects? Rarely. For a novel digital health application targeting a specific German population, literature alone will seldom carry the claim. A prospective study designed for the specific application and population is the norm.

Is qualitative evidence accepted by BfArM? Yes, alongside quantitative evidence. The DiGA framework recognises that some benefits, especially indirect ones, cannot be fully captured in numbers alone. Tibor's framing: quantitative establishes the effect, qualitative establishes the relevance.

Can a single trial serve both the MDR clinical evaluation and the DiGA evidence dossier? Yes, and this is the Subtract to Ship recommendation. One study, designed to the higher DiGA bar, satisfies both files because the MDR bar for most eligible classes sits below it. Planning two separate trials wastes time and money.

How long does it take to generate DiGA-level evidence? A prospective study with a meaningful follow-up period typically runs twelve to eighteen months from protocol lock to data lock, with additional time for analysis and reporting. The honest total timeline from product launch to DiGA listing is typically two to three years.

Sources

  1. Regulation (EU) 2017/745 on medical devices, consolidated text. Article 61, Annex XIV Part A.
  2. EN ISO 14155:2020+A11:2024, Clinical investigation of medical devices for human subjects, Good clinical practice.
  3. MDCG 2020-5 (April 2020), Clinical evaluation, equivalence, a guide for manufacturers and notified bodies.
  4. Digitale-Versorgung-Gesetz (DVG) and Digitale Gesundheitsanwendungen-Verordnung (DiGAV), German Federal Ministry of Health.
  5. BfArM guidance on positive care effects for DiGA fast-track applicants.