An HTA submission is the formal evidence dossier a manufacturer files with a Health Technology Assessment body so that a national or regional payer can decide whether a medical device deserves public coverage. The dossier has to answer a harder question than the MDR: not "is the device safe and does it perform as intended?" but "does it produce enough clinical and economic value, relative to the current standard of care, to justify public spending?" Getting the answer right means designing the evidence years before the submission is filed. Getting it wrong after CE marking usually means starting over.

By Felix Lenhard and Tibor Zechmeister. Last updated 10 April 2026.


TL;DR

  • An HTA submission is a structured evidence dossier submitted to a Health Technology Assessment body, not a regulatory filing. It addresses clinical value, economic value, and budget impact relative to an explicit comparator.
  • HTA bodies sit between the manufacturer and the payer in most major European markets. Their assessment is typically the practical gate to statutory coverage.
  • HTA evidence standards are higher than MDR standards because HTA answers a different question. MDR certification is necessary but not sufficient for a positive HTA outcome.
  • The biggest single determinant of an HTA submission's outcome is the clinical development plan written years earlier. The comparator, the endpoints, and the patient population chosen then decide whether the dossier can answer the HTA question at all.
  • A submission built on post-hoc literature, wishful equivalence, and engineering-flavoured endpoints is the most common failure mode. Founders who only discover the HTA question after CE marking typically pay for a second, larger study.

Why HTA submissions decide revenue

A certified medical device in Europe has permission to be placed on the market. It does not yet have a path to revenue. In most national health systems, the path to revenue runs through a coverage decision made by a payer, and the payer in turn relies on an assessment produced by a Health Technology Assessment body. That assessment is shaped by the submission the manufacturer files. The submission is where the company tells the story of what the device is worth, in the specific language the HTA body expects.

Most MedTech founders meet the HTA question too late. They finish MDR certification, turn toward reimbursement, and discover that the clinical evidence collected for the CE mark answers a question no HTA body is asking. The MDR asked about safety and performance against the manufacturer's stated intended purpose. The HTA body asks about comparative effectiveness against the current standard of care, measured on outcomes that matter to patients and payers, priced against a defensible health economic model. Different question, different evidence, different dossier. The gap between the two is where most bad HTA outcomes are manufactured.

This post walks through what an HTA submission is in general terms — what it evaluates, what goes in the dossier, how the clinical layer differs from the economic layer, what the review process looks like, and where submissions most often fail. The details of any specific country's HTA body belong in the country posts (748, 749, 750, 751, 753). What follows is the shared logic that holds across them.

What an HTA body actually evaluates

An HTA body is an independent or quasi-independent technical organisation that assesses the value of a health technology — a drug, a device, a diagnostic, a procedure — so that a payer can make a rational coverage decision. The assessment is structured around a defined set of questions that differ slightly in emphasis from country to country, but that share a common core.

The core questions are four. What is the unmet clinical need the technology addresses? What is the comparative clinical benefit of the technology against the current standard of care, measured on endpoints that matter to patients? What is the economic consequence of adopting the technology — cost per outcome, cost per quality-adjusted life year, or budget impact across the eligible population? And what is the overall added value, integrating clinical and economic dimensions, given the alternatives the payer could fund instead?
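The economic questions can be made concrete with toy arithmetic. The sketch below is illustrative only: the function names are invented and every figure is a placeholder, not a benchmark for any country or device class.

```python
# Toy illustration of two quantities an HTA body asks about: cost per
# QALY gained (an ICER) and annual budget impact. All numbers are
# placeholders; real inputs come from the trial and from local cost data.

def icer(cost_new, cost_soc, qaly_new, qaly_soc):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_soc) / (qaly_new - qaly_soc)

def budget_impact(eligible_patients, uptake, cost_new, cost_soc):
    """Net annual spend if the uptake fraction switches to the device."""
    return eligible_patients * uptake * (cost_new - cost_soc)

# Device pathway costs 12,000 vs 9,000 per patient and yields
# 6.2 vs 6.0 QALYs over the model horizon (all invented figures).
print(icer(12_000, 9_000, 6.2, 6.0))               # roughly 15,000 per QALY
print(budget_impact(50_000, 0.10, 12_000, 9_000))  # roughly 15 million per year
```

Whether a given cost per QALY is acceptable depends entirely on the country's willingness-to-pay conventions, which is one reason the same clinical result can pass one HTA body and fail another.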

A submission that answers those four questions clearly has a chance. A submission that treats them as formalities to get past on the way to an expected "yes" does not. HTA bodies are structurally sceptical — their function is to stop weak evidence from turning into public spending. Founders who treat HTA as a paperwork exercise underestimate both the rigour of the process and the consequences of a negative verdict.

The evidence hierarchy

HTA bodies work with an evidence hierarchy that determines how much weight a given piece of clinical data carries. The hierarchy is not rigid — a well-designed observational study can outrank a badly designed randomised trial — but the general order is widely shared across European HTA bodies.

At the top sit randomised controlled trials against an active comparator representing the current standard of care, with pre-specified clinical endpoints, adequate sample size, and follow-up long enough to capture the outcomes that matter. Below them sit randomised trials against placebo or usual care, single-arm trials with historical controls, high-quality prospective registries, retrospective observational studies, and expert consensus, roughly in that order. Literature reviews and equivalence claims sit near the bottom when used as standalone evidence, and higher when used to contextualise a primary study.

A device that brings an RCT against standard of care to its HTA submission starts from a defensible position. A device that brings only the clinical evidence assembled for the CE mark — which under MDR Article 61 and Annex XIV may legitimately be built on literature, equivalence, or a smaller clinical investigation — usually starts from a weaker one. The MDR evidence base is necessary. It is often not sufficient.

Clinical evidence versus economic evidence

An HTA dossier has two analytically distinct layers. The clinical layer establishes that the device produces a measurable benefit compared to the current standard of care. The economic layer translates that benefit into monetary terms the payer can compare against its alternatives. Both layers have to work. A device with a strong clinical story and a weak economic model can still fail. A device with an impressive economic model built on weak clinical inputs fails reliably.

The clinical layer is built around comparative effectiveness. The comparator must be the treatment the target patient population is receiving today — not a historical standard, not a theoretical ideal, not a convenient straw man. The endpoints must be the outcomes patients and payers care about: mortality, morbidity, hospitalisation, quality of life, functional status, resource use. Engineering-flavoured endpoints — accuracy, precision, device performance — may satisfy the MDR but rarely move an HTA verdict by themselves. The statistical design must be adequate for the effect size the device can realistically achieve, which means real power calculations done before the study runs, not after.
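That last point, a power calculation done before the study runs, is straightforward arithmetic. A minimal sketch using the standard normal approximation for two proportions; the complication rates and the function name are invented for this example:

```python
# Per-arm sample size for a two-arm trial comparing complication rates
# (a patient-level endpoint), via the two-proportion normal approximation.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p_control, p_device, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    p_bar = (p_control + p_device) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_control * (1 - p_control)
                              + p_device * (1 - p_device))) ** 2
    return ceil(numerator / (p_control - p_device) ** 2)

# Standard of care at a 20% complication rate, device hoped to reach 12%:
print(n_per_arm(0.20, 0.12))  # about 330 patients per arm, before attrition
```

Run before the protocol is locked, this number is a planning input. Run afterwards to excuse an underpowered result, it convinces nobody.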

The economic layer is built on top of the clinical layer. It converts clinical effect into cost terms using a model — cost-effectiveness, cost-utility, or budget impact, depending on the country's conventions — populated with transparent, defensible inputs. The model's sensitivity to its key assumptions has to be tested openly. HTA bodies are experienced readers of these models. They see the trick where an unrealistic assumption turns a modest clinical effect into a dramatic economic result, and they downgrade the submission accordingly.
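Sensitivity testing of that kind can be sketched in a few lines. This is not a real decision model: the parameter names, base-case values, and the one-way sweep below are all invented for illustration.

```python
# One-way sensitivity sweep on a toy cost-utility result. A real dossier
# would use a full decision model with locally sourced costs and utilities.

BASE = {
    "device_cost": 12_000.0,  # per patient, device pathway (placeholder)
    "soc_cost": 9_000.0,      # per patient, standard of care (placeholder)
    "qaly_gain": 0.20,        # incremental QALYs claimed from the study
}

def icer(params):
    """Cost per QALY gained under one set of assumptions."""
    return (params["device_cost"] - params["soc_cost"]) / params["qaly_gain"]

def one_way(param, low, high):
    """Recompute the ICER with one input pushed to its low and high bound."""
    return [(v, icer(dict(BASE, **{param: v}))) for v in (low, high)]

# The load-bearing assumption: halve the QALY gain and the ICER doubles.
for value, result in one_way("qaly_gain", 0.10, 0.30):
    print(f"qaly_gain={value:.2f} -> ICER={result:,.0f} per QALY")
```

An experienced reviewer runs exactly this exercise on the submitted model; if the conclusion only holds at the optimistic end of a plausible input range, the assessment will say so.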

Patient-level outcomes

The endpoint question is where many device studies go wrong years before the HTA dossier is assembled. A device measured on its own performance metrics — signal quality, measurement accuracy, uptime — is not measured on anything an HTA body can use. The endpoint that matters is what happens to the patient when the device is used.

Patient-level outcomes translate the device's effect into consequences the health system actually cares about. Mortality is the clearest and, for most devices, the least achievable. Morbidity — complication rates, adverse events avoided, disease progression prevented — is more often within reach. Quality of life, captured through validated instruments appropriate to the disease area, is central for chronic conditions and for procedures that change how patients live rather than how long. Functional status matters for anything touching mobility, independence, or daily activity. Resource use — hospital days, readmissions, outpatient visits, downstream procedures — sits at the intersection of clinical and economic evidence and is often the most convincing single metric for a device with a strong operational story.

Choosing the endpoints during the clinical development plan, in consultation with likely HTA comparators in the priority markets, is the single highest-leverage act of HTA preparation. It costs almost nothing during planning and cannot be retrofitted cheaply afterwards.

The submission process in general terms

The HTA submission process looks different in each country, but in general terms it follows a recognisable shape. The manufacturer prepares a structured dossier following the HTA body's template or methodological guide. The dossier is filed with the relevant body. A scientific committee or review panel assesses the dossier, usually with one or more rounds of questions back to the manufacturer. An assessment report is produced, followed by a recommendation to the payer. The payer makes the coverage decision, which may include negotiated conditions on price, patient population, or data collection.

At each step, the manufacturer has less control than they expect. The template dictates structure. The review panel interprets the evidence through its own methodological lens. Questions from the panel can expose gaps that were not visible in the original dossier, and answers have to hold up under scrutiny because a weak answer locks in a weaker recommendation. Negotiation at the payer step is real but constrained by what the HTA recommendation says is defensible.

The practical implication for founders is that the submission is not a document you write at the end. It is a document whose content was decided years earlier by the clinical development plan, the comparator choice, the endpoint selection, and the study design. The writing at the end is the visible tip of the work. The invisible part is everything that made the evidence available to write about in the first place.

The review timeline

HTA reviews generally take months, not weeks, and the calendar is structured around formal submission windows, question-and-answer cycles, and committee meetings that do not move for any single applicant. A realistic planning horizon from dossier submission to final recommendation is several months in a best case and considerably longer when the evidence base raises questions or when the committee requests additional analyses.

The timeline interacts badly with startup cash runways. A company that begins assembling its HTA dossier the quarter after CE marking is already behind. By the time the review concludes, the coverage decision is negotiated, and the payment rate is set, the original runway projection has been overrun. Founders who plan for HTA from the clinical development plan onwards avoid this; founders who treat HTA as a post-certification task discover it the expensive way.
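The runway arithmetic can be made explicit. Every number below is a placeholder chosen to show the shape of the calculation, not an estimate for any market:

```python
# Back-of-envelope check: months to first reimbursed revenue vs cash runway
# at CE marking. All figures are hypothetical placeholders.

MONTHS = {
    "dossier_preparation": 6,
    "hta_review_and_qa": 9,
    "price_negotiation": 6,
    "first_invoices_paid": 3,
}

monthly_burn = 150_000    # EUR per month, hypothetical
cash_on_hand = 2_500_000  # EUR at CE marking, hypothetical

months_to_revenue = sum(MONTHS.values())     # 24 months in this sketch
runway_months = cash_on_hand / monthly_burn  # about 16.7 months

print(f"Months to first reimbursed revenue: {months_to_revenue}")
print(f"Runway at CE marking: {runway_months:.1f} months")
print(f"Gap to fund if HTA starts only at CE marking: "
      f"{months_to_revenue - runway_months:.1f} months")
```

Starting dossier preparation during the clinical phase moves the first item off the post-certification clock, which is often the difference between bridging the gap and not.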

The sequence across multiple countries compounds the problem. Because reimbursement is a national competence, an HTA submission in one country does not substitute for one in another. Documentation overlap between countries is real — the clinical evidence is largely transferable, the economic model has to be re-parameterised for local costs and comparators — but the review clocks run independently, and the cumulative multi-country timeline is the gating factor for pan-European revenue (post 748).

Common mistakes

  • Treating the MDR clinical evaluation dossier as the HTA evidence base. The MDR answers a different question, and an HTA body that is asked to evaluate MDR-grade evidence will produce an MDR-grade outcome — sufficient for certification, insufficient for coverage.
  • Choosing the comparator for the clinical study based on what the team wanted to prove rather than on the current standard of care in the target market. A study against a weak comparator looks impressive and convinces no HTA body.
  • Picking endpoints for engineering reasons — measurement accuracy, device performance — rather than for patient or payer reasons. Endpoints that do not translate into clinical or economic value leave the HTA body nothing to assess.
  • Building a health economic model on optimistic assumptions that cannot survive sensitivity analysis. Experienced HTA readers find the load-bearing assumption within minutes and push on it until the result collapses.
  • Starting the HTA submission after CE marking and discovering only then that the study design, the patient population, or the follow-up duration will not answer the HTA question. The only remedy at that point is a second, larger, comparative study — expensive and slow.
  • Submitting the same dossier to every country without adapting it to local comparators, cost structures, and methodological conventions. Every HTA body has a house style, and dossiers that ignore it lose credibility before the scientific review begins.
  • Assuming that a positive HTA assessment means reimbursement. It usually means a recommendation to the payer, which is a necessary condition for coverage, not a guarantee of it.

The Subtract to Ship angle

Subtract to Ship applied to HTA submissions means refusing to assemble evidence that does not serve the specific question the HTA body will ask. Every clinical endpoint, every economic input, every comparative analysis has to trace back to a decision the reviewer will have to make. Evidence that does not connect to that decision is waste, and waste in a constrained startup kills the company that assembled it (post 065).

It also means deciding, early, which HTA bodies matter first. A clinical development plan designed around the priority markets' HTA expectations produces a dossier usable in those markets. A plan that tries to satisfy every possible HTA body simultaneously usually fails to satisfy any of them well. Pick the two or three markets that define success, read their methodological guides, design the evidence to answer their questions, and let the secondary markets come later.

The hardest subtraction is removing the assumption that HTA is a downstream formality. It is not. It is the gate the device has to pass to generate revenue, and the submission is the visible output of years of upstream decisions. Treating those upstream decisions as the HTA-critical choices they are produces dossiers that clear the bar. Treating HTA as an administrative step at the end produces submissions that do not.

Reality Check — Where do you stand?

  1. Do you know, for each of your priority markets, which HTA body will assess your device, and have you read its current methodological guide end to end?
  2. Has your clinical development plan been designed to answer the comparative effectiveness question against the current standard of care, or only the MDR safety and performance question?
  3. Are your primary endpoints patient-level outcomes — mortality, morbidity, quality of life, resource use — or engineering performance metrics dressed up in clinical language?
  4. Do you have a health economic model whose load-bearing assumptions can survive open sensitivity analysis without the result collapsing?
  5. Have you budgeted realistic review timelines in your financial model, including question-and-answer cycles and committee schedules?
  6. Are you prepared to adapt the dossier to each HTA body's house style and local parameterisation, rather than submitting a single template everywhere?
  7. If the HTA assessment came back tomorrow with a conditional recommendation requiring additional data collection, is your company positioned to deliver that data or would the conditional outcome become a permanent block?

Frequently Asked Questions

What is an HTA submission for a medical device? It is a structured evidence dossier submitted by a manufacturer to a Health Technology Assessment body so that a payer can decide whether to cover the device from public funds. The dossier covers clinical value, economic value, and budget impact relative to the current standard of care.

Is an HTA submission the same as a regulatory submission under MDR? No. MDR certification addresses whether the device is safe and performs as intended under Regulation (EU) 2017/745. An HTA submission addresses whether the device produces enough clinical and economic value to justify public spending. Different question, different audience, different evidence standards.

Can the clinical evidence used for CE marking also support an HTA submission? Partly, but rarely in full. MDR clinical evidence under Article 61 and Annex XIV can draw on literature, equivalence, or clinical investigation. HTA bodies generally expect comparative effectiveness data against the current standard of care, measured on patient-level outcomes. The two dossiers overlap but are not interchangeable.

How long does an HTA review take? In general terms, months rather than weeks, with variation across countries and technology types. Formal review windows, question-and-answer cycles, and committee meeting schedules all contribute. The total wall-clock time from submission to final recommendation frequently exceeds founders' planning assumptions.

What happens if an HTA submission is rejected or receives a negative assessment? A negative assessment typically blocks statutory coverage in that country until stronger evidence is produced. The practical remedy is usually a second, larger comparative study, which is expensive, slow, and sometimes not feasible because the device is already on the market. Preventing the bad outcome by designing the evidence correctly from the start is almost always cheaper than recovering from it.

Sources

  1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, Article 32 (Summary of safety and clinical performance). Official Journal L 117, 5.5.2017.
  2. Treaty on the Functioning of the European Union, Article 168 — the legal basis under which reimbursement and Health Technology Assessment decisions remain a competence of the member states.
  3. National Health Technology Assessment bodies across Europe — their methodological guides, submission templates, and evidence frameworks are the authoritative reference for any specific country submission and must be consulted in their current version before a dossier is designed.
  4. Published methodological frameworks on comparative effectiveness research and health economic modelling maintained by national HTA bodies and by international HTA networks.

Current-process verification note: HTA methodologies, submission templates, review timelines, and evidence thresholds evolve continuously at the national level. Any operational plan based on this general framing must verify the current state of the target country's HTA body, submission requirements, and review process directly against the national authorities before clinical development decisions are locked in.


This post is part of the Funding, Business Models & Reimbursement series in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. The HTA submission is where the years of upstream clinical development decisions become visible, and the companies that understand this early are the ones whose certified devices eventually turn into reimbursed revenue.