Value-based pricing ties what a payer pays for your device to the clinical or economic outcome it delivers. Done well, it unlocks premium prices for devices that genuinely work. Done badly, it transfers clinical risk from the hospital to your balance sheet — and your MDR post-market surveillance system becomes the contract's truth machine.

By Tibor Zechmeister and Felix Lenhard.

TL;DR

  • Value-based pricing means payment depends on an agreed outcome, not just on delivery of a unit.
  • Outcome-based contracts shift financial risk from the payer to the manufacturer, so the evidence backbone must be airtight.
  • MDR Articles 83–86 already require you to collect post-market data continuously; a value-based contract uses the same data for commercial purposes.
  • PROMs (Patient-Reported Outcome Measures) and real-world outcomes are the two data streams that make or break a deal.
  • Negotiate outcome definitions, measurement windows, attribution rules, and dispute resolution in writing before signing.
  • Most early-stage startups should not sign a full risk-sharing contract on their first product — use a hybrid model instead.

Why this matters for MedTech founders

A hospital director in Munich told one of our founders last year: "I do not care what your device costs. I care what it costs me per patient who actually gets better." That sentence is the whole business case for value-based pricing in three seconds.

European payers — statutory insurers, HTA bodies, hospital procurement — are tired of paying list prices for devices whose real-world benefit looks nothing like the pivotal trial. They are increasingly willing to pay more for devices that demonstrably work and less (or nothing) for devices that do not. That is both an opportunity and a trap. The opportunity is that a genuinely effective device can command a price that reflects its value rather than its bill of materials. The trap is that you are now underwriting clinical outcomes — which means your post-market surveillance system is no longer a regulatory obligation, it is the backbone of your P&L.

This post walks through how outcome-based contracting works for MedTech, what evidence payers actually demand, where the risks sit for a startup manufacturer, and how to negotiate a contract that does not blow up your cash flow. The regulatory spine is the post-market surveillance obligations in MDR Chapter VII (Articles 83–86), which already require you to collect most of the data your commercial team needs.

What MDR actually says about the data backbone

Value-based pricing is a commercial arrangement, not a regulatory one. The MDR does not mandate, define, or regulate pricing. But every outcome-based contract needs a continuous, credible stream of real-world data, and that is exactly what MDR Articles 83 through 86 require.

MDR Article 83 obliges every manufacturer to "plan, establish, document, implement, maintain and update a post-market surveillance system" proportionate to the risk class and appropriate for the type of device. Article 84 requires the PMS plan itself, with content specified in Annex III. Articles 85 and 86 require the PMS report (Class I) or the Periodic Safety Update Report (Class IIa, IIb, III). On top of that, Annex XIV Part B requires Post-Market Clinical Follow-Up (PMCF) as a continuous process, unless you can justify otherwise, and the PMCF plan must specify the methods and procedures for collecting clinical data.

In practice this means a compliant MDR manufacturer is already running — or should be running — the infrastructure to capture real-world performance, safety, and usage signals. A value-based contract simply reuses that infrastructure for a commercial purpose. If your PMS system is a spreadsheet somebody updates once a quarter, you are nowhere near ready to sign an outcome-based deal. If it is a structured pipeline with defined data sources, clean patient-level records, and validated endpoints, you have most of what a payer will demand.
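What "clean patient-level records with validated endpoints" means in practice can be sketched as a minimal record schema. The field names below are illustrative assumptions, not MDR-mandated terminology — the point is that each record links a pseudonymised patient, a specific device, and a pre-specified endpoint measured at defined time points:

```python
# Minimal sketch of the patient-level record a structured PMS/PMCF pipeline
# should be able to produce on demand. Field names are illustrative.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class OutcomeRecord:
    pseudonymised_id: str            # never raw identifiers in the export
    device_udi_di: str               # ties the record to a specific device model
    enrolment_date: date
    endpoint_name: str               # e.g. a validated PROM such as "ODI"
    baseline_value: float
    followup_value: Optional[float]  # None = lost to follow-up
    followup_date: Optional[date]
    adverse_event: bool              # vigilance remains a separate, independent stream

def endpoint_met(r: OutcomeRecord, min_improvement: float) -> Optional[bool]:
    """Returns None when follow-up is missing — how that case is counted
    is exactly what the contract's intention-to-treat rule must decide."""
    if r.followup_value is None:
        return None
    return (r.baseline_value - r.followup_value) >= min_improvement
```

If your system cannot emit records shaped roughly like this within days of a request, fix that before any commercial negotiation starts.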

One caution: do not let commercial needs distort your regulatory PMS. The PMS plan exists to protect patients, not to generate marketing claims. If you want additional outcome data for a contract, add it as a parallel data stream — often as an extension of the PMCF plan — with clear separation in how the data is used.

The evidence payers actually want

A payer evaluating an outcome-based offer will look for four things.

First, a clinically meaningful outcome. "The device worked as intended" is not an outcome. "HbA1c reduction of at least 0.5 percentage points at 6 months" is. For implants, it might be revision-free survival at 2 years. For a wound-care device, it might be time to complete healing. For digital therapeutics, it is very often a PROM score — PHQ-9 for depression, Oswestry for back pain, EQ-5D for general health utility. PROMs are now the currency of value-based care in much of Europe, and if your clinical evaluation does not include a validated PROM, you are negotiating with one hand tied.

Second, attributability. The payer will ask: how do we know the outcome is because of your device and not because of the drug the patient is also taking, or the physiotherapist, or regression to the mean? You need a measurement design that isolates your contribution — ideally a matched comparator arm, a pre/post design with a credible baseline, or a registry benchmark.

Third, data integrity. Who collects the data? Who audits it? What happens when a patient drops out? A payer who does not trust the data will not sign. Hospitals have been burned by manufacturer-run "real-world studies" that mysteriously produced perfect results, and procurement teams now read protocols like auditors.

Fourth, a defined measurement window. "Outcomes over the lifetime of the device" is not a window. Six months, twelve months, two years — with a pre-specified visit schedule and a pre-specified primary endpoint. Anything vaguer creates dispute.

The good news: for most Class IIa, IIb, and III devices, the MDR already requires PMCF that captures much of this. The gap you need to close is usually (a) adding validated PROMs if they are not already in your CER, and (b) building the attribution logic into your study design from the start.

A worked example: a digital musculoskeletal startup

Consider a Class IIa software-as-a-medical-device startup — a digital MSK therapy app for chronic lower back pain, CE-marked, reimbursed in Germany under DiGA and negotiating with a French mutuelle for a pilot.

List price proposal: €450 per 90-day course. The mutuelle declines. Instead, the procurement team offers a value-based contract with the following structure.

  • Primary outcome: Oswestry Disability Index improvement of at least 10 points at 90 days, measured at baseline and day 90 via the app.
  • Payment structure: €150 upfront per enrolled patient, plus €350 outcome payment if the primary endpoint is met.
  • Measurement window: 90 days from activation, with a 14-day grace period for the day-90 assessment.
  • Attribution: patients are their own controls (pre/post on ODI), with a non-responder rate benchmarked against the published literature ceiling of roughly 40 percent.
  • Data: collected in-app, exported monthly to a shared dashboard, independently reviewed quarterly by a clinical auditor appointed by the mutuelle.
  • Cap: the mutuelle caps total exposure at 500 patients in year one.

The startup runs the numbers. If 60 percent of patients hit the endpoint, the blended realised price is 150 + (0.60 × 350) = €360 per patient — below list but acceptable. If only 40 percent hit, realised price is €290 — below cost of service. If 75 percent hit, realised price is €412.50 — close to list.
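The blended-price arithmetic above is simple enough to keep in a one-line function, using the payment terms from this hypothetical contract:

```python
# Blended realised price under the example contract:
# €150 upfront per enrolled patient + €350 outcome payment per responder.
UPFRONT = 150.0
OUTCOME_PAYMENT = 350.0

def blended_price(response_rate: float) -> float:
    """Expected revenue per enrolled patient at a given response rate."""
    return UPFRONT + response_rate * OUTCOME_PAYMENT

for rate in (0.40, 0.60, 0.75):
    print(f"{rate:.0%} responders -> €{blended_price(rate):.2f} per patient")
```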

The startup signs, but only after three changes: (a) the endpoint is defined as intention-to-treat with patients lost to follow-up counted as non-responders unless non-follow-up is due to documented clinical improvement, (b) a minimum engagement threshold (at least 6 of 12 prescribed weekly sessions) is required for a patient to count toward the denominator at all, and (c) a quarterly review allows the price to re-open if the observed non-response rate exceeds a pre-specified ceiling caused by the mutuelle's patient selection, not the device.

Those three clauses are the difference between a contract that works and a contract that bankrupts you on the second cohort. The PMS system feeds all of it — and because the data stream was designed for the PMCF plan first, the additional effort to serve the contract is modest.
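Clauses (a) and (b) interact, and it is worth being precise about how. A sketch, using the example's numbers — note the assumption that patients below the engagement threshold fall out of the response-rate denominator entirely, while the intention-to-treat rule governs everyone who stays in:

```python
# Sketch of clauses (a) and (b) from the worked example. The exact
# interaction of the clauses is a contract matter; this encodes one
# plausible reading for illustration.
from dataclasses import dataclass
from typing import List, Optional

MIN_SESSIONS = 6       # clause (b): minimum engagement threshold (of 12)
ODI_THRESHOLD = 10.0   # primary endpoint: >= 10-point ODI improvement

@dataclass
class Patient:
    sessions_completed: int
    odi_baseline: float
    odi_day90: Optional[float]           # None = lost to follow-up
    improved_clinically: bool = False    # documented improvement despite dropout

def in_denominator(p: Patient) -> bool:
    # Clause (b): under-engaged patients do not count toward the denominator.
    return p.sessions_completed >= MIN_SESSIONS

def is_responder(p: Patient) -> bool:
    # Clause (a): intention-to-treat — missing day-90 data counts as
    # non-response unless dropout is due to documented improvement.
    if p.odi_day90 is None:
        return p.improved_clinically
    return (p.odi_baseline - p.odi_day90) >= ODI_THRESHOLD

def response_rate(patients: List[Patient]) -> float:
    denom = [p for p in patients if in_denominator(p)]
    return sum(is_responder(p) for p in denom) / len(denom) if denom else 0.0
```

Writing the rules down this concretely — even just as pseudocode in a contract annex — is the fastest way to surface disagreements before signature rather than at the first quarterly review.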

The Subtract to Ship playbook for value-based pricing

Subtract to Ship is blunt: most seed-stage startups should not sign a pure risk-sharing agreement as their first commercial contract. The variance is too high, the data infrastructure too new, and one bad cohort can kill your runway. Instead:

  1. Start with a hybrid model. Charge a list price with a modest outcome bonus (or a modest outcome rebate). Cap your downside. Learn how real patients perform before you bet the company on it.

  2. Design the evidence stack once, use it twice. Build PROMs and real-world endpoints into your PMCF plan from day one. The same data feeds your PSUR, your CER update, and your commercial contract. Do not build parallel systems.

  3. Pre-specify everything. Endpoint, window, analysis population, handling of dropouts, handling of non-compliant patients, audit rights, dispute resolution, price floor, price ceiling, exit clauses. The contract should be longer than the pitch deck.

  4. Build attribution into the trial, not the sales call. If your pivotal evidence has no credible comparator and no validated PROM, you cannot negotiate value-based pricing from a position of strength. Subtract the shiny features; add the endpoint that payers actually care about.

  5. Size the risk honestly. Run three scenarios — 50th, 25th, and 10th percentile of the outcome rate observed in your clinical evaluation. If the 10th percentile scenario breaks your unit economics, either cap the contract or renegotiate the structure.

  6. Treat the first contract as a template. Every term you concede in contract one will show up in contract two. If you give one payer a right to walk at month 6 with no penalty, every subsequent payer will ask for it.
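Step 5 above can be made concrete in a few lines. The percentile rates and the cost of service below are illustrative assumptions, not figures from any real device:

```python
# Stress-testing unit economics at the 50th, 25th, and 10th percentiles
# of the observed response-rate distribution. All figures are assumed
# for illustration, reusing the example contract's payment terms.
UPFRONT, BONUS = 150.0, 350.0
COST_OF_SERVICE = 310.0  # assumed fully loaded cost per patient

scenarios = {"p50": 0.62, "p25": 0.51, "p10": 0.43}  # illustrative rates

for name, rate in scenarios.items():
    realised = UPFRONT + rate * BONUS
    margin = realised - COST_OF_SERVICE
    verdict = "OK" if margin >= 0 else "breaks unit economics -> cap or restructure"
    print(f"{name}: realised €{realised:.2f}, margin €{margin:+.2f} ({verdict})")
```

With these assumed numbers the 10th-percentile scenario lands below cost of service — exactly the situation where the playbook says to cap the contract or renegotiate the structure.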

The devices that win value-based contracts are not always the most technologically advanced. They are the ones whose manufacturers genuinely understand their real-world performance distribution and can price the risk.

Reality Check

  1. Does your clinical evaluation include at least one validated PROM relevant to your intended purpose?
  2. Does your PMCF plan specify how you will collect real-world outcomes at defined intervals, not just safety signals?
  3. Can your PMS system produce patient-level outcome data within 30 days of a request?
  4. Do you know the response rate distribution for your device across patient subgroups — or only the headline mean?
  5. If a payer offered you 100 percent outcome-contingent pricing tomorrow, could you model your cash flow at the 10th percentile of historical performance?
  6. Is your PMCF plan approved by your Notified Body, and does adding commercial outcome data require a change notification?
  7. Do you have legal review capacity in-house or on retainer for contracts that run 30+ pages of clinical and commercial terms?
  8. Have you separated the clinical governance of your PMS from the commercial use of the data, to protect against conflict-of-interest accusations?

Frequently Asked Questions

Is value-based pricing allowed under MDR? MDR does not regulate pricing. It regulates safety, performance, and post-market data. You can sign any commercial arrangement you like, as long as your underlying regulatory obligations — PMS, PMCF, vigilance — are met independently of the contract.

Do I need a separate clinical study for a value-based contract? Usually not. A well-designed PMCF plan under Annex XIV Part B can generate most of the data a payer needs. Adding validated PROMs and attribution logic is cheaper than running a parallel study.

What happens if we cannot meet the outcome? That depends entirely on contract structure. In a pure risk-sharing agreement, you refund or forfeit payment. In a hybrid, you lose the bonus but keep the base. This is why the downside clauses matter more than the upside clauses.

Can I use PMS data that was originally collected for regulatory purposes in a commercial contract? Yes, but declare the dual use in your PMS plan, obtain patient consent that covers both purposes, and keep the regulatory analysis independent of the commercial analysis to avoid conflict of interest.

Are PROMs really enough evidence for a payer? For many conditions — pain, mental health, functional status — validated PROMs are the gold standard because the patient's experience is the outcome. For implants and diagnostics, payers usually want a clinical endpoint as well.

How do I price the risk? Model three scenarios using the outcome distribution observed in your clinical evaluation: expected case, downside case, and stress case. Your realised price at the downside case should still cover cost of service. If it does not, the contract is too aggressive for where your company is.

Sources

  1. Regulation (EU) 2017/745 on medical devices, consolidated text. Articles 83, 84, 85, 86; Annex III; Annex XIV Part B.
  2. MDCG 2025-10 (December 2025) — Post-market surveillance.
  3. EN ISO 13485:2016+A11:2021 — Quality management systems for medical devices.