A continuously learning AI medical device is one whose model parameters change on their own in the field — retraining on live data, adjusting weights to observed inputs, improving over time without a deliberate release event. Under MDR in 2026, there is no clean CE marking pathway for a device that behaves this way without a defined change envelope. MDR is built on the assumption that the device placed on the market has a specified configuration, that the configuration has been assessed, and that significant changes to it are re-assessed before they reach patients. A model that silently retrains between audits produces a stream of configurations that the original conformity assessment never saw. The workable pathway today is a predetermined change control plan: the technical documentation pre-specifies exactly which parameters can change, on what trigger, within what performance bounds, and every update inside that envelope is treated as already covered. Outside the envelope, the update is a significant change and re-assessment follows. The challenge is genuinely unsolved at the frontier, and founders in 2026 have to build around that fact, not around a wish for it to be otherwise.

By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.


TL;DR

  • Continuous learning means the model updates its behaviour after the device has been placed on the market, without a deliberate version release. MDR has no clean provision for that.
  • MDR Article 51 and Annex VIII Rule 11 classify the device by the severity of the decisions it supports. Classification is independent of whether the algorithm learns continuously or not.
  • Every update to a certified device has to be classified by significance, and significant changes require notification and re-assessment. A continuously learning model routes around that process unless a defined envelope exists up front.
  • The predetermined change control plan is the working pattern for teams that need post-market learning. The Notified Body assesses the envelope at the initial conformity assessment; updates inside the envelope are already covered.
  • Drift detection is a separate obligation. Even a fully locked model needs it because input distributions shift in the field, and MDR Article 83 requires a PMS system that catches degradation.
  • For 2026, the honest default is: ship with a bounded, Notified-Body-agreed change envelope or ship locked with a disciplined release cadence. Anything looser is not a regulatory strategy — it is a bet on a clarification that has not been issued.
  • The EU AI Act (Regulation (EU) 2024/1689) adds a further horizontal layer on AI systems in safety-critical use, and its interaction with MDR on post-market model updates is still being worked out.

Why continuous learning is the edge case that breaks the frame

Every AI MedTech founder we meet eventually asks the same question in the same order. First: can the model keep learning? Then: what does the Notified Body say? Then: what is everyone else doing? The order matters because the answer to the first question turns out to depend almost entirely on the answer to the second, and the answer to the third is that nobody has yet found a clean way to do what the engineering team originally wanted.

The engineering ambition is reasonable. Medical AI improves with data. A diagnostic model that sees a thousand new cases a month has the potential to outperform one frozen at release. Investors hear that story and like it. Clinical partners hear it and like it. Then the conversation with the Notified Body happens, and the shape of the problem becomes clear. MDR was written to regulate devices that have a configuration. The configuration is specified in the technical documentation, assessed under Article 51 and the relevant annexes, and placed on the market. If the configuration changes materially, the change has to be assessed before the changed device reaches patients. This is not an anti-AI bias. It is the central mechanism by which MDR protects patients from devices that have drifted away from the thing the Notified Body approved.

A continuously learning model makes that mechanism hard to apply. The configuration on any given day is not the configuration that was assessed. Even if the drift is small, the device that the patient sees is not literally the device in the file. The framework needs a way to say "the rules of how this model evolves have themselves been assessed, and updates that stay within those rules are already covered." That framing does not appear verbatim in the MDR text. It has to be constructed, in the technical documentation, through the change control plan, and agreed with the Notified Body. Until it is, the device does not ship.

This post is the sibling of locked versus adaptive AI algorithms under MDR, which frames the two ends of the spectrum. Here the focus is specifically on the unsolved frontier — continuous learning — and what startups should actually do about it in 2026.

Why continuous learning is hard under MDR

MDR does not treat AI as a separate category. A device that happens to use a continuously learning model is still a medical device under Article 2(1), classified under Article 51 and Annex VIII, and subject to the same Annex I general safety and performance requirements as any other software medical device. Rule 11 is the rule that catches most AI decision-support and monitoring software, and it is indifferent to whether the model learns in the field or not — the class is driven by the severity of the decision the information is used to make. MDCG 2019-11 Rev.1 (June 2025) on qualification and classification of software applies without modification. EN ISO 14971:2019+A11:2021 applies to the risk management process.

The hard part is not classification. It is the assumption of configuration stability. MDR's conformity assessment model is built around the idea that a Notified Body assessed a specific thing, that specific thing was placed on the market, and the manufacturer has a change control process to handle the cases where the specific thing needs to change. A model that retrains itself does not fit that pattern cleanly. Each retraining event produces a configuration that the Notified Body did not see. The risk assumptions in the risk file were written for the configuration that was assessed. The clinical evaluation was written for the configuration that was assessed. The PMS plan was written for the configuration that was assessed. Detach the configuration from the file, and the file starts describing a device that no longer exists in the field.

There is a second hard problem, which is that continuously learning models can degrade as well as improve. A retraining loop optimising for a proxy target can move away from clinical benefit even while the internal metrics look healthy. A model that updates on biased field samples can amplify a bias that was absent at certification. None of these failure modes are visible at the moment the adaptive update runs. They appear later, in the field, as harm. MDR's mechanism for catching this kind of silent drift is the conformity assessment and the change control process. A continuously learning model that bypasses both is operating outside the safety mechanism the Regulation relies on.

The change control problem with re-verification

Every change to a certified device has to be classified by significance. Significant changes require notification to the Notified Body and, where they affect safety or performance, re-assessment of the relevant parts of the technical file. Minor changes are documented and proceed. The criteria for significance are not vague — Notified Bodies apply established change notification frameworks to make the call.

Now apply that to a continuously learning model. Is a retraining event a significant change? The honest answer is that you cannot know without evaluating the retrained model against the same reference test battery the original assessment used. If the behaviour has shifted beyond the bounds assessed — in performance, in subgroup fairness, in failure modes — the change is significant. If it has not, it is minor. Either way, the determination has to be made on evidence, not on hope. The practical consequence is that a pipeline capable of retraining the model is only useful, in a regulatory sense, when it is paired with a test infrastructure capable of verifying each retrained version against the same criteria used at certification. At that point the "continuous" learning is effectively a very fast release cadence with automated change management, not a free-running adaptation. And that is the only shape that holds up.
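To make the significance determination concrete, here is a minimal sketch of such a gate in Python. The metric names, the bound values, and the decision rule are our illustrative assumptions, not anything a Notified Body has blessed; the point is only that once the frozen reference battery and the assessed bounds exist, the minor-versus-significant call becomes mechanical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssessedBounds:
    """Performance bounds fixed at the original conformity assessment."""
    auc_min: float
    auc_max: float
    max_subgroup_drop_pp: float  # allowed subgroup AUC drop, percentage points

def classify_change(candidate: dict, baseline: dict, bounds: AssessedBounds) -> str:
    """Classify a retrained model's change on the frozen reference battery.

    Returns 'minor' only if the candidate stays inside the assessed bounds,
    overall and per subgroup; anything else is 'significant'.
    """
    if not (bounds.auc_min <= candidate["auc"] <= bounds.auc_max):
        return "significant"
    for subgroup, baseline_auc in baseline["subgroup_auc"].items():
        drop_pp = (baseline_auc - candidate["subgroup_auc"][subgroup]) * 100
        if drop_pp > bounds.max_subgroup_drop_pp:
            return "significant"
    return "minor"
```

A pipeline that runs every retrained candidate through something like this, on evidence rather than hope, is exactly the "very fast release cadence with automated change management" described above.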

The startup that assumes the retraining pipeline alone solves the regulatory problem has the answer backwards. The retraining pipeline is the engineering capability. The change control and re-verification loop is the regulatory mechanism that makes the capability usable. Skipping the second one does not give you a faster path — it gives you no path at all.

Predetermined change control plans — the concept

The working pattern for teams that genuinely need post-market learning is the predetermined change control plan. It is the mechanism the field has converged on, and it is the only one in 2026 that Notified Bodies can work with in the MDR context.

The idea is straightforward. In the technical documentation submitted for initial conformity assessment, the manufacturer pre-specifies a bounded envelope of changes that are authorised to happen in the field without triggering a new assessment. The envelope is not a general permission to "keep learning." It is a concrete, written-down list of which parameters can change, on what trigger, within what performance bounds, with what revalidation protocol, and with what documentation trail. The Notified Body assesses the envelope itself — the rules of change, not each future change — as part of the initial conformity assessment. Updates that fall inside the envelope are treated as already covered. Updates that fall outside are significant changes and follow the normal notification and re-assessment route.

The mechanism is conceptually similar to predetermined change control plan approaches that have been discussed in international regulatory forums and explored in other jurisdictions. In the EU in 2026, there is no single codified form of this in the MDR text, and the practical contours are being worked out between manufacturers, Notified Bodies, and the Medical Device Coordination Group. What is consistently clear from the Notified Body side is that envelopes have to be specific, defensible, and tied to concrete performance criteria. Envelopes that read "the model may improve as new data arrives" do not survive review. Envelopes that read "calibration parameters A, B, C may be updated monthly, conditional on AUC on the frozen reference set remaining within the band [x, y], subgroup performance not dropping by more than z percentage points, each update logged with version, data snapshot, revalidation result, and RA sign-off" can.

The predetermined boundary approach in practice

Building a defensible envelope is real work. Founders who assume it is a checkbox discover otherwise.

The envelope needs a frozen reference dataset that the retrained model can be evaluated against. The dataset has to be representative of the intended use population, isolated from training data, and stable over time. Building and curating that dataset is a project on its own, and it is the foundation of every update decision for the life of the device. The envelope needs concrete numeric performance bounds — overall and by subgroup — and a pre-specified response if a retraining moves the model outside those bounds. The envelope needs a definition of which parameters are allowed to change and which are not. A model that allows any parameter to move is not an envelope; it is a surrender. The envelope needs a documented update cadence — how often the retraining runs, how the new version gets promoted, who signs off, how the change is logged, how a rollback works if the new version turns out worse.

Engagement with the Notified Body has to happen before the envelope is finalised in the technical documentation. Discovering at audit that the envelope is too loose is an expensive lesson. The Notified Body can and will push back on specific parameter choices, specific bound values, specific trigger conditions. The time to have that conversation is during the pre-audit dialogue, not after the file has been submitted.

The upside, when the envelope is approved, is a device that can genuinely improve in the field through automated updates without each update being a separate regulatory event. The downside is that defining the envelope rigorously is as much work as certifying the initial device, and the envelope is only as useful as the manufacturer's discipline in staying inside it. Violating the envelope by pushing an update that does not meet the criteria is not a minor documentation slip — it is placing an unassessed device on the market, and the consequences scale accordingly.

Drift detection — the other half of the story

Continuous learning gets most of the attention, but drift is the problem that affects every AI medical device, locked or adaptive. A locked model deployed at a new site, into a shifted patient mix, against an updated imaging pipeline, can degrade silently. The model has not moved; its effective performance has. MDR Article 83 requires a PMS system proportionate to the risk class and appropriate for the device. For an AI medical device, appropriate PMS means drift detection with real instrumentation — not complaint handling alone.

The operational content is specific. Monitoring of input distributions against the distributions the model was assessed on. Monitoring of output distributions against expected ranges. Periodic re-evaluation of the deployed model against the frozen reference dataset, on a defined cadence. Clinical outcome tracking where feasibility and ethics allow it. Defined thresholds for escalation, with a response pathway that runs on its own schedule rather than depending on an overworked RA manager to notice something is off.

The connection to the continuous learning question is important. Most of what teams actually want from continuous learning is the ability to respond to drift. Drift detection plus a disciplined re-release cycle delivers most of that benefit without any of the unresolved regulatory risk of true continuous learning. The model is locked between releases, the drift monitor watches for degradation, and when degradation is detected the team does a deliberate retraining, runs the new model through change control, and releases it as a version. The adaptation happens. It just happens through the change control mechanism rather than around it. For most products in 2026, that is the right answer.

What startups should actually do in 2026

Concrete recommendations for teams working on AI medical devices right now.

Do not ship a truly free-running continuously learning model in the EU. There is no clean CE marking pathway for it in 2026. Any team betting on one is betting on guidance that has not been issued. The cost of that bet if it does not pay off is the entire project.

If you need post-market learning, scope it tightly and build the envelope deliberately. Start with the smallest set of parameters that can change, the tightest trigger conditions, and the strictest performance guard rails. Work backwards from the concrete business case — which specific improvement in the field justifies the regulatory overhead — and include only the adaptation that delivers that case. Vague ambitions for "the model to keep getting better" do not survive Notified Body review. Specific envelopes with concrete numbers do.

Engage the Notified Body early on the envelope. Do not finalise the technical documentation and then find out the envelope is unacceptable. The pre-audit dialogue is the cheapest possible place to have the disagreement.

Build drift detection into the product from day one. Input monitoring, output monitoring, frozen reference set evaluation, defined thresholds, alerting. These are engineering features that belong in the product architecture, not a binder. Once they exist, the PMS plan writes itself against real infrastructure and the drift-plus-release pattern becomes a viable alternative to true continuous learning.

Consider the locked-with-fast-releases alternative honestly. A disciplined release cadence — quarterly, monthly, even weekly — with a lean change management process delivers most of the practical benefit of continuous learning for most products. The MLOps sophistication that would have powered free-running adaptation still earns its keep by making each deliberate release fast and cheap. The difference is that every version reaching patients has been assessed, and the chain of evidence is intact.

Track the EU AI Act layer separately. The AI Act (Regulation (EU) 2024/1689) adds horizontal obligations on AI systems in safety-critical use — on data governance, documentation, transparency, human oversight, and post-market monitoring. The interaction with MDR on the specific question of post-market model updates is still being clarified by the Commission and the Medical Device Coordination Group. Plan for both Regulations and expect to adjust as the operational interface settles.

Honest uncertainty

Here is what is settled in 2026. MDR applies in full to AI medical devices, including those with continuously learning components. Article 51 and Annex VIII Rule 11 drive the classification. Annex I Section 17 software requirements and EN ISO 14971:2019+A11:2021 risk management apply. Significant changes to certified devices require notification and re-assessment. Drift detection is an Article 83 PMS obligation for AI devices. Predetermined change control plans are the working pattern for bounded post-market learning.

Here is what is not settled. The precise operational form that predetermined change control plans should take in the MDR context — there is no single codified template yet. Notified Body practice on the assessment of change envelopes is still converging. The detailed interface between MDR conformity assessment and the AI Act obligations on post-market model updates is still being worked out. Guidance specifically on continuously learning AI medical devices, from either the MDCG or the Commission, is not yet in place at the level of detail the field will eventually need.

This is the frontier of the Regulation. Founders working at this frontier have to build for the settled part with full discipline and track the unsettled part honestly, expecting to update their approach as clarification lands. Anyone claiming more certainty than this — that they have the definitive solution for continuously learning AI under MDR in 2026 — is overstating what is actually known. The Subtract to Ship move in that situation is to cut the ambition that depends on the unsettled clarification and ship the version that depends only on the settled ground.

The Subtract to Ship angle

Applied to this question, the Subtract to Ship framework produces a clear subtraction: cut the continuous learning ambition from the initial CE marking scope unless there is a specific, articulable clinical benefit that cannot be delivered any other way. For most AI MedTech products, a locked model with a disciplined release cadence and active drift detection delivers the real clinical value without the unresolved regulatory risk. The continuous learning capability is one of the most expensive features a team can keep in scope. Removing it buys time, removes Notified Body friction, and lets the team concentrate on the parts of the product that actually drive outcomes.

Where the business case genuinely depends on bounded post-market adaptation — and in some products it does — the subtraction is not "cut all learning" but "cut everything except the specific, defined, justified envelope." Keep the parameters that matter. Drop the ones that do not. Pre-specify the bounds. Engage the Notified Body. Build the frozen reference set. Wire the drift detection. That is what a defensible path to continuous adaptation looks like in 2026, and it is much closer to a disciplined release engineering practice than to the autonomous self-improving model the engineering team first imagined.

Reality Check — Where do you stand?

  1. Can you state precisely which parameters of your model are allowed to change after release, and which are not, in writing?
  2. For every parameter that can change, can you name the trigger, the performance bound, and the revalidation step that applies?
  3. Do you have a frozen reference dataset that every candidate update is evaluated against before it reaches patients?
  4. If a retraining event happened tomorrow, would the new version flow through a documented change control process with sign-offs, or would it just deploy?
  5. Have you engaged your Notified Body on the shape of your change envelope before finalising the technical documentation?
  6. Does your PMS plan include active drift detection with defined metrics, thresholds, and a response pathway, independent of whether the model itself is updating?
  7. Can you defend the continuous learning capability in your product against the alternative of locked-plus-fast-releases, on clinical benefit terms and not just engineering preference?
  8. Do you know which of your assumptions about continuous learning depend on guidance that has not yet been issued?

Frequently Asked Questions

Can I ship a truly continuously learning AI medical device in the EU in 2026? Not cleanly. MDR is built around a defined device configuration at the point of placing on the market, and a fully autonomous continuously learning algorithm without a defined change envelope does not fit that framework. The practical pathways are a locked model with a disciplined release cadence or a predetermined change control plan that pre-specifies which updates can happen inside what bounds. Anything looser is not a regulatory strategy in 2026.

What is a predetermined change control plan in the MDR context? It is a document submitted as part of the technical documentation for the initial conformity assessment that pre-specifies a bounded envelope of changes the manufacturer is authorised to make in the field without triggering a new assessment. The plan defines which parameters can change, on what trigger, within what performance bounds, with what revalidation protocol, and with what documentation trail. The Notified Body assesses the envelope itself up front, and updates inside the envelope are treated as already covered. There is no single codified template in the MDR text in 2026; the practical contours are being worked out between manufacturers, Notified Bodies, and the Medical Device Coordination Group.

Does classification under Rule 11 depend on whether the algorithm learns continuously? No. Annex VIII Rule 11 classifies software by the severity of the decisions the information is used to make. A Class IIb decision-support tool is Class IIb whether the model behind it is locked or updates daily. What changes with continuous learning is the change control and PMS burden, not the class.

Is drift detection only needed for continuously learning models? No. Even a fully locked model can see its effective performance degrade when the input distribution in the field shifts — new imaging hardware, shifting patient mix, changing clinical guidelines. MDR Article 83 requires a PMS system proportionate to the risk class, and for AI devices that means active drift detection with defined metrics, thresholds, and a response pathway.

If my retraining pipeline is fast, does that give me regulatory flexibility? Not on its own. Fast retraining infrastructure is an engineering capability. It becomes a regulatory asset only when it is wired into a change control process that evaluates every candidate model against a frozen reference battery before it reaches patients. The speed of the pipeline is a release cadence advantage once the process is in place, not a substitute for the process.

How does the EU AI Act affect the continuous learning question? The AI Act layers horizontal obligations on AI systems in safety-critical use, including on data governance, documentation, transparency, human oversight, and post-market monitoring. The interaction with MDR on the specific question of post-market model updates is still being clarified by the Commission and the Medical Device Coordination Group in 2026. Founders should plan for both Regulations and expect the operational interface to be refined as guidance lands.

What should I do if my investor or engineering team insists on free-running continuous learning? Walk them through the Regulation, the Notified Body position, and the risk that the project is unshipable in the EU on those terms. Propose the bounded-envelope alternative or the locked-plus-fast-releases alternative with concrete business-case reasoning. The ambition is not the problem — the refusal to scope it is. A team that cannot articulate which specific parameters need to adapt, within which bounds, is not ready to defend a continuous learning capability to a Notified Body, and that is a leading indicator of the project getting stuck.

Sources

  1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices — Article 2(1) (definition of medical device), Article 51 (classification), Article 83 (post-market surveillance system), Annex VIII Rule 11 (classification of software). Official Journal L 117, 5.5.2017.
  2. MDCG 2019-11 Rev.1 — Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 — MDR and Regulation (EU) 2017/746 — IVDR, October 2019, Revision 1 June 2025.
  3. EN ISO 14971:2019 + A11:2021 — Medical devices — Application of risk management to medical devices.
  4. Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Referenced here for the general framing of the horizontal AI obligations layered on top of MDR; founders should consult the official text on EUR-Lex for specific article references.

This post is part of the AI, Machine Learning and Algorithmic Devices category in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. Continuous learning AI is the question most AI MedTech founders want a clean answer to and the one where the gap between engineering ambition and regulatory reality is the widest. If the shape of a defensible change envelope for your specific product is not obvious after reading this post, that is expected — the envelope is a bespoke piece of work, and it is exactly the kind of decision where a sparring partner who has walked other AI MedTech teams through the same conversation with a Notified Body earns their keep.