A locked AI algorithm behaves the same way after it is placed on the market as it did on the day of certification — it only changes through controlled, version-managed releases. An adaptive AI algorithm changes its own behaviour in the field, either by retraining on new data or by updating its parameters in response to live inputs. Under MDR in 2026, locked algorithms ship cleanly: the Regulation is built around a defined device configuration at the point of placing on the market, and locked models fit that assumption. Adaptive algorithms do not yet have a clean CE marking pathway in the EU. The practical route for any team that wants post-market learning is a predetermined change control plan — a document that pre-authorises a bounded envelope of changes, agreed with the Notified Body in advance, so that each update inside the envelope is already covered by the original conformity assessment.

By Tibor Zechmeister and Felix Lenhard. Last updated 10 April 2026.


TL;DR

  • A locked algorithm does not change its behaviour after release except through defined, version-controlled updates that go through change management. An adaptive algorithm changes its behaviour on its own, between releases.
  • MDR is built on the assumption that the device certified on day one is the device on the market on day two. Locked algorithms match that assumption without friction. Adaptive algorithms do not.
  • Annex VIII Rule 11 classification, Annex I Section 17 software requirements, EN 62304:2006+A1:2015 lifecycle discipline, and EN ISO 14971:2019+A11:2021 risk management apply identically to both — the difference shows up in change control and post-market surveillance, not in the initial classification.
  • Significant changes to a certified device trigger re-assessment. An adaptive model that retrains silently in the field is, in the MDR sense, producing a stream of undocumented changes of unknown regulatory significance. No Notified Body in 2026 is going to sign off on that without a defined envelope.
  • The working pathway for teams that need some post-market learning is a predetermined change control plan: the technical documentation pre-specifies which model parameters can change, on what trigger, within what performance bounds, with what revalidation, and each update inside that envelope is treated as already covered.
  • Drift detection is the other half of the story. Even a locked model can degrade when the input distribution in the field shifts. PMS under MDR Article 83 has to catch that, whether the algorithm itself changes or not.
  • For 2026, the default decision for startups is: ship locked, plan an explicit change control envelope for the updates you actually need, and build drift monitoring into the product from day one.

Why this question matters more than any other AI MedTech question

Almost every AI MedTech founder we meet asks the same question early. Can we keep training the model after it ships? The appeal is obvious. More data in the field, better model, better patient outcomes, a compounding advantage. The engineering team wants it. The investors want it. The clinical partners want it. And then the Notified Body conversation happens, and the answer turns out to be more complicated than the engineering roadmap assumed.

The tension is real and it is not the Regulation being unreasonable. MDR is built around the idea that a device placed on the market has a defined configuration, that configuration has been assessed, and significant changes to that configuration have to be re-assessed before they reach patients. That principle exists because patient safety depends on knowing what is actually in the field. An algorithm that retrains itself every week is, from the Regulation's point of view, fifty-two different devices a year. The question is how to reconcile the engineering reality of models that can keep learning with the regulatory reality that every configuration reaching patients has to be traceable, risk-assessed, and within the scope of the original conformity assessment.

This post walks through what locked and adaptive mean in practice, how MDR handles each, why adaptive is the unresolved case, and what founders should actually do in 2026. For the broader landscape see the pillar post on AI medical devices under MDR.

Definitions — locked and adaptive, precisely

The terms get used loosely. Here is what they mean in this post, and what they should mean in your technical documentation.

A locked algorithm is one whose behaviour for a given input is determined by a fixed set of parameters that do not change after the algorithm is placed on the market. A new version of the algorithm — retrained, re-tuned, rearchitected — is a new configuration that goes through the manufacturer's change management process. The version that reaches patients is always a version that has been explicitly released. Locked does not mean frozen forever. It means that every change is a deliberate, documented event.

An adaptive algorithm is one whose behaviour for a given input can change after the algorithm is placed on the market, without a deliberate release event. Adaptive covers a range. On one end is a model that retrains its weights nightly on new field data. In the middle is a model that updates only specific calibration parameters in response to observed drift. On the other end is a model that applies per-site fine-tuning when it is installed at a new hospital and then stays fixed there. All of these are "adaptive" in the sense that the model the patient interacts with is not the model that was assessed, but the regulatory treatment of each flavour is different, and the technical documentation has to be precise about which flavour the device actually uses.

A useful test: if your engineer cannot tell you, for any given day in the next year, exactly which model weights will be producing outputs for patients on that day, you are in adaptive territory, and the regulatory implications are serious.

How MDR handles locked algorithms

Locked algorithms fit cleanly into the existing MDR framework. There is no special provision for them because none is needed. The same articles, annexes, and standards that govern any software medical device govern a locked AI device.

Article 2(1) defines what counts as a medical device by intended purpose. A locked AI model that is intended to produce diagnostic or therapeutic information is a medical device on those grounds, the same way any other software is. Article 51 and Annex VIII govern classification; Rule 11 is the rule that catches most AI decision-support and monitoring software and pushes it to Class IIa, IIb, or III depending on how severe the consequences of the supported decision can be. The software lifecycle discipline of EN 62304:2006+A1:2015 applies to the development and maintenance of the locked model, treated as software items with defined safety classes. The risk management process of EN ISO 14971:2019+A11:2021 identifies and controls AI-specific hazards — bias, distribution shift, silent failure, adversarial robustness — the same way it handles any other class of hazard. MDCG 2019-11 Rev.1 (June 2025) is the guidance document on qualification and classification of software and applies to AI software without modification.

Change management for a locked model works through the standard process. A retraining of the model, a change to the preprocessing pipeline, a change to the training data, an architectural change — each of these is a change to the device. The manufacturer assesses the significance of the change under the Notified Body change notification framework. Minor changes are documented in the technical file and proceed; significant changes trigger notification and, where required, re-assessment. The chain from engineering change to regulatory consequence is explicit, auditable, and well understood. Notified Bodies in 2026 are comfortable with this pattern. It is what they have been doing for decades with software devices that happen to use machine learning.

The practical cost for a startup shipping locked is the cadence question. How often can you release? Each release is a change management event, and if each event is expensive, the rate of model improvement in the field is capped by the release cadence, not by the data. Teams solve this by batching improvements into planned release windows, keeping the change control process lean, and choosing which updates are worth the cadence cost.

The unresolved challenge of adaptive algorithms

Adaptive algorithms are where the clean pathway disappears.

The core problem is that MDR does not contain a provision that says "this device may change its behaviour in the field without triggering a change notification." It contains the opposite principle: the configuration that was assessed is the configuration that is on the market, and significant changes have to be re-assessed. An adaptive model that retrains on live data is, in effect, producing an unbounded set of configurations after the conformity assessment, and the Regulation has no clean way to treat that set as "already assessed."

The problem is not that the Regulation is hostile to learning. It is that the Regulation needs, for safety reasons, to know what is actually in use. If the model running today is not the model that was assessed, the conformity assessment is about a device that no longer exists in the field. Every assumption in the risk file, in the clinical evaluation, in the PMS plan, is anchored to a specific configuration. Detach the configuration and the assumptions detach with it.

There is a second problem. Adaptive algorithms can degrade as well as improve. A retraining loop that optimises for a proxy metric — click-through, agreement with prior predictions, label frequency — can drift away from clinical benefit. A model that adapts to field data without isolation can amplify a bias that was absent at certification. A model that updates on small sample sizes can oscillate. None of these failure modes show up on the day the adaptive update is enabled. They show up in the field, in patient harm, weeks or months later. MDR's framework for preventing this kind of silent drift is the conformity assessment and the change control process, and a silently adaptive model routes around both.

For these reasons, in 2026, there is no clean CE marking pathway in the EU for a fully autonomous continuously-learning algorithm that updates its behaviour in the field without a defined envelope. This is not a rumour or an opinion. It is the practical state of play. Anyone telling a founder otherwise is selling them risk.

The change control problem

The change control problem is specific. Under MDR, every change to a certified device has to be classified by significance. Significant changes require notification to the Notified Body, and where they affect safety or performance, re-assessment of the relevant parts of the technical file. Minor changes are documented and proceed. The classification of a change as significant or minor is itself a regulated decision with criteria.

Apply this to an adaptive model. A retraining event is a change. Is it significant? That depends on what the retraining does to the model's behaviour, and the honest answer is that you cannot know without evaluating the new model against the same test battery that the original assessment used. If the behaviour has drifted in a way that affects safety, the change is significant and needs notification. If it has not, it is minor and proceeds. The problem is that making this determination requires test infrastructure, a frozen reference dataset, and a human review process — every time the model changes. At that point the "adaptive" model has effectively become a locked model with a high release cadence and very thorough automated change management, which is actually the workable pattern. The fantasy of free-running adaptation without the change control overhead does not survive contact with the Regulation.
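The significance determination described above can be sketched as an automated gate. This is a hypothetical illustration, not a regulatory template: the metric names, bounds, and the three-check structure are placeholder assumptions, and a real test battery would be defined in the technical documentation and agreed with the Notified Body.

```python
# Hypothetical sketch: classifying a retraining event as minor or
# significant by re-running the frozen reference test battery.
# All metrics and thresholds below are illustrative placeholders.

from dataclasses import dataclass


@dataclass
class ReferenceResult:
    """Outcome of evaluating a candidate model on the frozen reference set."""
    auc: float
    worst_subgroup_sensitivity: float
    new_failure_modes: int  # failure cases absent at certification


# Performance bounds recorded at the original conformity assessment.
CERTIFIED_AUC_BAND = (0.91, 1.00)
MIN_SUBGROUP_SENSITIVITY = 0.85


def classify_change(candidate: ReferenceResult) -> str:
    """Return 'minor' only if the retrained model stays inside the
    certified bounds; anything else is 'significant' and triggers
    Notified Body notification before release."""
    lo, hi = CERTIFIED_AUC_BAND
    if not lo <= candidate.auc <= hi:
        return "significant"
    if candidate.worst_subgroup_sensitivity < MIN_SUBGROUP_SENSITIVITY:
        return "significant"
    if candidate.new_failure_modes > 0:
        return "significant"
    return "minor"
```

The point of the sketch is the shape of the process, not the numbers: every retraining event runs the same frozen battery, and the classification is a documented, criteria-based decision rather than a judgment made after deployment.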

The drift detection problem

Even if the algorithm itself is locked and never changes, the environment in which it operates can change. Input distributions drift. Imaging hardware updates. Patient referral patterns shift with new clinical guidelines. Seasonal disease prevalence moves. The model does not move, but its effective performance in the field does. This is drift in the input sense, and it is a problem for every AI medical device whether the algorithm is locked or adaptive.

MDR Article 83 requires a PMS system proportionate to the risk class and appropriate for the device. For an AI device, appropriate PMS means drift detection. Monitoring of input distributions, monitoring of model outputs against expected distributions, periodic re-evaluation against a held-out reference dataset, clinical outcome tracking where feasibility and ethics allow, and a defined response pathway when a threshold is crossed. None of this is optional dressing. It is how the PMS obligation actually gets discharged for a product whose effective performance can degrade silently.
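One concrete form input-distribution monitoring can take is a population stability index (PSI) comparing field inputs to the reference distribution frozen at certification. The sketch below is a hypothetical illustration: the 0.2 alert threshold is a common industry rule of thumb, not an MDR requirement, and a real PMS plan would define its own metrics and thresholds.

```python
# Hypothetical drift-detection check: population stability index (PSI)
# over pre-binned input fractions. Higher PSI means more drift between
# the certified reference distribution and what the field is sending.

import math


def psi(reference_fracs, field_fracs, eps=1e-6):
    """PSI over aligned histogram bins (fractions summing to ~1 each)."""
    total = 0.0
    for r, f in zip(reference_fracs, field_fracs):
        r, f = max(r, eps), max(f, eps)  # guard against empty bins
        total += (f - r) * math.log(f / r)
    return total


def drift_alert(reference_fracs, field_fracs, threshold=0.2):
    """True when drift exceeds the threshold defined in the PMS plan,
    triggering the documented response pathway: investigation and,
    if warranted, a deliberate retraining through change control."""
    return psi(reference_fracs, field_fracs) >= threshold
```

In a real product this check would run continuously on logged inputs, per feature or per embedding dimension, with the alert wired into the PMS response pathway rather than into an automatic retraining loop.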

The connection to the locked-versus-adaptive debate is important. The drift detection layer is where teams that want the benefits of adaptation can actually get most of those benefits safely. When drift is detected, the manufacturer does a deliberate retraining, runs the new model through the change control process, releases it as a new version, and updates the fleet. The adaptation happens — but through the change control mechanism, not around it. For most products, this is enough.

The predetermined change control plan concept

For the cases where a periodic release cadence is not enough, there is a pattern that is workable today: the predetermined change control plan.

The idea is to pre-specify, in the technical documentation submitted for the initial conformity assessment, a bounded envelope of changes that the manufacturer is authorised to make in the field without triggering a new assessment. The envelope defines which parameters can change, on what trigger, within what performance bounds, with what revalidation protocol, and with what documentation. The Notified Body assesses the envelope itself — not each future update, but the rules governing the updates — as part of the initial conformity assessment. Each update that falls inside the envelope is then treated as already covered by the original assessment. Each update that falls outside the envelope is a significant change and goes through the normal process.

The mechanism is conceptually similar to the predetermined change control plan approach that has been discussed in international regulatory forums and explored in other jurisdictions. The MDR text itself contains no single, universally adopted form of it in 2026, and the practical contours are still being worked out between manufacturers, Notified Bodies, and the Medical Device Coordination Group. What is clear is that the envelope has to be specific, defensible, and tied to concrete performance criteria. Vague envelopes ("the model may improve as new data arrives") do not pass Notified Body scrutiny. Specific envelopes ("calibration parameters A, B, C may be updated monthly on condition that AUC on the locked reference set remains within the band [x, y], subgroup performance does not drop by more than z percentage points, and each update is logged with version, data snapshot, and sign-off") can.
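An envelope of the specific kind quoted above can be expressed as an automated gate, which is one reason tight envelopes are workable in practice. The sketch below is hypothetical: the parameter names, band, and margin stand in for the x, y, and z agreed with the Notified Body, and a real implementation would also enforce the update cadence.

```python
# Hypothetical sketch of a pre-approved change control envelope as an
# automated gate: only named calibration parameters may change, AUC must
# stay inside the locked-reference band, subgroup performance must not
# drop by more than the agreed margin, and every accepted update is
# logged with version, data snapshot, and sign-off. All names and
# numbers are illustrative placeholders.

ALLOWED_PARAMS = {"calib_a", "calib_b", "calib_c"}
AUC_BAND = (0.90, 1.00)      # the band [x, y] on the locked reference set
MAX_SUBGROUP_DROP = 0.02     # z, in absolute points


def inside_envelope(changed_params, ref_auc, subgroup_drops):
    """True only if the update falls entirely inside the pre-approved
    envelope; anything else is a significant change and leaves the
    automated path for normal Notified Body notification."""
    if not set(changed_params) <= ALLOWED_PARAMS:
        return False
    if not AUC_BAND[0] <= ref_auc <= AUC_BAND[1]:
        return False
    if any(drop > MAX_SUBGROUP_DROP for drop in subgroup_drops):
        return False
    return True


def log_update(version, data_snapshot, signoff, audit_log):
    """Append the audit record the envelope requires for each update."""
    audit_log.append(
        {"version": version, "data_snapshot": data_snapshot, "signoff": signoff}
    )
```

Note what the gate does with anything ambiguous: it fails closed. An update that touches an unlisted parameter, or whose revalidation result falls outside the band, drops out of the envelope and back into the standard significant-change process.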

Founders exploring this route should plan for significant upfront work and close engagement with the Notified Body. The upside is that once the envelope is approved, field updates inside the envelope are a controlled process rather than a series of change notifications. The downside is that defining the envelope rigorously is as much work as certifying the initial device, and the envelope is only as useful as the discipline of staying inside it.

The EU AI Act adds a further layer of obligations on AI systems in safety-critical use — on documentation, data governance, transparency, and human oversight — and those obligations interact with the change control envelope in ways that are still being clarified. For the framing of how MDR and the AI Act sit together, see the pillar post on AI medical devices under MDR.

What a startup should actually do in 2026

Concrete recommendations for teams that are building AI medical devices right now.

Default to locked. Unless there is a specific, articulable reason you need post-market learning, ship a locked model. The certification path is clean, the Notified Body conversation is understood, the change control process is well-travelled, and your engineering team gets a stable target to test against. Most AI medical devices do not need adaptive behaviour to deliver clinical benefit — they need a well-trained model, honest evaluation, and a release cadence that lets you push improvements on a human timescale.

Plan the release cadence explicitly. How often will you release new model versions? Quarterly? Twice a year? Tie this to the engineering reality of how often you expect meaningful improvements. The cadence is part of the QMS design. A startup that ships locked with a six-month release cycle and a disciplined change control process will outrun a startup that tries to ship adaptive and gets stuck in Notified Body review.

Build drift detection into the product. Input distribution monitoring, output monitoring against expected ranges, periodic reference-set evaluation, defined thresholds, alerting. These are engineering features that belong in the product architecture, not documents that belong in the binder. Build them from day one and the PMS plan writes itself against real infrastructure.

If you genuinely need adaptation, scope the envelope first. Start with the smallest possible set of parameters that can change, the tightest possible trigger conditions, and the strictest possible performance guard rails. Talk to the Notified Body before the envelope is locked in the technical documentation. Do not discover at audit that your envelope is too loose for the Notified Body to accept. The cost of pre-Notified-Body engagement on this topic is trivial compared to the cost of rework.

Do not confuse MLOps sophistication with regulatory permission. A pipeline that can retrain and deploy a model in thirty minutes is an engineering achievement. It is not a regulatory argument. The Regulation does not care how fast you can retrain; it cares whether the version running on patients has been assessed. The MLOps sophistication is useful exactly when it is wired into a change control process, not when it is wired around one.
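What "wired into a change control process" can mean mechanically: the deployment step refuses to ship any model version that lacks an approved change record. This is a hypothetical sketch; the record structure and status values are illustrative assumptions, not a standard schema.

```python
# Hypothetical sketch: fast MLOps wired *into* change control rather
# than around it. Deployment is gated on an approved change record for
# the exact model version being shipped.

class ChangeControlError(RuntimeError):
    """Raised when a deploy is attempted without an approved record."""


def deploy(model_version, change_records):
    """Ship a model only if its change record exists and is approved;
    the retraining pipeline can be arbitrarily fast, but nothing
    reaches patients without passing this gate."""
    record = change_records.get(model_version)
    if record is None:
        raise ChangeControlError(f"no change record for {model_version}")
    if record.get("status") != "approved":
        raise ChangeControlError(f"{model_version} not approved for release")
    return f"deployed {model_version}"
```

With a gate like this in place, a thirty-minute retraining pipeline becomes a cadence advantage: the assessment steps still happen, they just stop being the slowest part of the release.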

The Subtract to Ship angle

The Subtract to Ship framework applied to this question produces a clear subtraction: in most cases, cut the adaptive ambition from the initial CE marking scope. Ship the locked model with a disciplined release cadence and drift monitoring. If the business case genuinely depends on a specific, bounded kind of post-market learning, add exactly that envelope back in — not the general capability for the model to "keep learning," but the specific, defined, justified parameter set that actually moves the clinical needle.

The subtraction is not anti-adaptive. It is anti-vague. Ambiguity about how the model changes in the field is the single most expensive failure mode in AI MedTech regulatory projects. A team that is precise about what is locked, what is adaptable inside a defined envelope, and what is monitored for drift will spend less time in Notified Body review than a team that hopes to figure out the details later. For a broader treatment of why this kind of scoping is the single highest-leverage move in MDR projects, see the post on minimum viable regulatory strategy for MDR.

Reality Check — Where do you stand?

  1. Can you state, for any given day in the next twelve months, exactly which model weights will be producing outputs for patients on your device on that day?
  2. Is your algorithm locked, adaptive, or a mix — and is that distinction written down in the technical documentation with the same clarity you would write it down for an engineer joining the team?
  3. If your algorithm is locked, what is your planned release cadence, and does your QMS change control process match that cadence without becoming the bottleneck?
  4. If you are claiming any adaptive behaviour, have you specified the envelope — parameters, triggers, performance bounds, revalidation protocol — in enough detail that a Notified Body auditor could assess it?
  5. Does your PMS plan include drift detection with defined metrics, thresholds, and a response pathway, or does it rely on complaint handling alone?
  6. Have you engaged with your Notified Body about the change control approach before finalising the technical documentation, or are you planning to discover their position at audit?
  7. If the model changed tomorrow — because a retraining job ran, because a new dataset was added, because a preprocessing step was adjusted — would the change flow through a documented process with sign-offs, or would it just happen?

Frequently Asked Questions

Can I ship a continuously learning AI medical device in the EU in 2026? Not cleanly. MDR is built around a defined device configuration at the point of placing on the market, and a fully autonomous continuously-learning algorithm does not fit that framework without a defined change envelope. The practical pathways are shipping a locked model with a disciplined release cadence or shipping with a predetermined change control plan that pre-specifies which updates can happen inside what bounds.

What exactly is a locked algorithm under MDR? A locked algorithm is one whose behaviour for a given input is determined by a fixed set of parameters that do not change after release, except through deliberate, version-controlled updates that go through the manufacturer's change management process. Locked does not mean frozen forever — it means every change is a documented event, not a silent retraining.

Do locked algorithms still need drift monitoring? Yes. Even a locked model can see its effective performance degrade when the input distribution in the field shifts — new hardware, shifting patient mix, changing clinical guidelines. MDR Article 83 requires a PMS system proportionate to the risk class, and for AI devices that means active drift detection with defined metrics, thresholds, and a response pathway, not passive complaint handling.

What is a predetermined change control plan? It is a document, submitted as part of the technical documentation for initial conformity assessment, that pre-specifies a bounded envelope of changes the manufacturer is authorised to make in the field without triggering a new assessment. The plan defines which parameters can change, on what trigger, within what performance bounds, with what revalidation. The Notified Body assesses the envelope itself up front, and updates inside the envelope are then treated as already covered.

Is a significant retraining of my model always a significant change? Not automatically. Significance depends on the impact of the change on safety and performance, assessed against the criteria used by the Notified Body change notification framework. A retraining that leaves behaviour within the original performance bounds and does not introduce new failure modes can be handled as a minor change; a retraining that changes behaviour materially is significant and needs notification. Either way, the determination has to be made against a reference test battery, not assumed.

How does Rule 11 classification interact with the locked versus adaptive question? It does not change. Annex VIII Rule 11 classifies software by the severity of the decisions the information is used to make. That classification is independent of whether the algorithm is locked or adaptive. A Class IIb decision-support tool is Class IIb whether the model behind it updates once a year or once a week — and either way, the change control and PMS obligations that flow from Class IIb apply in full.

Do MLOps tools for rapid retraining give me any regulatory flexibility? No, on their own. Fast retraining infrastructure is valuable when it is wired into a disciplined change control process. It does not give you permission to push retrained models to patients without the assessment steps the Regulation requires. The sophistication of the MLOps pipeline is a speed advantage for your release cadence once the process is in place, not a substitute for the process.

Sources

  1. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices — Article 2(1) (definition of medical device), Article 51 (classification), Article 83 (post-market surveillance system), Annex VIII Rule 11 (classification of software). Official Journal L 117, 5.5.2017.
  2. MDCG 2019-11 Rev.1 — Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 — MDR and Regulation (EU) 2017/746 — IVDR, October 2019, Revision 1 June 2025.
  3. EN ISO 14971:2019 + A11:2021 — Medical devices — Application of risk management to medical devices.
  4. EN 62304:2006 + A1:2015 — Medical device software — Software life-cycle processes.
  5. Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Referenced for the general framing of the horizontal AI obligations layered on top of MDR; founders should consult the official text on EUR-Lex for specific article references.

This post is part of the AI, Machine Learning and Algorithmic Devices category in the Subtract to Ship: MDR blog. Authored by Felix Lenhard and Tibor Zechmeister. The locked-versus-adaptive question is the one most AI MedTech founders underestimate, and it is the one where the gap between engineering ambition and regulatory reality is the largest. If the shape of the right change control envelope for your specific product is not obvious after reading this post, that is expected — the envelope is a bespoke piece of work, and it is exactly the kind of decision where a sparring partner who has walked other AI MedTech teams through the same conversation with a Notified Body earns their keep.