---
title: Secure Software Update Mechanisms for Medical Devices
description: How medical device software updates should be signed, verified, staged, and rolled back under MDR Annex I and EN IEC 81001-5-1:2022.
authors: Tibor Zechmeister, Felix Lenhard
category: IVDR & In Vitro Diagnostics
primary_keyword: secure software update medical device
canonical_url: https://zechmeister-solutions.com/en/blog/secure-software-update-mechanisms
source: zechmeister-solutions.com
license: All rights reserved. Content may be cited with attribution and a link to the canonical URL.
---

# Secure Software Update Mechanisms for Medical Devices

*By Tibor Zechmeister (EU MDR Expert, Notified Body Lead Auditor) and Felix Lenhard.*

> **A medical device software update mechanism is itself a safety control. Updates must be cryptographically signed, their integrity verified before installation, rolled out in stages, and reversible when something goes wrong. MDR Annex I §17.4 and EN IEC 81001-5-1:2022 treat the update path as part of the security posture of the device, not an ops feature.**


## TL;DR
- An unsigned, unverified update channel is an open door into every deployed device. Attackers know it, notified bodies know it, and EN IEC 81001-5-1:2022 treats it as a core lifecycle activity.
- Code signing, integrity verification, and staged rollout are the three controls that make updates safe to deliver at scale.
- Rollback is not optional. Every update must have a documented, tested path back to the previous known-good state.
- Post-market vulnerability response only works if the update mechanism itself is reliable. Tibor has seen first-hand what happens when it is not.
- Update security connects the pre-market file (Annex I §17.4) to the post-market obligations under MDR Articles 83 to 86.
- Startups that design the update mechanism as part of the device from day one ship faster than those that bolt it on after certification.

## Why the update mechanism is the attack surface that matters

When Tibor looks at the cybersecurity posture of a connected medical device, the update mechanism is the first thing he checks. The logic is simple. If the update channel is compromised, every other control on the device is compromised with it. Attackers do not need to defeat the encryption, the authentication, or the access controls when they can just push a malicious update that disables all three.

This is not theoretical. Tibor has seen it play out in post-market vulnerability response. A library used in a medical device had a publicly exploited vulnerability. It took the manufacturer several weeks to even notice. They eventually patched it through a software change and change control. There was no patient harm in that case, but the window was open long enough for something serious to have happened. The only thing standing between that vulnerability and patients was the speed and reliability of the update mechanism.

Felix frames the lesson differently for his coached startups. A medical device without a safe update path is a device that cannot respond to the world changing around it. Every library has vulnerabilities waiting to be discovered. Every protocol eventually gets deprecated. Every certificate expires. A device that cannot receive a verified, safe update is a device that starts ageing the day it ships. The update mechanism is not a feature of the device. It is a fundamental property of whether the device stays safe over its intended lifetime.

## What MDR actually says about software updates

MDR Annex I §17.2 requires that software be developed in accordance with the state of the art, taking into account the principles of the development life cycle, risk management (including information security), verification, and validation. Software updates are part of that lifecycle, not outside it.

MDR Annex I §17.4 requires the manufacturer to set out minimum requirements concerning IT security measures, including protection against unauthorised access, necessary to run the software as intended. Unauthorised modification of the software is an unauthorised access case. If the update channel cannot prove that an update package is authentic and unmodified, the §17.4 obligation is not satisfied.

The reference standard is EN IEC 81001-5-1:2022. It is harmonised under the MDR for cybersecurity of health software and expects the manufacturer to address software updates across the lifecycle: secure design of the update mechanism, verification of its function, and post-market maintenance of its security over time. MDCG 2019-16 Rev.1 interprets these obligations in the MDR context and is explicit that the update path is inside scope of the cybersecurity risk assessment.

Depending on the nature of the change being delivered, an update can also trigger MDR change control obligations under Article 120 and the rules on significant change. That is a separate topic from the security of the update mechanism itself, but the two meet in the same documentation.

## The three controls that matter

A safe update mechanism rests on three technical controls. Every notified body audit Tibor has seen looks for evidence of all three.

**Cryptographic signing.** The manufacturer signs every update package with a private key that it controls. The device holds the matching public key in a way that cannot be trivially modified. Before installation, every package must present a valid signature from a key the device trusts. The key material and the signing process are part of the device build infrastructure and are documented under EN 62304:2006+A1:2015 configuration management.

**Integrity verification.** Signature verification is not enough on its own. The device must verify the integrity of the package as it arrives, before writing it to persistent storage, and again after installation before handing control to the new code. Any integrity failure aborts the update and the device keeps running the previous version. The verification logic itself is a safety-relevant component and must be tested to the same standard as the clinical functionality.

**Rollback path.** Every update must be reversible to the previous known-good version. Rollback is triggered automatically when verification fails, when the new version fails a post-install self-check, or when post-market monitoring flags a regression. The rollback procedure is documented, tested, and included in the technical file. A device that cannot roll back is a device where a bad update becomes a field safety corrective action.
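The three controls compose into one decision path on the device: verify authenticity, verify integrity, install, self-check, and fall back automatically on any failure. The sketch below is illustrative only. A real device would verify an asymmetric signature (for example Ed25519, with the public key in protected storage and the private key in an HSM); HMAC-SHA256 stands in here purely so the sketch runs with the standard library, and all names (`DEVICE_TRUST_KEY`, `install`) are hypothetical.

```python
import hashlib
import hmac

# Hypothetical stand-in for a factory-provisioned verification key.
# Real devices hold an asymmetric public key, not a shared secret.
DEVICE_TRUST_KEY = b"factory-provisioned-verification-key"

def verify_update(package: bytes, signature: bytes, expected_sha256: str) -> bool:
    """Verify authenticity and integrity before anything is written to storage."""
    # 1. Authenticity: the package must carry a valid signature.
    computed_sig = hmac.new(DEVICE_TRUST_KEY, package, hashlib.sha256).digest()
    if not hmac.compare_digest(computed_sig, signature):
        return False
    # 2. Integrity: the content hash must match the release record.
    return hashlib.sha256(package).hexdigest() == expected_sha256

def install(package: bytes, signature: bytes, expected_sha256: str,
            post_install_self_check) -> str:
    """Install only a verified package; any failure keeps the old version."""
    if not verify_update(package, signature, expected_sha256):
        return "rejected: keep running previous version"
    # ... write package to the inactive slot, re-verify after the write ...
    if not post_install_self_check():
        return "rolled back: previous version reactivated"
    return "activated"
```

Note that the verification happens before anything touches persistent storage, and the self-check gates activation: the old version is the default outcome of every failure mode.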

## Staged rollout: what "responsible" really looks like

Pushing an update to 100 percent of the installed base at once is the single most common mistake Tibor sees in startups that have not yet been audited. It is also the easiest mistake to fix. A staged rollout replaces the "big bang" with a sequence.

First, an internal ring of developer and engineering devices that receive every build. Second, a small external ring of friendly sites or beta users, ideally under a documented validation agreement. Third, a small percentage of the production fleet, randomly or geographically selected, with automated telemetry watching for anomalies. Fourth, progressive expansion to the full installed base over days or weeks, gated by explicit go/no-go decisions.

Each ring has entry criteria and exit criteria. Entry criterion for the production ring might be zero regressions in the beta ring for 72 hours. Exit criterion to expand the fraction might be a stable error rate and no new complaint signals in post-market surveillance. The decision log is part of the change control record and is exactly the kind of document a notified body expects to see during a PMS audit under MDR Articles 83 to 86.
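A go/no-go gate of this kind is easy to make explicit in code, which also makes the decision log auditable. The ring names, fleet fractions, and soak thresholds below are illustrative assumptions, not criteria from any audited file.

```python
from dataclasses import dataclass

@dataclass
class Ring:
    name: str
    fleet_fraction: float   # share of the fleet covered by this ring
    min_soak_hours: int     # how long the previous ring must run clean

def next_ring(rings, current_index, soak_hours, regressions, new_complaints) -> int:
    """Advance to the next ring only when its entry criteria hold."""
    if current_index >= len(rings) - 1:
        return current_index                     # already at full fleet
    nxt = rings[current_index + 1]
    entry_ok = (soak_hours >= nxt.min_soak_hours
                and regressions == 0
                and new_complaints == 0)
    return current_index + 1 if entry_ok else current_index

# Illustrative rollout plan mirroring the four rings described above.
ROLLOUT = [
    Ring("internal", 0.0, 0),
    Ring("beta-sites", 0.001, 24),
    Ring("production-slice", 0.05, 72),  # e.g. zero regressions for 72 hours
    Ring("full-fleet", 1.0, 168),
]
```

The value of writing the gate down like this is that each advance (or refusal to advance) produces a record with the inputs that drove it, which is exactly what lands in the change control file.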

Felix puts it plainly when coaching startups: a staged rollout is not an operational nicety, it is a risk control. The Subtract to Ship principle says to remove everything that is not essential, but staged rollout is essential because it is the difference between catching a regression on 50 devices and catching it on 50,000.

## A worked example: patching the vulnerability Tibor described

Take Tibor's case. A widely used library inside a medical device has a publicly disclosed vulnerability. The manufacturer has two jobs: patch it, and prove the patch is safe. Here is how the two jobs run in parallel on a well-designed update mechanism.

The SBOM generated from the EN 62304:2006+A1:2015 configuration item list flags the vulnerable library automatically. The cybersecurity risk file is updated with the new threat, severity, and exploitability. The fix is prepared as a normal software change, verified against the affected scenarios, and regression-tested.
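The automatic flagging step amounts to joining the SBOM against an advisory feed. The sketch below shows the shape of that join; the component names, versions, and the CVE identifier are made up for illustration, and a real pipeline would consume a CycloneDX or SPDX SBOM and a maintained advisory database rather than inline dictionaries.

```python
# Illustrative SBOM derived from the configuration item list.
sbom = [
    {"name": "libparse", "version": "2.3.1"},
    {"name": "tls-stack", "version": "1.1.0"},
]

# Illustrative advisory feed entry; the CVE id is a placeholder.
advisories = [
    {"cve": "CVE-0000-0001", "name": "libparse",
     "affected_versions": {"2.3.0", "2.3.1"}, "severity": "high"},
]

def flag_vulnerable(sbom, advisories):
    """Return (component, CVE) pairs that need a cybersecurity risk-file entry."""
    hits = []
    for component in sbom:
        for adv in advisories:
            if (component["name"] == adv["name"]
                    and component["version"] in adv["affected_versions"]):
                hits.append((component["name"], adv["cve"]))
    return hits
```

Each hit is what opens the risk-file update and, downstream, the patch release described in the next steps.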

The update package is built in the usual pipeline. It is signed with the production code signing key. The signature and a content hash go into the release record. The package is pushed first to the internal ring, then to the beta ring, then to a small slice of production. Telemetry is watched. If anything goes wrong, the devices automatically roll back to the previous version and the expansion pauses.

Once telemetry is stable, the rollout expands in stages until the whole fleet is patched. The entire exercise is documented as part of the PMS file under MDR Articles 83 to 86 and, if the vulnerability rises to the level of a serious incident or field safety corrective action, also under the vigilance obligations in Articles 87 to 92.

Now imagine the alternative: a device where updates are unsigned, where there is no rollback, and where the rollout is an all-or-nothing push. The same patch becomes a one-shot, bet-the-company event. Most startups would delay the patch out of fear of breaking the field, leaving the vulnerability open longer. The unsafe update mechanism makes the safe decision harder. That is the real cost of a weak update path.

## The Subtract to Ship playbook

Startups that want a safe update mechanism without over-engineering follow a small number of steps. Felix has seen this play out across several coached cohorts.

Design the update mechanism before the clinical features. The signing infrastructure, the verification logic, and the rollback path all belong in the first architecture sprint, not the last. They are easier to add early and almost impossible to retrofit cleanly.

Use managed infrastructure for key storage. Code signing keys live in a hardware-backed key management service or a hardware security module. No engineer has a copy. Access to sign a release is logged and requires approval. This satisfies the auditor and protects the startup from its own bad day.

Make rollback the default for any verification or self-check failure. The device does not ask a human what to do when a new build fails a post-install check. It rolls back. A human investigates afterwards.

Automate the SBOM refresh on every release. The SBOM, as Tibor has noted, should fall naturally out of the EN 62304:2006+A1:2015 configuration item list. It is not a separate document, it is what the configuration list should have been all along. Every release builds a new SBOM, every SBOM feeds vulnerability monitoring, every vulnerability that matters triggers an update.

Rehearse the patch path before the audit. Push a fake vulnerability through the whole process: SBOM detection, risk file update, build, sign, stage, roll out, roll back, and document. The rehearsal record is one of the most convincing pieces of evidence a startup can offer at a cybersecurity audit.

Treat the update mechanism itself as a change-controlled component. When you change how the update mechanism works, that change is itself subject to EN 62304:2006+A1:2015 and to the cybersecurity lifecycle of EN IEC 81001-5-1:2022. The mechanism that changes the device cannot itself be changed casually.

## Reality Check

- If you had to patch a publicly disclosed library vulnerability tomorrow, how long would it take from detection to full fleet update?
- Is every update you ship cryptographically signed, and is the signing key stored in hardware-backed infrastructure?
- Can your device verify the integrity of an update before installing it and again after installing it?
- If a new build fails its post-install self-check, does the device automatically roll back without human intervention?
- Do you have a documented staged rollout with entry and exit criteria for each ring?
- Is your SBOM generated from your configuration item list automatically on every release, or is it a manual spreadsheet?
- Have you rehearsed the full patch path end-to-end, and is the rehearsal record in your PMS file?

## Frequently Asked Questions

**Does every software update need a new conformity assessment?**
Not every update. Routine security patches that do not change intended use or introduce new risks generally do not. Significant changes under Article 120 and related MDCG guidance do. The MDR significant change rules and MDCG 2020-3 guidance determine which category a specific update falls into.

**What key length should I use for code signing?**
State of the art for code signing today includes 3072-bit or 4096-bit RSA, or 256-bit ECDSA with a current curve. The specific choice must be justified against EN IEC 81001-5-1:2022 and the threat model, not copied from a blog post.

**Can I rely on the operating system's update mechanism instead of building my own?**
Sometimes, if the OS update mechanism meets your requirements and you document that it does. You still have to verify its properties, integrate it into your risk file, and take responsibility for its security as part of the device. Delegation to an upstream mechanism does not delegate the obligation.

**What happens if the rollback itself fails?**
That is the worst failure mode. A robust mechanism has an A/B partition scheme or equivalent so the old version is still physically present when the new version is activated. If the new version fails, switching back is atomic. The rollback is tested as part of verification.
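The A/B scheme works because the switch is a single small state change: the old image is never overwritten, only deactivated. The sketch below models that bootloader state in plain Python to show the logic; the field names and the three-attempt threshold are illustrative assumptions, not from any particular bootloader.

```python
class SlotState:
    """Minimal model of A/B slot bookkeeping held by the bootloader."""

    def __init__(self):
        self.active = "A"          # slot currently booted
        self.boot_attempts = 0     # boots tried on a freshly activated slot
        self.max_attempts = 3      # illustrative retry budget

    def activate_other_slot(self):
        """Atomic switch: one field flips, both images stay physically intact."""
        self.active = "B" if self.active == "A" else "A"
        self.boot_attempts = 0

    def on_boot(self, self_check_passed: bool) -> str:
        """Run at every boot; automatic rollback when the new slot keeps failing."""
        self.boot_attempts += 1
        if self_check_passed:
            self.boot_attempts = 0
            return f"running slot {self.active}"
        if self.boot_attempts >= self.max_attempts:
            self.activate_other_slot()   # fall back without human intervention
            return f"rolled back to slot {self.active}"
        return "retry"
```

Because the old image is still physically present, "rollback" is just flipping `active` back, which is why the failure mode described in this answer stays recoverable.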

**How does the update mechanism connect to vigilance reporting?**
If a vulnerability meets the thresholds for a serious incident or a field safety corrective action under MDR Articles 87 to 92, the manufacturer reports through the normal vigilance channels. The update is the corrective action, but the reporting obligation runs in parallel and uses the same underlying evidence.

**Do we need pentesting of the update mechanism?**
It is one of the highest-value pentest scopes. The update channel concentrates risk, so external evidence that it resists attack is disproportionately valuable to the notified body.

## Related reading
- [Software Updates under MDR and Conformity Assessment](/blog/software-updates-mdr-new-conformity-assessment): when an update triggers a new conformity assessment.
- [Cybersecurity Risk Management under MDR](/blog/cybersecurity-risk-management-mdr): how update mechanism risks integrate with the main risk file.
- [SBOM for Medical Devices under MDR](/blog/sbom-medical-devices-mdr): the foundation of vulnerability detection that drives the update cycle.
- [Data Encryption for Medical Devices](/blog/data-encryption-mdr-gdpr): the encryption controls that update packages inherit.

## Sources
1. Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I §17.2 and §17.4; Articles 83 to 86 on PMS; Articles 87 to 92 on vigilance.
2. MDCG 2019-16 Rev.1, Guidance on Cybersecurity for Medical Devices, July 2020.
3. EN IEC 81001-5-1:2022, Health software and health IT systems safety, effectiveness and security, Part 5-1: Security, activities in the product lifecycle.
4. EN 62304:2006+A1:2015, Medical device software, Software life cycle processes.
5. EN ISO 14971:2019+A11:2021, Medical devices, Application of risk management to medical devices.

---

*This post is part of the [IVDR & In Vitro Diagnostics](https://zechmeister-solutions.com/en/blog/category/ivdr) cluster in the [Subtract to Ship: MDR Blog](https://zechmeister-solutions.com/en/blog). For EU MDR certification consulting, see [zechmeister-solutions.com](https://zechmeister-solutions.com).*
