Secure by design means security is a requirement, not a patch. For MDR-regulated software devices, the up-front cost of building in least privilege, defense in depth, fail-secure behaviour, and rigorous input validation is always lower than the post-certification rework cost of adding them later. Tibor's ROI argument on this is simple: every cybersecurity decision deferred now becomes a change control process with notified body re-engagement later.

By Tibor Zechmeister and Felix Lenhard.

TL;DR

  • MDR Annex I GSPR 17.2 requires information security to be addressed during software development, not bolted on at the end.
  • EN IEC 81001-5-1:2022 operationalises this with lifecycle activities that presume security is part of requirements and architecture.
  • Four principles carry most of the weight for small teams: least privilege, defense in depth, fail secure, and input validation.
  • The ROI argument Tibor uses with founders: up-front cost is always lower than post-certification rework. A change-control loop with a notified body for a security mitigation can consume weeks of work that early design would have resolved in hours.
  • Secure by design is not a branding exercise. The evidence is in requirements traceability, architecture documentation, and verification records under EN 62304:2006+A1:2015.
  • Startups that adopt these principles during architecture rarely fail their first cybersecurity-focused notified body question. Startups that defer them almost always do.

Why this matters

Tibor has audited a lot of MedTech startups. The cybersecurity pattern is depressingly consistent. The device works. The clinical evidence is reasonable. The QMS holds. The risk file references EN ISO 14971:2019+A11:2021 correctly. And then the notified body asks, "how did you incorporate security into your software architecture?" The answer is often a silence that the startup cannot afford.

What makes secure by design a startup issue, not a big-company issue, is the arithmetic. A large manufacturer rewriting an authentication layer six months after architectural freeze is an expensive but survivable event. A three-person startup doing the same thing at month eighteen burns through runway it did not have to spare. For founders, the question is never "is security worth it". It is "when does it cost the least".

Felix coached a startup last year that shipped their first wearable firmware with a hard-coded root password because the team "needed to be able to debug devices in the field". The fix, once discovered, was not the code change; the code change took an afternoon. The fix was the change control, the re-verification, the updated technical documentation, the amended threat model, and the notified body conversation about whether the hard-coded password warranted a field action. Six weeks. None of that cost would have existed if the original software requirement had said "no authentication secret shall be hard-coded in firmware".
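A requirement like "no authentication secret shall be hard-coded in firmware" is also automatable. The sketch below shows the kind of build gate that could enforce it; the patterns and file extensions are illustrative assumptions, and a production pipeline would use a dedicated secret scanner with a tuned ruleset rather than two regexes:

```python
import re
from pathlib import Path

# Illustrative patterns only; a real gate would use a dedicated secret
# scanner. These catch the two most common offenders: credential-looking
# assignments and embedded private key blocks.
SUSPICIOUS = [
    re.compile(r'(password|passwd|secret|api_key)\s*=\s*["\'][^"\']+["\']', re.I),
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]

def scan_tree(root: str) -> list:
    """Return (file, line number, text) for every suspicious line under root."""
    findings = []
    for path in Path(root).rglob("*"):
        if path.suffix not in {".c", ".h", ".py", ".cfg"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SUSPICIOUS):
                findings.append((str(path), lineno, line.strip()))
    return findings
```

Wired into CI with a non-zero exit on any finding, this turns the requirement into a build-breaking check rather than a reviewer's memory.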

What MDR actually says

MDR Annex I GSPR 17.2 requires software to be developed and manufactured in accordance with the state of the art, taking into account the principles of development life cycle, risk management, including information security, and verification and validation. The phrase "including information security" is the hinge. It is not optional. It is not a separate regime. It is part of the software lifecycle.

GSPR 17.4 reinforces this by requiring manufacturers to set out minimum requirements concerning hardware, IT networks characteristics, and IT security measures, including protection against unauthorised access, necessary to run the software as intended.

MDCG 2019-16 Rev.1 elaborates these expectations, and EN IEC 81001-5-1:2022 has since become the reference standard for health software security activities across the product life cycle. Both make explicit that security design activities belong in the earliest phases of development, alongside requirements elicitation and architecture. Tibor's practical reading: an auditor who sees security as an appendix to the technical documentation will read that as a manufacturer who treats security as an afterthought, regardless of the code quality.

A worked example

Consider a pulse oximeter with Wi-Fi connectivity and a companion cloud service. A startup of four engineers has twelve months of runway and a target of CE mark in ten. The team sits down to write software requirements.

A first pass produces requirements like "the device shall authenticate to the cloud". This is the requirement shape Tibor sees most often, and it is almost useless. It is untestable, it constrains no architectural decision, and it does not distinguish between a hard-coded API key and an attested mutual TLS handshake.

A secure-by-design rewrite of the same requirement produces four testable children.

  • Least privilege. The cloud credential on the device shall grant access only to the device's own telemetry endpoint, with no ability to read or modify other devices' data. Verification: negative test from a device attempting to access another device's records must return 403.
  • Defense in depth. Cloud communication shall use transport-layer TLS 1.3 with certificate pinning, and application-layer message authentication with a per-device key. Verification: disabling either layer shall produce a test failure in integration testing.
  • Fail secure. If the cloud is unreachable, the device shall continue to measure and store readings locally, shall not fall back to unauthenticated transport, and shall clearly indicate unsynchronised state to the user. Verification: simulated cloud outage test case.
  • Input validation. Every payload received from the cloud shall be validated against a defined schema before being acted on, with invalid payloads rejected and logged. Verification: fuzz testing with out-of-schema payloads.

These four requirements cost the team about four hours to write and about two days to implement. They map directly to mitigations in the threat model and to hazardous situations in the ISO 14971 risk file. They are testable. They are defensible. When the notified body asks "how did you incorporate security into your software architecture", the team has a one-sentence answer and a traceability matrix.

The same startup, without secure-by-design, would have written a single vague requirement, shipped a working device, and discovered all four of these gaps during pre-audit. The fix would have been three sprints instead of three days, at a point in the runway where three sprints might not exist.

The Subtract to Ship playbook

Principle 1. Least privilege, applied ruthlessly. Every credential, every API key, every service account grants the minimum access required to function and nothing more. A device credential that can read its own data only is a credential whose compromise is a contained event. A credential that can read everything is a credential whose compromise is a front-page story. Least privilege is the single cheapest security principle and the most under-used.
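Applied to the pulse oximeter example above, least privilege is small enough to sketch in a few lines. `DeviceToken` and `authorize` are hypothetical names for illustration, not a real API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceToken:
    """Per-device credential carrying an explicit, minimal scope set."""
    device_id: str
    scopes: frozenset  # e.g. {"telemetry:write:PX-001"}

def authorize(token: DeviceToken, action: str, target_device: str) -> int:
    """Return an HTTP-style status: 200 only if the token's scopes cover
    exactly this action on exactly this device, 403 for everything else."""
    required = f"{action}:{target_device}"
    return 200 if required in token.scopes else 403
```

The negative test from the worked example falls out directly: a PX-001 token asking for PX-002's telemetry gets 403, which is the contained-compromise property the principle is after.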

Principle 2. Defense in depth, two layers minimum. No single control carries the full weight of the device's security. Transport encryption plus application-layer authentication. Network segmentation plus host firewall. Code signing plus secure boot. The reason is not paranoia, it is probability. Every control has a non-zero failure rate, and two independent controls multiply those rates: two controls that each fail one time in a hundred fail together one time in ten thousand.
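The second layer from the worked example fits in the standard library. TLS is assumed to be handled by the transport; the sketch below adds independent application-layer message authentication with a per-device key, so a transport failure alone does not expose unauthenticated payloads:

```python
import hmac
import hashlib

# Application-layer message authentication as the second, independent
# layer beneath TLS. Per-device keys and the tag format are assumptions.

def sign_payload(device_key: bytes, payload: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the payload with the device's key."""
    return hmac.new(device_key, payload, hashlib.sha256).digest()

def verify_payload(device_key: bytes, payload: bytes, tag: bytes) -> bool:
    """Constant-time comparison; False on any tampering or wrong key."""
    return hmac.compare_digest(sign_payload(device_key, payload), tag)
```

`hmac.compare_digest` matters here: a naive `==` comparison leaks timing information, which would quietly weaken the very layer this sketch adds.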

Principle 3. Fail secure, not fail open. When something goes wrong, the default behaviour must preserve security, not preserve availability at the cost of security. If an authentication server is unreachable, the device does not fall through to unauthenticated mode. It surfaces the degradation to the user and holds the line. Fail secure interacts with safety: for a life-supporting device, "hold the line" must be designed carefully so that it does not become a therapy interruption. That design work is risk management, which is exactly where secure by design lives.
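A minimal sketch of the fail-secure behaviour described above, for the pulse oximeter case: `upload_tls` stands in for the authenticated transport, and the class and state names are assumptions, not a real device API:

```python
from enum import Enum, auto

class LinkState(Enum):
    SYNCED = auto()
    UNSYNCED = auto()  # surfaced to the user, never silent

class Device:
    """Fail-secure sync sketch: measure and store locally regardless of
    cloud state; never fall back to an unauthenticated channel."""
    def __init__(self, upload_tls):
        self._upload = upload_tls  # authenticated channel only
        self._buffer: list = []    # local store for readings
        self.state = LinkState.SYNCED

    def record(self, reading) -> None:
        self._buffer.append(reading)  # measurement never blocked on cloud
        try:
            self._upload(self._buffer)
            self._buffer.clear()
            self.state = LinkState.SYNCED
        except ConnectionError:
            # Cloud unreachable: keep data locally and flag the state.
            # There is deliberately NO unauthenticated fallback path.
            self.state = LinkState.UNSYNCED
```

The simulated-outage test case from the worked example is then just a transport stub that raises `ConnectionError` and an assertion that readings survive locally with the state flagged.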

Principle 4. Input validation, everywhere, always. Every external input is untrusted until validated. Every external input includes sensor streams, API payloads, configuration files, firmware updates, USB data, BLE packets, and user touches on a screen. Input validation is the control that catches the vulnerabilities nobody anticipated, because it blocks the class of unexpected input rather than the specific exploit.
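Schema validation of cloud-to-device payloads needs nothing beyond the standard library. The field names and bounds below are invented for illustration; the point is that anything out of schema is rejected as a class, not exploit by exploit:

```python
import json

# Hypothetical command schema: field name -> (exact type, range check).
SCHEMA = {
    "cmd": (str, lambda v: v in {"set_interval", "sync_time"}),
    "value": (int, lambda v: 0 < v <= 3600),
}

def validate(raw: bytes) -> dict:
    """Parse and validate; return the payload dict or raise ValueError.
    Malformed JSON, unknown keys, missing keys, wrong types, and
    out-of-range values are all rejected before anything acts on them."""
    try:
        payload = json.loads(raw)
    except ValueError as e:  # json.JSONDecodeError is a ValueError
        raise ValueError(f"malformed JSON: {e}")
    if not isinstance(payload, dict) or set(payload) != set(SCHEMA):
        raise ValueError("unexpected keys")
    for key, (typ, check) in SCHEMA.items():
        v = payload[key]
        # type(v) is typ, not isinstance: keeps bool from passing as int
        if type(v) is not typ or not check(v):
            raise ValueError(f"invalid {key!r}")
    return payload
```

Fuzz testing the verification bullet then reduces to throwing out-of-schema payloads at `validate` and asserting that every one raises rather than reaching the command handler.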

Implementation steps for small teams.

  1. Write security requirements at the same time as functional requirements, not after. The same requirements document, the same sprint, the same reviews.
  2. Make least privilege, defense in depth, fail secure, and input validation explicit acceptance criteria on user stories that touch any trust boundary.
  3. Add a security review gate to architecture decisions. One hour with the team, a threat model diff, a go or no-go.
  4. Treat SOUP dependency updates as architectural events, because they are. A new library version can change the security posture of the entire device.
  5. Verify security requirements with automated tests whenever possible. Manual security verification in a four-person team drifts the moment the tester leaves the room.
  6. Document traceability from security requirement to design output to verification result. This is what EN 62304:2006+A1:2015 asks for, and it is what the notified body will read.
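Step 6 can itself be automated. A sketch of a traceability gate over invented requirement IDs; a real project would pull these tables from its requirements tool rather than hard-coding them:

```python
# Every security requirement must link to at least one design output
# and one passing verification record. IDs below are illustrative.
requirements = {"SEC-001", "SEC-002", "SEC-003", "SEC-004"}
design_outputs = {"SEC-001": ["ARCH-4.2"], "SEC-002": ["ARCH-4.3"],
                  "SEC-003": ["ARCH-5.1"], "SEC-004": ["ARCH-5.2"]}
verification = {"SEC-001": "pass", "SEC-002": "pass",
                "SEC-003": "pass", "SEC-004": "fail"}

def trace_gaps(reqs, designs, verifs):
    """Return (requirement ID, reason) for every broken trace link."""
    gaps = []
    for r in sorted(reqs):
        if not designs.get(r):
            gaps.append((r, "no design output"))
        if verifs.get(r) != "pass":
            gaps.append((r, "no passing verification"))
    return gaps
```

Run on every merge, an empty gap list is exactly the traceability evidence the notified body reads, produced continuously instead of reconstructed before the audit.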

Tibor's ROI framing for founders who push back on the up-front effort: every hour spent on secure by design during architecture saves roughly five to ten hours of rework during pre-audit, and avoids the change control loop with the notified body entirely. The pushback always ends the same way: the founders who invest early end up with shorter audits and fewer findings. The founders who defer end up renegotiating timelines they already committed to investors.

Reality Check

  1. Can you point to the specific software requirements that encode least privilege, defense in depth, fail secure, and input validation for your device, or are they implied?
  2. Do your requirements traceability matrices show a link from every security requirement to a design output and a verification result?
  3. If the cloud backend disappeared tomorrow, would your device fail secure, or would it degrade in a way that compromises either safety or security?
  4. When was the last time a SOUP dependency update triggered a review of your security architecture?
  5. Are your security requirements testable, or do they read as aspirations?
  6. Does your notified-body-facing documentation make clear that security was considered during architecture, not after?
  7. If Tibor walked into your next audit and asked "where does your architecture document address information security", could you show him a paragraph, or would you reach for a separate cybersecurity appendix?
  8. For every credential in your device, can you describe the scope of damage if that credential leaked?

Frequently Asked Questions

Is "secure by design" a formal MDR requirement? Not in those exact words. MDR Annex I GSPR 17.2 requires information security to be part of the software lifecycle, and EN IEC 81001-5-1:2022 operationalises this with activities that begin at requirements and architecture. Secure by design is the industry name for doing what the standard and the MDR already require.

Can a Class I device skip secure by design? No. The obligation in Annex I GSPR 17.2 applies to all software devices. Lower-class devices scale the depth and formality, but the four principles in this post are cheap enough that skipping them is rarely a real cost saving.

How much will secure by design cost a three-person startup? In Tibor's experience, applying the four principles during architecture and early development adds roughly five to ten percent to development effort in the first quarter, and saves substantially more than that during pre-audit and certification. The cost profile is front-loaded, which is the opposite of what founders instinctively want, and the reason Tibor's ROI argument matters.

What happens if we discover a missing principle after certification? A change control process, a threat model update, a risk file update, a verification update, and a notified body re-engagement whose scope depends on significance. This is the path Tibor walks startups through several times a year. It is never fun.

Do the four principles replace a threat model? No. The principles are design guidance. The threat model is the process that tells you which principles matter most at which trust boundaries. Both are required.

Sources

  1. Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I, GSPR 17.2 and 17.4.
  2. MDCG 2019-16 Rev.1 (December 2019, Rev.1 July 2020), Guidance on Cybersecurity for medical devices.
  3. EN IEC 81001-5-1:2022, Health software and health IT systems safety, effectiveness and security, Part 5-1: Security, Activities in the product life cycle.
  4. EN ISO 14971:2019+A11:2021, Medical devices, Application of risk management to medical devices.
  5. EN 62304:2006+A1:2015, Medical device software, Software life cycle processes.