---
title: What Is Usability Engineering for Medical Devices? A Startup Introduction
description: Usability engineering medical devices startup primer: what EN 62366-1 actually requires and why a group review is not a summative evaluation.
authors: Tibor Zechmeister, Felix Lenhard
category: Usability Under MDR
primary_keyword: usability engineering medical devices startup
canonical_url: https://zechmeister-solutions.com/en/blog/usability-engineering-medical-devices-startup
source: zechmeister-solutions.com
license: All rights reserved. Content may be cited with attribution and a link to the canonical URL.
---

# What Is Usability Engineering for Medical Devices? A Startup Introduction

*By Tibor Zechmeister (EU MDR Expert, Notified Body Lead Auditor) and Felix Lenhard.*

> **Usability engineering for medical devices is the structured, evidence-producing process defined in EN 62366-1:2015+A1:2020 that identifies use-related hazards, designs them out of the user interface, and proves through recruited-user testing that the remaining residual risks are acceptable. It is not the same as a group review where the development team passes the prototype around the table and decides it feels intuitive.**

## TL;DR
- Usability engineering is a formal process defined by EN 62366-1:2015+A1:2020, not a design opinion exchanged over coffee.
- MDR Annex I §5 requires ergonomic design and reduction of use-related risks; Annex I §22 adds stricter requirements for devices intended for lay users.
- A group review of a prototype is formative at best. Calling it a summative evaluation is one of the most common reasons startups receive nonconformities.
- A credible summative evaluation requires recruited representative users, a real or simulated use environment, recorded observations, and documented outcomes.
- The use specification document is the most skipped and most important artefact. Without it, hazard analysis misses scenarios.
- For connected devices, the app side of the use experience belongs inside the same EN 62366-1 process as the hardware.

## Why usability engineering matters before the first prototype ships

Tibor remembers a handheld device whose display had been quietly optimised for left-hand use, because the engineering team happened to be dominated by left-handers. The team never noticed. It was only during summative evaluation with recruited users that the pattern became visible: right-handed users saw the display upside-down. The fix was not expensive in the end (a software iteration that let the user flip orientation), but the lesson was that a development team cannot see its own blind spots without structured user testing. That is exactly the gap usability engineering exists to close.

For a startup, the business case is straightforward. Fixing a use-related hazard on paper during the use specification phase costs a few meetings. Fixing it after summative evaluation costs a design iteration. Fixing it after the device is on the market costs a full change control procedure, notified body engagement at the next surveillance audit, and the reputational cost of a field safety corrective action. The further right on the timeline the finding lands, the more expensive it becomes.

Felix has watched the same pattern across the 44 startups he has coached through the Subtract to Ship methodology. The founders who treat usability engineering as a box to tick at the end of development invariably discover, too late, that it was supposed to shape the architecture of the user interface from the beginning. The ones who plan it early gain a second, unexpected benefit: the recruited users are often willing to become early customers when the engagement is handled ethically and transparently.

## What EN 62366-1 actually asks of a manufacturer

The standard EN 62366-1:2015+A1:2020 is titled *Medical devices, Part 1: Application of usability engineering to medical devices*. It is referenced by MDR Annex I for the ergonomic and lay-user requirements, and its use gives a manufacturer a presumption of conformity with those general safety and performance requirements. What follows is a plain-language summary of what the standard requires.

First, the manufacturer produces a **use specification**. This document describes who the intended users are, what the intended use environment is, what the user interface consists of, what operational steps are performed, and what the intended medical indication is. Tibor emphasises that this document is the most-skipped and most-important artefact in the whole process. The right approach is to divide and conquer: do not simply write "the clinician uses the device". Decompose the use into every real-world procedure the device will be subjected to, including cleaning, transport, initial installation, normal operation, and edge cases. Granular procedures make hazards visible. Without that decomposition, hazard analysis will miss scenarios.
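One lightweight way to enforce the divide-and-conquer discipline is to keep the use specification as structured data rather than free prose, so gaps are visible mechanically. This is a minimal sketch, not anything prescribed by EN 62366-1; the field names, procedures, and lifecycle phases below are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class UseProcedure:
    """One granular real-world procedure the device will be subjected to."""
    name: str
    environment: str                 # where the step happens
    user_profile: str                # who performs it
    steps: list[str] = field(default_factory=list)

# Illustrative decomposition for a handheld device; these entries are
# assumptions for this sketch, not terms taken from the standard.
use_specification = [
    UseProcedure("initial installation", "patient home", "lay user",
                 ["unbox", "charge", "first power-on"]),
    UseProcedure("normal operation", "patient home", "lay user",
                 ["take measurement", "read result"]),
    UseProcedure("cleaning", "patient home", "lay user",
                 ["wipe housing", "keep connector dry"]),
    UseProcedure("transport", "outdoors", "lay user or carer",
                 ["pack device", "carry in bag"]),
]

# A cheap completeness check: flag lifecycle phases with no procedure yet.
REQUIRED_PHASES = {"initial installation", "normal operation",
                   "cleaning", "transport", "maintenance", "disposal"}
covered = {p.name for p in use_specification}
missing = sorted(REQUIRED_PHASES - covered)
print("Phases still undocumented:", missing)  # → ['disposal', 'maintenance']
```

The point of the structure is the completeness check at the end: a prose document lets "maintenance" and "disposal" silently go missing; a structured one complains.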

Second, the manufacturer identifies **hazard-related use scenarios**. These are the specific sequences of user action, or inaction, that could lead to harm. A classic example from Tibor's audit experience involves a tongue-controlled wheelchair for quadriplegic patients. The colour of the mouthpiece had been chosen during design and tested indoors. It was only during audit that the question "what about outdoor use?" surfaced. The chosen colour was exactly what attracts bees. A patient using the wheelchair outdoors could have been stung on the mouthpiece. A use specification that covered only indoor use missed the outdoor hazard entirely. The fix was a colour change plus an explicit documented outdoor-use scenario.

Third, the manufacturer runs **formative evaluations** during development. Formative means early, iterative, diagnostic. The point is to find problems while they are still cheap to fix. The outputs feed back into the user interface design.

Fourth, the manufacturer runs a **summative evaluation** before release. This is the formal, final, evidence-producing test. It is the one most commonly done incorrectly.

## The group-review trap: when a review is not a summative

In Tibor's notified body practice, the single most common usability nonconformity looks like this. A startup runs a group review. The development team passes the prototype around the table. Everyone agrees the device is intuitive. The team writes a short memo, files it under "usability testing", and tells the notified body that summative evaluation is complete. The notified body pushes back and demands a real summative evaluation. The clock resets.

A real summative evaluation, as EN 62366-1 intends it, has four non-negotiable ingredients:
- **Recruited users** who are representative of the intended user group: not colleagues, not sales staff, not friendly customers, not key opinion leaders.
- **A real or simulated use environment** that reflects the intended use context, not the engineering office.
- **Recorded observations**, ideally video and structured notes, so an auditor can see what happened.
- **Documented outcomes** tied back to the hazard-related use scenarios identified earlier in the process.

Tibor sees startups save money in exactly the wrong place by recruiting engineers, sales staff, or key opinion leaders as test participants. Engineers are too skilled and too familiar with the device, and the device seems intuitive to them. Sales staff know every feature by heart because they demonstrate it daily. Key opinion leaders are clinical experts far above the skill level of a representative home user. The real user, who might be a 70-year-old managing a chronic condition at home, cannot use the device at all. That mismatch surfaces after market entry, triggers change control, and costs more than recruiting the right users would have cost in the first place.

## A worked example: the connected glucose tracker

Consider a fictional early-stage startup building a connected glucose tracker. The hardware is a small handheld reader. The app pairs over Bluetooth and displays results. The intended user is a type-2 diabetes patient managing the condition at home, age range 55 to 80.

A group review by the four-person engineering team concludes that the device is intuitive. Onboarding takes 90 seconds. The team confidently tells the notified body that summative is done. The notified body asks three questions. Who were the test participants? What was the use environment? Where are the recorded outcomes tied to the hazard-related use scenarios? The startup has answers to none of these, because it ran a group review, not a summative.

The startup restarts. It writes a proper use specification that decomposes the device experience into unboxing, initial app download, Bluetooth pairing, first measurement, routine measurement, replacing the consumable, cleaning, and charging. It identifies hazard-related use scenarios: a user who fails to pair the device and believes their glucose is fine when it was not measured, a user who misreads the colour-coded result under poor lighting, a user who replaces the consumable incorrectly and invalidates the next three measurements.

It runs formative testing with six recruited participants. Two find the pairing step confusing. The startup redesigns the onboarding. It then runs a summative evaluation with 15 recruited users in a simulated home environment, with video capture and structured task scoring. The notified body accepts the evidence. Total elapsed time: roughly three months longer than the original plan. Total cost: less than the change control that would have followed a post-market failure.
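The structured task scoring mentioned above can be kept as simple tabular data so every use error traces back to a hazard-related use scenario. A hedged sketch only; the hazard IDs, task names, and results are invented for illustration:

```python
# Summative task scoring tied back to hazard-related use scenarios.
# All hazard IDs, tasks, and results below are invented for illustration.
observations = [
    # (participant, task, linked_hazard_scenario, result)
    ("P01", "pair device over Bluetooth", "HZ-01", "success"),
    ("P02", "pair device over Bluetooth", "HZ-01", "use_error"),
    ("P01", "read colour-coded result",   "HZ-02", "success"),
    ("P02", "read colour-coded result",   "HZ-02", "close_call"),
    ("P03", "replace consumable",         "HZ-03", "success"),
]

# Group use errors and close calls by hazard scenario, so each finding
# can be carried into the risk file and the residual-risk evaluation.
findings: dict[str, list[str]] = {}
for participant, task, hazard, result in observations:
    if result != "success":
        findings.setdefault(hazard, []).append(
            f"{participant}: {result} on '{task}'")

for hazard, events in sorted(findings.items()):
    print(hazard, "->", events)
```

Whatever the format (spreadsheet, database, or a script like this), the property an auditor looks for is the same: every non-success outcome resolves to a named hazard-related use scenario, not to a loose pile of notes.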

## The Subtract to Ship playbook for usability engineering

Felix's Subtract to Ship approach reduces usability engineering to its essential moves for an early-stage team with limited cash.

Step one. Write the use specification before writing the requirements. Decompose the user experience into granular procedures. Treat the document as the foundation of both the usability file and the risk file under EN ISO 14971:2019+A11:2021.

Step two. Fold the app interface, if there is one, into the same process as the hardware. An 80-year-old user who cannot download and configure the app safely creates exactly the same clinical risk as a user who misuses a physical control. EN 62366-1 does not care whether the interface is plastic or pixels.

Step three. Run formative evaluations early and often. Three participants on a Tuesday afternoon is better than fifteen participants the week before submission. Formative testing is cheap, diagnostic, and designed to find problems while the design can still absorb them.

Step four. Budget for a real summative evaluation. For mobile, handheld, or software-only devices, a simulated environment can genuinely save money without corner-cutting. For devices that require real clinical or hospital conditions, there is no lean alternative. Recruit representative users, record observations, and tie outcomes back to hazard-related use scenarios.

Step five. Test the instructions for use the same way the device is tested. The worst pattern Tibor sees is an intuitive device wrapped in a 150-page manual that nobody can read. The best instructions-for-use test is to give the recruited user the device and the instructions, offer no coaching, and watch them try to complete the task. If they cannot, the instructions failed and they need to be rewritten.
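The link in step one between the usability file and the risk file can be enforced mechanically: every hazard-related use scenario should resolve to at least one entry in the EN ISO 14971 risk management file. A minimal sketch with invented identifiers:

```python
# Traceability check: every hazard-related use scenario must appear in
# the risk management file. All identifiers are invented for this sketch.
use_scenarios = {"HZ-01", "HZ-02", "HZ-03"}   # from the usability file
risk_file_entries = {                          # from the ISO 14971 file
    "RISK-007": {"HZ-01"},
    "RISK-008": {"HZ-02"},
}

traced = set().union(*risk_file_entries.values())
untraced = sorted(use_scenarios - traced)
print("Scenarios missing from the risk file:", untraced)  # → ['HZ-03']
```

Run as a gate before submission, this kind of check answers Reality Check question 2 with evidence instead of memory.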

## Reality Check

1. Can the team produce a use specification that decomposes the device experience into at least six named procedures, including cleaning and edge cases?
2. Has every identified hazard-related use scenario been traced into the risk management file under EN ISO 14971:2019+A11:2021?
3. If the device has an app, is the app inside the EN 62366-1 process or sitting in a separate "it's just software" bucket?
4. Were the planned summative participants recruited from the intended user group, or from the team's immediate network?
5. Will the summative evaluation be held in a real or simulated use environment that resembles how the device will actually be used?
6. Are the observations going to be recorded in a way an auditor can review, rather than left to the memory of the people in the room?
7. Has the instructions-for-use document been tested with a representative user and no coaching?
8. If a hazard-related use scenario is discovered during summative, does the QMS have a change-control path that does not require restarting the whole process?

## Frequently Asked Questions

**Is usability engineering mandatory for Class I devices under MDR?**
Yes. MDR Annex I §5 applies to every device regardless of class. A Class I manufacturer still has to demonstrate that use-related risks have been reduced as far as possible. Using EN 62366-1:2015+A1:2020 gives presumption of conformity.

**Can a startup use its own employees for formative evaluations?**
Formative evaluations are exploratory, so internal participants can be used cautiously, but the output is weak evidence. Summative evaluation must use recruited users representative of the intended user group. Employees are not acceptable participants for summative testing.

**How many participants does a summative evaluation need?**
EN 62366-1 does not prescribe a fixed number. Fifteen representative users is a widely used figure for moderate-complexity devices, but the justification must tie to the hazard-related use scenarios and the expected user distribution.

**Does usability engineering apply to software-only medical devices?**
Yes. The interface is pixels rather than plastic, but the obligation is identical. Software-only devices still have to run a use specification, hazard-related use scenario analysis, formative evaluations, and a summative evaluation.

**What happens if the notified body rejects a summative evaluation?**
The manufacturer reruns it with the gaps fixed: recruited users, proper environment, recorded observations, outcomes tied to hazards. This is the most common route to a several-month delay and is avoidable by planning summative properly the first time.

**Is training a valid way to close a use-related risk?**
It is the last resort. EN 62366-1 and EN ISO 14971:2019+A11:2021 both expect the manufacturer to design out the hazard first, guard against it second, and only use information or training when the first two options are exhausted.

## Related reading
- [MDR Usability Requirements: How IEC 62366-1 Helps You Demonstrate Conformity](/blog/mdr-usability-iec-62366-1-conformity). How the standard maps to Annex I presumption of conformity.
- [MDR Annex I Usability Requirements: What the Regulation Demands](/blog/mdr-annex-i-usability-requirements). The regulatory text in §5 and §22 in plain language.
- [IEC 60601-1-6 Usability Cross-Reference](/blog/iec-60601-1-6-usability-cross-reference). How the collateral standard for electrical equipment hooks into EN 62366-1.
- [MDR Annex I GSPR Primer](/blog/mdr-annex-i-gspr). Where usability fits inside the general safety and performance requirements.

## Sources
1. Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I §5 (ergonomics) and §22 (devices for lay persons).
2. EN 62366-1:2015+A1:2020. Medical devices, Part 1: Application of usability engineering to medical devices.
3. EN ISO 14971:2019+A11:2021. Medical devices, Application of risk management to medical devices.

---

*This post is part of the [Usability Under MDR](https://zechmeister-solutions.com/en/blog/category/usability) cluster in the [Subtract to Ship: MDR Blog](https://zechmeister-solutions.com/en/blog). For EU MDR certification consulting, see [zechmeister-solutions.com](https://zechmeister-solutions.com).*
