The EN 62366-1:2015+A1:2020 usability engineering process is a sequence of five linked deliverables: a use specification, a user interface specification, formative evaluation, summative evaluation, and a usability engineering file that ties everything together. It is not a survey. It is not a user test. It is a closed-loop process that starts with defining who the user is and ends with documented evidence that the final device is safe in their hands.
By Tibor Zechmeister and Felix Lenhard.
TL;DR
- EN 62366-1:2015+A1:2020 defines a five-step process: use specification, user interface specification, formative evaluation, summative evaluation, usability engineering file.
- Each step feeds the next. Skipping the use specification undermines every downstream step, including the summative evaluation a notified body will actually scrutinise.
- Formative evaluation is iterative and low-stakes. Summative evaluation is the validation event that must use recruited representative users in a realistic or simulated environment.
- MDR Annex I GSPR §5 and §22 are the regulatory anchors for use-related risk and lay user performance.
- Use-related risk is integrated with EN ISO 14971:2019+A11:2021 risk management. It is not a parallel process.
- The usability engineering file is the single record that a notified body audits. One file, one narrative, traceable from use specification to summative results.
Why the process matters (Hook)
Tibor has seen more startups fail a usability audit on process than on product. The device is often well designed. The team is often thoughtful. What fails is the absence of an end-to-end process that a notified body auditor can follow from the first page of the use specification to the last page of the summative evaluation report.
The failure mode is consistent. A founder treats each deliverable as an independent document. The use specification is written by a regulatory consultant in one week. The user interface specification is written by an engineer in another week with no reference to the use specification. Formative evaluation is a series of informal demos at trade shows. Summative evaluation is a group review in the office with two friendly users. The notified body sees five documents that do not connect and writes a finding that translates, in plain language, to "you did not run the process".
The other failure mode Felix sees repeatedly is the opposite: a team builds a research-grade usability program that would satisfy a university human factors department but takes two years to complete. The startup runs out of runway before summative evaluation. The process is gold-plated rather than lean.
EN 62366-1 is not a research framework. It is a compliance framework that requires discipline, traceability, and a closed loop. This post walks the full loop, in the order the standard prescribes, with a view to what a Subtract to Ship team can credibly do.
What MDR actually says (Surface)
MDR does not describe the usability process step by step. It sets the performance obligations in Annex I and leaves the process to the harmonised standard.
Annex I GSPR §5 requires that manufacturers reduce, as far as possible, risks linked to the ergonomic features of the device and the environment in which it is used, taking into account technical knowledge, experience, education, training, and the medical and physical conditions of intended users. That sentence is the regulatory mandate behind the entire EN 62366-1 process.
Annex I GSPR §22 requires devices intended to be used by lay persons to perform appropriately for the skills and means available to such users and to minimise the risk of incorrect handling.
Neither clause names a standard. Both clauses are closed out, in practice, by following EN 62366-1:2015+A1:2020. Compliance with the harmonised standard provides presumption of conformity to the relevant parts of Annex I. The process itself lives in clauses 5.1 through 5.9 of the standard and can be summarised as:
- Prepare the use specification (clause 5.1).
- Identify user interface characteristics related to safety and potential use errors (clause 5.2).
- Identify known or foreseeable hazards and hazardous situations, in cooperation with risk management under EN ISO 14971:2019+A11:2021 (clause 5.3).
- Identify and describe hazard-related use scenarios (clause 5.4).
- Select the hazard-related use scenarios for summative evaluation (clause 5.5).
- Establish the user interface specification (clause 5.6).
- Establish the user interface evaluation plan, including formative and summative evaluation (clause 5.7).
- Perform user interface design, implementation, and formative evaluation (clause 5.8).
- Perform summative evaluation of the usability of the user interface (clause 5.9).
Every output from this sequence is collected in the usability engineering file, which is the record EN 62366-1 requires the manufacturer to maintain and a notified body will audit.
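As a mental model, the clause 5.1 through 5.9 sequence behaves like an ordered checklist in which each step consumes the outputs of the steps before it. A minimal illustrative sketch (the step objects and function names are our own simplification, not the standard's text):

```python
# Illustrative sketch: the EN 62366-1 clause 5.1-5.9 sequence as an
# ordered checklist. Each step's output feeds the steps after it, so
# the "next open step" is always the earliest one without an output.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Step:
    clause: str
    output: str
    done: bool = False


PROCESS = [
    Step("5.1", "use specification"),
    Step("5.2", "safety-related user interface characteristics"),
    Step("5.3", "use-related hazards and hazardous situations"),
    Step("5.4", "hazard-related use scenarios"),
    Step("5.5", "scenarios selected for summative evaluation"),
    Step("5.6", "user interface specification"),
    Step("5.7", "user interface evaluation plan"),
    Step("5.8", "formative evaluation reports"),
    Step("5.9", "summative evaluation report"),
]


def next_open_step(process: List[Step]) -> Optional[Step]:
    """Return the earliest step whose output is missing, or None
    once the loop is closed and the file is complete."""
    for step in process:
        if not step.done:
            return step
    return None
```

The point of the sketch is the ordering constraint: there is no legitimate state in which clause 5.9 is done while clause 5.1 is not, which is exactly the disconnected-documents failure mode described above.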
A worked example (Test)
Consider a battery-powered handheld measurement device intended for both bedside clinical use and home use by a patient or family caregiver. The team is small. The budget is real but finite. Here is how the EN 62366-1 process plays out in practice.
Use specification. The team decomposes use into seven procedures: commissioning, routine measurement, cleaning, transport, home use by a patient, battery management, and disposal. For each, they write a concrete user profile and environment. Lay users are explicitly named, because GSPR §22 is in scope.
User interface characteristics and hazards. The team reviews the display, the buttons, the status indicators, the audible alarm, and the app pairing flow. They cross-check each element against the risk management file under EN ISO 14971:2019+A11:2021, and add use-related hazards that the earlier risk analysis missed. Cleaning chemical incompatibility, reading errors under low light, and lay user misinterpretation of results all enter the hazard list.
Hazard-related use scenarios. For each significant use-related hazard, the team writes a concrete scenario. "A patient at home with reduced vision interprets a borderline reading as normal and does not call their clinician." The scenarios are specific enough to turn into test protocols later.
User interface specification. The team documents the display layout, the button logic, the audible feedback, the IFU, and the app screens in a single user interface specification. This is the bridge from the early analysis to the concrete design the summative evaluation will test.
Formative evaluation. The team runs three formative rounds. Round one uses a paper prototype with five internal users to pressure-test the interaction flow. Round two uses a working prototype with eight recruited representative users in a conference room set up to resemble a patient's home. Round three uses a production-intent device with five recruited clinicians in a simulated ward. Each round produces a short report with findings, design changes, and a rationale. None of these are summative.
Summative evaluation. The team runs one summative evaluation with 15 recruited representative users per distinct user group, clinicians and lay users, 30 users in total, in a simulated environment that matches the use specification. The protocol tests exactly the hazard-related use scenarios selected under clause 5.5. Each task is scored pass or fail against predefined criteria. Use errors, close calls, and use difficulties are recorded with video. The summative report is the evidence of validation.
Usability engineering file. The team collects the use specification, the user interface characteristics, the hazard list, the hazard-related use scenarios, the user interface specification, the evaluation plan, the formative reports, the summative report, and a traceability matrix into a single file. The file tells a story an auditor can follow in under an hour.
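The traceability matrix in that file is the piece an auditor leans on hardest: every summative scenario must link back to a use specification procedure and a risk file hazard. A minimal sketch of the cross-check the matrix enables (all identifiers below are hypothetical examples, not a prescribed scheme):

```python
# Illustrative sketch: verify every summative scenario traces back to
# both a use-specification procedure and a risk-file hazard.
# All identifiers are hypothetical examples.

use_spec_procedures = {"USP-01", "USP-02", "USP-05"}        # e.g. cleaning, home use
risk_file_hazards = {"HAZ-12", "HAZ-17", "HAZ-23"}

summative_scenarios = {
    "SCN-01": {"procedure": "USP-05", "hazard": "HAZ-17"},  # lay user misreads result
    "SCN-02": {"procedure": "USP-02", "hazard": "HAZ-12"},  # cleaning chemical damage
}


def trace_gaps(scenarios, procedures, hazards):
    """Return scenario IDs with a broken upstream link, i.e. a break
    in the use spec -> risk file -> summative chain."""
    gaps = []
    for scn_id, links in scenarios.items():
        if links["procedure"] not in procedures or links["hazard"] not in hazards:
            gaps.append(scn_id)
    return gaps


print(trace_gaps(summative_scenarios, use_spec_procedures, risk_file_hazards))  # -> []
```

An empty result is what "the file tells a story" means in practice: no summative task exists that an auditor cannot walk back to its origin.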
This is the full loop. No step is optional under EN 62366-1.
The Subtract to Ship playbook (Ship)
A lean team running the EN 62366-1 process for the first time can close the loop in a single calendar quarter if it respects four principles.
Principle 1. Use specification first, always. Every hour spent on the use specification saves five hours downstream. This is the non-negotiable starting point. The separate post on the MDR use specification covers the detail.
Principle 2. Integrate usability with risk management from day one. EN 62366-1 clause 5.3 is explicit about the cooperation with EN ISO 14971. Run the hazard identification once, jointly, not twice. The use-related hazards belong in the same risk management file as the mechanical and electrical hazards. This is where Tibor's audit finding "parallel files, inconsistent content" most often lands, and it is entirely avoidable.
Principle 3. Formative evaluation is iterative and cheap; summative evaluation is the expensive validation event. Spend the formative budget on early, cheap, frequent rounds with small recruited groups. Spend the summative budget on one well-prepared event with representative users in a realistic or credibly simulated environment. The auditor's expectation is not elaborate research. It is one summative event done properly, with a documented protocol, recruited users, a realistic environment, and recorded results.
Principle 4. Build the usability engineering file as you go. The file is not a final deliverable written after summative evaluation. It is a living record. Add to it at every step. When summative evaluation is complete, the file is 90 percent written, not a quarter of a project on its own.
A startup that follows these four principles will have a compliant, lean usability engineering program. A startup that treats usability as a document exercise at the end of development will spend twice the money and ship late.
Reality Check
Work through each item honestly. A "no" anywhere in the list is a backlog item, not a verdict.
- Is there a use specification that decomposes use into at least five distinct procedures with named users and environments?
- Is the use-related hazard analysis integrated into the same risk management file as the mechanical and electrical hazards?
- Is there a written user interface specification that an external reviewer could use to re-create the interface?
- Has formative evaluation run at least twice with recruited users outside the company?
- Has summative evaluation been planned with recruited representative users in a realistic or simulated environment, and a predefined pass or fail protocol?
- Are summative hazard-related use scenarios traceable back to the use specification and the risk file?
- Is the usability engineering file a single coherent record, or a folder of loosely related documents?
- If a notified body auditor asked for the file tomorrow, could the team tell the story from use specification to summative result in less than an hour?
Frequently Asked Questions
What is the difference between formative and summative evaluation? Formative evaluation is iterative and used to improve the design. It runs throughout development and is not required to have formal pass or fail criteria. Summative evaluation is the final validation event. It uses recruited representative users in a realistic or simulated environment to generate evidence that the final design is safe for intended use. Under EN 62366-1, both are required.
Can internal employees be used for summative evaluation? No. Summative evaluation requires recruited representative users who match the user profiles in the use specification. Internal employees know the device too well, are biased in favour of it, and do not represent the real population. In Tibor's audit experience, summative testing with internal staff is one of the most common reasons a usability file is rejected.
How many users are enough for summative evaluation? EN 62366-1 does not fix a number. A common benchmark used in practice and informed by industry guidance is 15 representative users per distinct user group. For a device with two user groups, that means 30 users. For a single user group, 15. The number should be justified in the evaluation plan and defensible against the hazard-related use scenarios being tested.
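The arithmetic in that answer is simple but worth making explicit when planning recruitment, because groups add rather than share users. A minimal sketch using the 15-per-group benchmark (the benchmark is an industry convention, not a number fixed by the standard):

```python
# Common industry benchmark; EN 62366-1 itself does not fix a number.
USERS_PER_GROUP = 15


def summative_recruitment(user_groups):
    """Total representative users to recruit: the benchmark applies
    per distinct user group, so distinct groups add, never overlap."""
    return USERS_PER_GROUP * len(user_groups)


summative_recruitment(["clinician", "lay user"])  # 30
summative_recruitment(["clinician"])              # 15
```

Whatever number lands in the evaluation plan, the plan has to justify it against the hazard-related use scenarios being tested, not just cite the benchmark.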
Is a usability engineering file separate from the risk management file? They are separate records but they must be tightly integrated. EN 62366-1 clause 5.3 requires cooperation with risk management under EN ISO 14971:2019+A11:2021. In practice, use-related hazards live in the risk management file and are cross-referenced from the usability engineering file. The two files must tell a consistent story.
Does the process apply to software-only devices? Yes. EN 62366-1 applies regardless of whether the user interface is hardware, software, or both. Software as a medical device must still produce a use specification, a user interface specification, formative evaluation, and summative evaluation. For app-based devices, the summative evaluation often runs in a simulated environment on the user's own phone.
Related reading
- MDR Use Specification: Defining Users, Environments, and Profiles goes deep on the foundation document that drives the rest of the process.
- MDR Usability for Electrical Devices: IEC 60601-1-6 and IEC 62366 shows how the electrical safety collateral standard points to EN 62366-1.
- MDR Annex I General Safety and Performance Requirements sets the regulatory anchor for GSPR §5 and §22.
- Post-Market Surveillance Feedback Into Risk Management explains how post-launch use-related feedback re-enters the usability loop.
Sources
- Regulation (EU) 2017/745 on medical devices, consolidated text. Annex I GSPR §5, Annex I GSPR §22.
- EN 62366-1:2015+A1:2020, Medical devices, Part 1: Application of usability engineering to medical devices. Clauses 5.1 through 5.9.
- EN ISO 14971:2019+A11:2021, Medical devices, Application of risk management to medical devices.