Cybersecurity Risk Management for Medical Device and Health Software
Medical Devices
4/27/2025
Evolution of Requirements on the Statutory Level
The evolution of cybersecurity requirements for medical devices has been transformative, to say the least. Under the European Medical Device Directive (MDD), cybersecurity was addressed only indirectly, with a single sentence hinting at security considerations. In contrast, the European Medical Device Regulation (MDR), enforced directly in all EU countries since its full applicability in May 2021 (following its 2017 adoption), dedicates four sections to cybersecurity risk management.
This shift is further underscored by the Medical Device Coordination Group's MDCG 2019-16 guidance, which elaborates on cybersecurity requirements across 46 pages, offering detailed recommendations for manufacturers to ensure compliance and manage risks effectively.
Relevance of Industrial Automation and Control Systems
The guidance MDCG 2019-16 advocates a defense-in-depth design approach aligned with industrial best practices rooted in the IEC 62443 standards series for industrial automation and control systems (IACS), which emphasize the secure development lifecycle, system security, defense in depth, joint responsibility and establishing Security Levels (SL).
While MDCG 2019-16 does not directly reference IEC 62443, its principles are adapted for medical devices through complementary standards like IEC TR 60601-4-5 'Medical electrical equipment: Guidance and interpretation – Safety-related technical security specifications'.
This technical report bridges IEC 62443’s security frameworks to healthcare by defining medical device security capabilities such as audit controls, encryption, and system integrity checks.
Security levels enable the definition of IT security measures according to the required level of protection. IEC TR 60601-4-5 defines five levels, ranging from SL 0, where no security measures are implemented, to SL 4, representing the highest degree of protection. The standard highlights the responsibility of network operators and system integrators in establishing and maintaining the appropriate security level.
IEC TR 60601-4-5 specifies security capabilities that enable medical devices to be more easily integrated into a medical IT network environment at a given security level.
Medical Device Safety for Patients and Users
Norms for medical devices have to address safety in relation to information security. The latter entails Confidentiality of data, Availability of resources, and Integrity of systems or data, known as the "CIA Triad". In addition, identification and authentication control (IAC) and authenticity (as a second element of integrity) are also important foundational requirements.
Hence, IEC TR 60601-4-5 requires the implementation of safety-related elements, such as basic safety, essential performance to maintain the minimum necessary clinical function, and availability of the medical device, even with temporarily reduced IT functionality, in case of an attack on the IT network.
MDCG 2019-16 focuses on the practical implementation of these concepts through risk management strategies and layered protections, reflecting the growing convergence of industrial cybersecurity practices with medical device regulation. Key elements are:
- Adoption of a security risk management process
- Security as integral part of device safety and performance
- Clear allocation of responsibilities and training resources
- Vulnerability and patch management
- Threat modeling
Evolution of Standards in Medical Device Cybersecurity
Complementing these regulatory advancements, there have been major standardization efforts, extending the scope of cybersecurity requirements.
IEC 81001-5-1
The security standard IEC 81001-5-1 'Health software and health IT systems safety, effectiveness and security – Part 5-1: Security – Activities in the product life cycle', released in 2021, provides a comprehensive framework for integrating cybersecurity throughout the software lifecycle of medical devices and health apps, building on existing standards like IEC 62304 'Medical device software - Software life cycle processes' and ISO 14971 'Medical devices - Application of risk management to medical devices'.
IEC 81001-5-1 is best understood as an extension and supplement to IEC 62304, specifically addressing cybersecurity in the software lifecycle for health software and medical devices. The structure and process model of IEC 81001-5-1 closely align with IEC 62304, making it familiar for manufacturers and software engineers already working with IEC 62304.
The development of IEC 81001-5-1 again reflects the convergence of industrial cybersecurity frameworks with healthcare-specific needs, addressing vulnerabilities in connected medical technologies like insulin pumps, pacemakers, and diagnostic software.
IEC 81001-5-1 establishes cybersecurity requirements for medical devices, health software, and health IT systems across the entire device lifecycle, from development and deployment to Post-Market Surveillance (PMS) and maintenance. It overcomes a deficiency of IEC TR 60601-4-5, which did not cover PMS.
IEC 81001-5-1 mandates secure design practices, risk management, and vulnerability mitigation by integrating cybersecurity into every phase, including development, testing, updates, and decommissioning.
The IEC 81001-5-1 standard complements the MDR and MDCG 2019-16 by providing actionable guidance for implementing state-of-the-art security measures, comprising secure coding practices, third-party component validation, and threat modeling, among others. The standard emphasizes continuous monitoring, requiring manufacturers to address vulnerabilities during operation and to ensure healthcare providers receive clear security guidelines for safe deployment in the device's use environment.
Concrete Recommendations from IEC 81001-5-1
The manufacturer must connect the perspectives mentioned above through the intended use of the product and its security objectives: Confidentiality, Integrity, Availability (CIA). This means that every security measure (e.g., multi-factor authentication) must be balanced against the primary medical function of the device. In emergency scenarios, security objectives may be temporarily reduced as long as patient safety is ensured.
Specific Manufacturer Obligations include:
- Continuous monitoring of vulnerability databases and threat intelligence
- Structured preparation of security updates with consideration of CIA
- Proactive communication with operators about patch implementation and compensatory measures
- Integration of security activities into post-market surveillance (e.g., repeated penetration tests)
For legacy software, which can no longer be updated ("transitional health software"), Annex F of the standard outlines an escalation path:
- Technical countermeasures (e.g., closing ports, malware scans)
- Organizational adjustments (e.g., network segmentation, access logging)
- Restriction of intended use (e.g., deactivation of cloud functions)
- Functional reduction as a last resort
In terms of security testing, the standard differentiates between three levels:
- Security Capability Testing: Targeted testing of implemented protective mechanisms (e.g., encryption validation)
- Vulnerability Testing: Exploit-based attacks on identified vulnerabilities
- Penetration Testing: Hypothesis-free exploration of attack vectors by independent teams whose career progression must not depend on test results ("impartiality principle"). The standard requires timely repetitions but leaves interval definitions up to the manufacturer.
Critical Implementation Aspects
- A component vulnerability (e.g., in an open-source library) does not necessarily constitute a product vulnerability – what matters is whether the affected function is actually used in clinical practice.
- While not a normative requirement, creating a Software Bill of Materials (SBOM) is essential for efficient CVE monitoring.
- Annex C of the standard recommends threat modeling methods such as STRIDE or attack trees to systematically identify attack surfaces.
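As a sketch of how such a threat-modeling exercise can be seeded, the snippet below crosses each architectural element with the six STRIDE categories to produce a checklist of candidate threats; the element names are purely illustrative.

```python
# The six STRIDE threat categories (Microsoft's threat-modeling taxonomy).
STRIDE = (
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
)

def enumerate_threats(elements):
    """Cross every architectural element (interface, data flow, data store)
    with each STRIDE category to seed a checklist of candidate threats."""
    return [(element, category) for element in elements for category in STRIDE]
```

Each (element, category) pair is then assessed: is the threat applicable, what is its potential impact, and which control mitigates it?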
Finally, trust-based communication among all stakeholders is the foundation for improvement efforts: manufacturers must provide authorities with comprehensible security evidence, operators require clear operational instructions, and notified bodies must transparently document test methods. This approach necessitates lifecycle-wide integration of cybersecurity into quality management processes per ISO 13485.
Notably, IEC 81001-5-1 deliberately avoids prescribing specific processes and instead defines only required activities.
ANSI/AAMI SW96
Similarly, the US-driven ANSI/AAMI SW96 'Standard for medical device security - Security risk management for device manufacturers', published in 2023, emphasizes security risk management across the entire lifecycle of medical devices. SW96 elevates the two Technical Reports AAMI TIR57 (pre-market risk management) and AAMI TIR97 (post-market risk management) into enforceable requirements, moving beyond the "should" recommendations of the TIRs to "shall" obligations.
Collaboration and Convergence
This shift from recommendation to obligation illustrates the general direction of enforcement of cybersecurity requirements. As evidence of the convergence of US and European initiatives, IEC 81001-5-1, which advocates secure development lifecycle processes, complements the cybersecurity risk management framework of AAMI SW96.
Both standards were developed through collaboration between international experts, with IEC 81001-5-1 accepted by the FDA as a consensus standard for secure product development. IEC 81001-5-1 is under review by the EU to become harmonized with the MDR, expected in 2026. European legal manufacturers should adopt this standard, as it is already considered state of the art.
FDA and Cybersecurity for Medical Devices
The FDA has issued multiple guidances addressing both pre- and post-market cybersecurity considerations. During development, manufacturers must implement a Secure Product Development Framework (SPDF) to embed cybersecurity into device design, covering threat identification through a threat modeling approach, subsequent risk assessment, mitigation strategies, and security testing to preempt vulnerabilities impacting patient safety.
Pre-market submissions to the FDA require documented risk management plans demonstrating proactive threat analysis, mitigation measures, and testing outcomes to address potential device malfunctions or harm. This is highlighted in the FDA Cybersecurity Guidance 'Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions', with the latest update published in June 2025. A Software Bill of Materials (SBOM) lists all software components (including third-party/open-source), while devices must meet security objectives like confidentiality, integrity, availability, and patchability through verified design controls.
Finally, submissions must include evidence of vulnerability testing (e.g., penetration tests) and validation of security controls to ensure resilience against cyber threats.
The FDA Cybersecurity Guidance is well aligned with AAMI SW96 and fosters a proactive approach to threat mitigation.
Views
The FDA Guidance introduces the concept of Views, intended to structure the software architecture documentation:
- Global System View: Maps device interfaces, external connections (e.g., cloud, networks), and system boundaries.
- Multi-Patient Harm View: Assesses risks of simultaneous compromise across interconnected devices (e.g., hospital infusion pump networks).
- Updateability/Patchability View: Details end-to-end processes for secure software updates, including third-party dependencies.
- Security Use Case View(s): Identifies threat scenarios (e.g., unauthorized access, data breaches) and corresponding mitigations.
These Views are helpful when preparing diagrams of the system architecture, interfaces and communication protocols, as part of the pre-market submissions to the FDA.
Post-Market Surveillance in the US
The FDA expects medical device manufacturers to engage in several key post-market cybersecurity activities to ensure the ongoing safety and security of their devices. These include vulnerability monitoring, where manufacturers proactively track relevant information from threat intelligence and vulnerability databases. They can also utilize coordinated vulnerability disclosure mechanisms both to receive and report any vulnerabilities. Risk assessments must be conducted regularly to evaluate the impact and exploitability of identified vulnerabilities, with appropriate mitigations implemented to address risks.
Manufacturers are also required to provide timely software updates and patches, addressing both routine and critical vulnerabilities to maintain device security. Additionally, the FDA emphasizes the importance of incident response planning and information sharing with stakeholders, such as healthcare providers, to manage cybersecurity events effectively and collaboratively throughout the device's lifecycle.
Post-Market Surveillance in Europe
Under the EU MDR and the MDCG guidance, manufacturers must implement continuous cybersecurity monitoring as part of PMS. Key obligations include:
- Risk Management Updates: Regularly update cybersecurity risk assessments and mitigation strategies throughout the device lifecycle, addressing newly identified vulnerabilities or threats.
- Vulnerability Management: Proactively identify vulnerabilities from publicly available sources (e.g., ENISA alerts or CVSS scores from first.org) and conduct coordinated, risk-based patching/update activities.
- Incident Reporting: Report severe cyber incidents or actively exploited vulnerabilities impacting device security to the Competent Authorities and coordinated disclosure platforms.
- Software Bill of Materials (SBOM): While the MDR lacks explicit SBOM terminology, manufacturers must demonstrate equivalent traceability through risk management documentation. The SBOM has become a de facto requirement to comply with the MDR's cybersecurity obligations. Without detailed knowledge of the software components and libraries used in the medical device, vulnerability tracking and rapid response to exploits are not possible.
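The role of the SBOM in vulnerability tracking can be sketched as a simple lookup: components listed in the SBOM are matched against a feed of known-vulnerable versions. The advisory mapping below is a stand-in for a real vulnerability database.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """One SBOM entry: a software component pinned to an exact version."""
    name: str
    version: str

def match_vulnerabilities(sbom, advisories):
    """Return the SBOM components that appear in the advisory feed,
    mapped to their known CVE IDs."""
    return {
        component: advisories[(component.name, component.version)]
        for component in sbom
        if (component.name, component.version) in advisories
    }
```

For example, an SBOM listing OpenSSL 1.0.1f would be flagged for CVE-2014-0160 (Heartbleed) if the advisory feed contains that entry; without the SBOM, the manufacturer would not even know the lookup key.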
Cybersecurity as Part of Risk Management
In general, medical devices must be designed and manufactured to reduce risks, including cyber risks, as far as possible. This includes risks arising from the interaction between software and the IT environment. EU regulations explicitly require medical device manufacturers to implement risk management, which must address risks stemming from cyber threats.
For products that include software or standalone software, manufacturers must adhere to state-of-the-art practices during development and production. This involves following a defined software lifecycle, integrating information security into development, providing necessary information to maintain security during operation, and conducting appropriate verification and validation of the (software) product.
Key Differentiators between ISO 14971 and IEC 81001-5-1
- Security assessment strategy: IEC 81001-5-1 expects the type and frequency of the various activities (testing, code analysis, etc.) to be defined upfront. It also explicitly requires a review of security risk management throughout the entire lifecycle; ISO 14971 does not.
- Lifecycle Coverage: IEC 81001-5-1 and AAMI TIR97 explicitly extend risk management to software updates and decommissioning; ISO 14971 does not.
- Third-Party Focus: IEC 81001-5-1 requires assessing software supply chain risks (e.g., vulnerable libraries) using design requirements like SBOMs. AAMI TIR97 mandates monitoring third-party component vulnerabilities. ISO 14971 does not specifically mention the software supply chain.
- Interaction with safety risk analysis: IEC 81001-5-1 explicitly addresses the interaction of safety risk analysis with security risk management. A security risk or its successfully implemented control can create a new safety risk or elevate an existing one. For example, a security patch (control) could introduce safety risks (e.g., latency). Likewise, a safety risk control such as an uninterruptible power supply for the case of a power outage can introduce security risks.
It is important to maintain full traceability from each identified security vulnerability to its security risk analysis and evaluation, and to the implementation and verification of the control measures. A separate evaluation of the security residual risk is also mandated.
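Such traceability can be captured as one record per vulnerability. The sketch below is illustrative only; the field names and reference IDs are hypothetical and not prescribed by any standard.

```python
from dataclasses import dataclass, field

@dataclass
class SecurityRiskRecord:
    """Traceability record linking one vulnerability to its analysis,
    controls, verification evidence, and residual-risk decision
    (illustrative sketch; field names are hypothetical)."""
    vulnerability_id: str              # e.g. a CVE ID or internal finding ID
    risk_analysis_ref: str             # reference to the analysis and evaluation
    controls: list = field(default_factory=list)
    verification_ref: str = ""         # evidence that the controls work
    residual_risk_accepted: bool = False

    def traceability_complete(self) -> bool:
        """True only when every link in the chain is present."""
        return bool(self.risk_analysis_ref and self.controls
                    and self.verification_ref and self.residual_risk_accepted)
```

A record that fails `traceability_complete()` marks an open item for the security risk management review.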
Real-World Cybersecurity Challenges
To ground the discussion in real-world challenges, this section leverages findings from Germany's Federal Office for Information Security (BSI), whose "ManiMed" cybersecurity assessment of medical devices revealed critical vulnerabilities [1], especially in networked medical devices and those with supporting infrastructure such as app connectivity.
Penetration testing on marketed medical devices was performed for 10 devices from 5 different product categories. Here are two examples of vulnerabilities:
- Weak PINs and lack of authentication in devices such as insulin pumps that use Bluetooth Low Energy (BLE) to communicate with a smartphone app. An attacker with physical access to the pump who guesses (brute-forces) the PIN can unlock a locked pump, change the pump's configuration, and administer an insulin bolus, which may lead to serious patient harm. In some cases even guessing is not necessary, as the PIN can be eavesdropped by an adversary over an unauthenticated BLE interface.
- Missing certificate validation between the medical device and the surrounding infrastructure, on devices such as patient monitors. One patient monitor did not adequately check certificates for potential revocation, allowing attackers with access to a trusted certificate to obtain a man-in-the-middle position between the patient monitor and the server application. This position could be used to crash the patient monitor (denial of service) or possibly even modify transmitted data.
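The weak-PIN finding comes down to keyspace size: a short numeric PIN can be exhausted quickly even at modest guessing rates. A back-of-the-envelope sketch (the attack rate below is an assumption, not a measured value):

```python
def pin_keyspace(digits: int) -> int:
    """Number of possible numeric PINs of the given length."""
    return 10 ** digits

def worst_case_hours(digits: int, attempts_per_second: float) -> float:
    """Upper bound on the time needed to exhaust the whole PIN keyspace
    at an assumed guessing rate."""
    return pin_keyspace(digits) / attempts_per_second / 3600.0
```

At an assumed 10 guesses per second over BLE, a 4-digit PIN falls in well under an hour; rate limiting and lockout after a few failed attempts are the usual countermeasures.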
The New IEC/TS 81001-2-2:2025
This document updates IEC/TR 80001-2-2:2012 and describes modern security-related capabilities for health software and IT systems. Where its predecessor laid a foundation for hospitals and device manufacturers to address the risks of operating devices in IT networks, the update also covers cloud, on-premise, and hybrid environments. Its biggest contribution is to ensure that the same language is spoken among manufacturers, healthcare delivery organizations (HDOs), and IT vendors, all the way through the software life cycle.
Legacy devices
The medical devices that were investigated by the BSI [1] were brought to the market in 2014 or later. However, it is common that legacy devices such as CT or MRI scanners are initially operated for 8 years in a hospital, and are then sold on to third countries and still run there for another 8-10 years. The underlying operating systems (Microsoft or Linux) are not designed for such long-term support. This can lead to the exploitation of known vulnerabilities when patching is no longer possible.
Security Risk Management Plan (SRMP)
The AAMI SW96:2023 standard provides the most comprehensive and up-to-date framework for developing a Security Risk Management Plan (SRMP). This standard stipulates that the SRMP shall be established in accordance with a structured security risk management process that mirrors the ISO 14971 approach to patient safety, but with specific security considerations.
The process includes these key elements:
- Security risk analysis: Identifying assets, vulnerabilities, and potential threats across the device lifecycle.
- Security risk evaluation: Establishing assessment strategies and testing processes.
- Security risk control: Implementing and verifying effective security measures.
- Evaluation of overall security residual risk acceptability: Determining if remaining risks are acceptable
- Security risk management review: Ensuring the plan has been properly implemented.
AAMI SW96 mandates continuous plan revisions based on post-market surveillance data and requires verification of the completeness of all security risk control activities. Additionally, management must periodically review the suitability of the entire security risk management process, including the criteria established for security risk acceptance.
This structured approach ensures that security considerations are integrated throughout the entire medical device lifecycle, from design through post-market surveillance.
Common Vulnerability Scoring System (CVSS)
The severity of vulnerabilities is commonly assessed using the Common Vulnerability Scoring System (CVSS), an industry standard maintained by FIRST [2]. While CVSS incorporates temporal and environmental metrics, practitioners often rely solely on the Base Score, a static assessment of intrinsic vulnerability characteristics that remains consistent across time and environment.
The Base Score comprises the following metrics: Attack Vector (AV), Attack Complexity (AC), Privileges Required (PR), User Interaction (UI), Scope (S), Confidentiality (C), Integrity (I) and Availability (A). Values are assigned by estimation resulting in a vector string, e.g.: CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:C/C:L/I:L/A:H. Each metric value has a corresponding numeric value. These numeric values are used in a formula to calculate the Base Score, which can range from 0.0 to 10.0, where 0.0 means no severity (None) and 10.0 means the highest severity (Critical).
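The Base Score arithmetic can be sketched as follows. This follows the published CVSS v3.1 equations (temporal and environmental metrics omitted) and should be read as an illustration rather than a reference implementation.

```python
# CVSS v3.1 metric weights from the FIRST specification. PR weights
# differ when Scope is Changed.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},
    "PR_CHANGED": {"N": 0.85, "L": 0.68, "H": 0.5},
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(value: float) -> float:
    """Round up to one decimal place, as defined in the v3.1 specification."""
    i = int(round(value * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(vector: str) -> float:
    """Compute the Base Score from a vector string such as
    'CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:C/C:L/I:L/A:H'."""
    m = dict(part.split(":") for part in vector.split("/")[1:])
    changed = m["S"] == "C"
    pr = (WEIGHTS["PR_CHANGED"] if changed else WEIGHTS["PR"])[m["PR"]]
    exploitability = (8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                      * pr * WEIGHTS["UI"][m["UI"]])
    iss = 1 - ((1 - WEIGHTS["CIA"][m["C"]]) * (1 - WEIGHTS["CIA"][m["I"]])
               * (1 - WEIGHTS["CIA"][m["A"]]))
    if changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    raw = 1.08 * (impact + exploitability) if changed else impact + exploitability
    return roundup(min(raw, 10.0))
```

For the example vector above this yields 7.1 (High), driven mainly by the changed scope and the high availability impact.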
CISA Known Exploited Vulnerabilities (KEV) Catalog
The CISA KEV Catalog [3] represents a shift in vulnerability management from theoretical risk to actual, observed threats. By focusing on vulnerabilities with confirmed exploitation in the industry, it helps organizations prioritize remediation efforts based on real-world attack patterns rather than just CVSS severity scores:
- The KEV catalog focuses specifically on vulnerabilities that are actively exploited, rather than those based solely on CVSS or theoretical risks.
- It provides actionable information, including CVE IDs, descriptions of vulnerabilities, and recommended remediation steps.
Examples of vulnerabilities recently added to the KEV catalog include:
- CVE-2025-22457: A stack-based buffer overflow vulnerability affecting certain network gateways. This vulnerability poses significant risks due to its frequent exploitation by malicious actors.
- CVE-2024-6796: Improper Access Control in the Baxter Connex Health Portal, with a CVSS score of 8.2 (High). Unauthorized users could access sensitive patient/clinician data, or modify or delete clinic details, disrupting care workflows.
Security Risk Management Report (SRMR)
The Security Risk Management Report (SRMR) of AAMI SW96 is a critical document for medical device software security that guides implementation throughout the design process. The SRMR should contain:
- Ecosystem diagram showing topology with interfaces and data flows
- Threat modeling (e.g., STRIDE) to decompose and analyze the system
- Vulnerability scoring methodology (e.g., FIRST's CVSS v3.1)
- Prioritized vulnerabilities identified during threat modeling
- Safety and performance impact assessment of security risks
- Pre- and post-mitigation scores for each threat category
- Risk mitigation measures for prioritized cybersecurity risks
The risk mitigation section should include security-by-design elements such as:
- Hardware-based safety mechanisms (e.g., fail-safe modes in dialysis machines when sensors conflict)
- Hardening procedures for the operating system and commercial software
- Physical security controls
Additionally, Process Threat Modeling should address the entire lifecycle:
- Supply chain
- Manufacturing
- Deployment
- Maintenance/updates
- End-of-life considerations
This comprehensive approach ensures security is integrated throughout the device lifecycle, aligning with FDA guidance and industry standards for medical device cybersecurity.
Security Risk Management Plan (SRMP) Integration Requirements
The FDA allows a standalone SRMP but requires integration with the broader risk management process. Key points from FDA cybersecurity guidance:
- A security risk management report (e.g., per AAMI TIR57) is sufficient for premarket submissions.
- The SRMP must document traceability to safety risk management (ISO 14971) and other quality system elements.
- Security risks impacting safety (e.g., vulnerabilities causing patient harm) must be included in the overall risk management file.
ANSI/AAMI SW96 Requirements for the SRMP
The standard explicitly supports a separate but coordinated SRMP:
- Security risk management is a parallel process to safety risk management, with distinct steps (analysis, evaluation, control).
- Requires cross-communication between security and safety teams.
- A standalone SRMP is acceptable if it includes explicit linkages to the ISO 14971 risk management process
Similar statements can be made for the Security Risk Management Report.
Guidance of IEC TR 80002-1 for Risk Management for Medical Devices
IEC TR 80002-1 provides guidance on applying ISO 14971 (risk management for medical devices) to medical device software. It clarifies how software-related risks should be analyzed and controlled, emphasizing integration with system-level risk management.
(This document was published in 2009 and is now considered somewhat dated, especially in the fast-evolving field of medical device software risk management. It is still referenced and provides useful foundational guidance for applying risk management to medical device software. However, it is not regarded as fully “state of the art” in 2025, also because it does not explicitly cover intentional exploits such as SQL injection or security controls, e.g., encryption, authentication.)
The standard extends ISO 14971’s risk management framework to software, treating it as part of the broader medical device system. It focuses on safety-related risks arising from software anomalies.
The software itself is not a hazard, but it can contribute to hazardous situations through failures or unintended behaviors.
Anomalies and their Probability of Occurrence
Estimating the probability of software anomalies that might contribute to hazardous situations is inherently difficult. Software does not fail randomly due to wear and tear. Instead, software anomalies are systematic: they are inherent design or coding defects that remain latent until triggered under specific conditions.
As a result, software risk analysis focuses on identifying potential problematic functionality and anomalies that could lead to hazardous outcomes, rather than attempting to assign statistical probabilities. For software, risk assessment is driven by the severity of possible harm, since defects either exist or do not, unlike hardware failures, which are typically random and quantifiable by failure rates.
When systems consisted mainly of electromechanical components, thorough testing could identify and eliminate design errors before the devices were used in practice. Once in operation, most failures were random wearout events predictable by reliability engineering. In those cases, ensuring reliability generally meant that safety could also be assured.
Software design is abstracted from its physical realization. While the hardware on which the software is executed may fail, the design itself does not fail. Software by itself is not safe or unsafe. Safety depends on context, as we also saw in the Therac-25 example (see below).
According to IEC TR 80002-1 there are two types of anomalies:
- Unforeseen software responses to inputs (errors in the specification): Occurs when software behavior deviates from intended functionality due to incomplete, ambiguous, or incorrect requirements. Examples include missing edge cases in alarm logic, unanticipated user interaction sequences, passing of a bad argument to a function.
- Incorrect coding (errors in implementation): Arises from mistakes in translating specifications into executable code, such as improper error handling, race conditions, or buffer overflows. For example, the root-cause analysis [4] of the accidents caused by the radiation therapy device Therac-25, which harmed six people and resulted in deaths and serious injuries, revealed a coding error where operator inputs and machine states became desynchronized, producing an inconsistent state through a race condition.
Another example is a cardiac fluoroscopic X-ray system that continued emitting radiation when a hardware defect caused the image to freeze. The software's watchdog mechanism failed to detect the fault, leaving operators unaware of the danger. Yet another software defect was caused by the overflow of an 8-bit variable, so that the device switched from low- to high-energy radiation.
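A minimal illustration of the 8-bit overflow failure mode (not the actual device code): a status flag incremented with unsigned 8-bit arithmetic wraps to zero every 256th time, and an interlock that treats "non-zero" as "check required" is then silently skipped.

```python
def increment_u8(value: int) -> int:
    """Increment with unsigned 8-bit wraparound, as uint8_t arithmetic would."""
    return (value + 1) & 0xFF

def safety_check_required(flag: int) -> bool:
    """Flawed interlock pattern: a non-zero flag means 'perform the check'.
    When the flag wraps around to 0, the check is skipped without warning."""
    return flag != 0
```

On the 256th increment the flag reads 0 and the interlock is bypassed, matching the class of 8-bit overflow defect described above.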
Ann Marie Neufelder [5] points out, from her long-standing consulting experience across various industry segments of software products, that there are three basic reasons why software fails:
- The software specifications are missing crucial details
- The software specifications themselves are faulty and hence the code is faulty
- The software engineer may not always write code according to the specifications: some specifications aren't coded at all, and some are coded incorrectly
As Neufelder [5] writes: "Software failures stem from defects originating in specifications, design, code, or interfaces and surfacing only when specific conditions and inputs trigger them."
Extending IEC TR 80002-1 to Software Security Risk Management
While IEC TR 80002-1 does not explicitly address cybersecurity threats (e.g., malware, unauthorized access), its principles for analyzing and controlling software-related risks are indirectly relevant to cybersecurity.
It distinguishes between:
- Specification errors (unforeseen software responses to inputs due to incomplete/incorrect requirements)
- Implementation errors (coding flaws in translating specifications to code).
This necessitates:
- Requirements validation: Ensuring specifications align with clinical needs and safety constraints.
- Code verification: Using static/dynamic analysis, testing, and inspections to detect deviations from specifications.
- Treating security vulnerabilities as a subset of software anomalies (e.g., improper access control in a Health Portal, CVE-2024-6796)
- Applying secure-by-design principles (e.g., input validation, code verification) to mitigate risks from unforeseen inputs or implementation errors
Integration with Cybersecurity Standards
- IEC TR 80002-1 is often used alongside standards like IEC 62443 and IEC 81001-5-1, which explicitly address cybersecurity controls such as network segmentation, encryption, and threat modeling.
- For example, FDA guidance references IEC TR 80002-1’s anomaly taxonomy (specification vs. implementation errors) when evaluating risks from software changes affecting input handling or low-level code.
Practical Implications
- Risk Control Measures: Hardware-based mitigations (e.g., cryptographic modules) and secure coding practices (e.g., static analysis) align with IEC TR 80002-1’s emphasis on verifying software integrity.
- Post-Market Surveillance: Continuous monitoring of vulnerabilities (e.g., via CISA KEV Catalog) complements the standard’s requirement to update risk assessments based on field data.
The IEC TR 80002-1 framework for systematic risk analysis provides a basis for integrating security-specific controls into medical device software development. Manufacturers should combine it with cybersecurity standards and threat modeling (e.g., STRIDE) for comprehensive risk management.
Fault Tolerance in Medical Device Software
Fault tolerance is a key attribute of dependable medical device software. Devices must provide:
- Essential Performance Preservation: Critical functions (e.g., infusion pump flow rate, defibrillator energy delivery) must remain within safe parameters even during a fault.
- Self-Testing Capabilities: Automated checks for failures (e.g., watchdog timers, memory tests).
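A minimal sketch of the two self-testing mechanisms named above, a software watchdog and a memory integrity check; the timeout, firmware bytes, and names are illustrative assumptions:

```python
import time
import zlib

class Watchdog:
    """Software watchdog sketch: the control loop must call kick() at
    least once per timeout period, otherwise expired() reports a fault."""
    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_kick = time.monotonic()

    def kick(self) -> None:
        # Called by the control loop to signal liveness.
        self.last_kick = time.monotonic()

    def expired(self) -> bool:
        # True once the deadline has been missed.
        return time.monotonic() - self.last_kick > self.timeout_s

def memory_test(region: bytes, expected_crc: int) -> bool:
    """Memory self-test sketch: compare a CRC32 of a code/data region
    against a reference value (computed at build time in a real device)."""
    return zlib.crc32(region) == expected_crc

FIRMWARE = b"illustrative firmware image"   # stand-in for a flash region
REFERENCE_CRC = zlib.crc32(FIRMWARE)        # stored reference checksum
```

In a real device the watchdog is typically a hardware timer that forces a reset or a safe state; the software version above only detects the missed deadline.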
Single-Fault Safety
The concept of single-fault safety in medical device risk management ensures devices remain safe even when a critical component fails or an abnormal condition occurs. Rooted in standards like IEC 60601-1 and ISO 14971, it addresses both hardware and software failures to prevent unacceptable risks to patients or users:
- Single-Fault Condition: A scenario in which a single protective measure fails (e.g., sensor or processor malfunction) or a single external abnormal condition is present (e.g., power surge).
- Single-Fault Safe: A device maintains safety (no unacceptable risk) during its expected service life despite such a fault.
- Example of a mitigated risk: A processor failure (e.g., memory corruption) must not cause an overdose.
Implementation Strategies for Single-Fault Safety
- Redundancy: One implementation is a design with primary and secondary processors running identical control algorithms (e.g., redundant pressure sensors in dialysis machines); voting logic detects a mismatch and drives the system to a safe state.
- Fail-Safe Mechanisms: Mechanical overrides (e.g., hardware-enforced venous line clamps).
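The redundancy-with-voting pattern described above can be sketched as follows; the tolerance, safe-state name, and channel values are illustrative assumptions:

```python
# Hypothetical sketch of two-channel redundancy with voting: two channels
# compute the same value; a comparator accepts their agreement or drives
# the system to a safe state on mismatch.

SAFE_STATE = "PUMP_STOPPED"  # assumed safe state for this illustration

def vote(primary: float, secondary: float, tolerance: float = 0.05):
    """Return ("OK", agreed value) if the channels agree within tolerance,
    else ("FAULT", safe state). Tolerance absorbs normal sensor noise."""
    if abs(primary - secondary) <= tolerance:
        return ("OK", (primary + secondary) / 2)
    return ("FAULT", SAFE_STATE)
```

With a single faulty channel the mismatch is detected and the system stops; detecting *which* channel failed would require a third channel (2-out-of-3 voting), which is a common refinement of this pattern.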
Compliance Challenges
- Latent Faults: Undetected failures (e.g., silent memory corruption) require probabilistic analyses to estimate failure rates.
- Complex Systems: Devices like MRI scanners demand layered protections due to interconnected subsystems.
- Documentation: Manufacturers must provide documented evidence of single-fault safety.
Usability-Related Risks
The Therac-25 case [4] illustrates how poor usability design and neglect of real-world operator workflows can lead to catastrophic outcomes. The manufacturer’s interface design ignored critical aspects of the use context, creating a mismatch between operator expectations and system behavior:
- Cumbersome Data Entry: Operators had to manually enter treatment parameters (e.g., mode, energy level, dose) into a terminal, cross-checking them against physical machine settings.
- Ambiguous Feedback and Situation Awareness: The interface displayed cryptic error messages like “Malfunction 54” without clear explanations. Operators routinely encountered such errors (5–10 per day) and treated them as low-priority “false alarms,” unaware they could indicate lethal overdoses.
- Lack of Safety-Critical Feedback: The interface provided no real-time verification of hardware states (e.g., turntable position) or software-controlled safety interlocks. Operators could not confirm whether the machine’s physical configuration matched the software settings.
Human Factors Engineering Today: Preventing Past Mistakes
Modern Human Factors Engineering (HFE) employs rigorous methodologies to avert the types of usability catastrophes seen in the Therac-25. Key elements include:
- Formative and Summative Usability Testing: Formative testing occurs throughout the design process to identify and rectify usability issues early; methods include heuristic evaluation, cognitive walkthroughs, and think-aloud protocols with representative users. Summative testing validates the final design’s safety and effectiveness through simulated-use studies with high-fidelity prototypes, ensuring the device meets user needs without introducing new hazards.
- User-Centered Design Principles: Prioritizes intuitive interfaces with clear feedback, explicit error messages, and visual indicators of system status. It emphasizes workload management and situation awareness, reducing the cognitive burden on operators.
- Defensive Design: Elements that prevent unsafe actions and detect potential errors.
These HFE practices, grounded in evidence-based research, help minimize the likelihood of usability-related accidents in safety-critical medical devices.
Why Software Defects Lead to Cybersecurity Risks
The key to integrating cybersecurity and software risk management lies in recognizing that software defects can be exploited to cause unintended behavior and provide attackers with pathways to compromise system security. Here’s how these domains intersect:
- Vulnerabilities as the Common Ground: Both cybersecurity and software risk management aim to identify and mitigate vulnerabilities. In software risk management, a vulnerability is a flaw in the code that can lead to system failure or unintended behavior (e.g., the race conditions in the Therac-25 that led to overdoses). In cybersecurity, a vulnerability is a weakness that attackers can exploit to gain unauthorized access, disrupt operations, or steal data (e.g., buffer overflows that allow remote code execution). Error-handling routines, for example, are a frequent source of failures, and such failures can be exploited.
- Software Anomalies as Attack Vectors: Software anomalies (bugs, defects, errors) can create entry points for cyberattacks. An attacker can exploit a software anomaly to bypass security controls, inject malicious code, or gain elevated privileges. A buffer overflow in a medical device's communication software could be exploited to take control of the device remotely, potentially delivering incorrect dosages or disrupting its operation. Anomalies also affect confidentiality, integrity, and availability of the device.
- Risk Assessment Synergies: Traditional software risk assessments (e.g., Hazard Analysis, FMEA) should be expanded to include cybersecurity threats. Cybersecurity risk assessments (e.g., threat modeling, vulnerability scanning) should consider the potential impact on software functions and safety. Ideally, the two methodologies are combined into a single, integrated assessment.
- Integrated Risk Mitigation Strategies: Security controls (e.g., authentication, authorization, encryption) can prevent attackers from exploiting software vulnerabilities. Software engineering best practices (e.g., secure coding, code review, static analysis) can reduce the likelihood of introducing vulnerabilities in the first place, by implementing defense-in-depth strategies that address both functional and cybersecurity risks.
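One way to picture the integrated assessment the bullets above call for is a single record that carries both the safety view and the security view of the same software anomaly. The scoring scheme and field names below are illustrative assumptions, not prescribed by any cited standard:

```python
# Hypothetical sketch: one risk record combining the safety view
# (severity of harm) and the security view (exploitability) of the same
# underlying software anomaly, so one mitigation list serves both.
from dataclasses import dataclass, field

@dataclass
class RiskItem:
    anomaly: str            # the underlying software defect
    harm_severity: int      # safety view, 1 (negligible) .. 5 (catastrophic)
    exploitability: int     # security view, 1 (hard) .. 5 (trivial)
    mitigations: list = field(default_factory=list)

    def priority(self) -> int:
        # Illustrative scheme: whichever domain scores worse dominates,
        # so neither safety nor security can mask the other.
        return max(self.harm_severity, self.exploitability)

item = RiskItem("buffer overflow in comms parser",
                harm_severity=5, exploitability=4,
                mitigations=["bounds checks", "fuzz testing"])
```

Real programs would replace the ad-hoc scores with the scales their safety (e.g., ISO 14971) and security (e.g., CVSS) processes already use; the point is only that both views attach to one anomaly.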
Safety Risk Analysis vs. Security Risk Analysis
Practically, external attacks on devices used in clinical settings are considered more likely than attacks on underlying software platforms (e.g., Windows) or the software build processes. Particular attention is given to Commercial Off-The-Shelf (COTS) software, whose known vulnerabilities are insufficiently addressed in MDCG Guidance 2019-16. The guidance does differentiate between attack scenarios originating "from outside" versus "from inside."
There are two complementary risk analyses, conceptually:
- Safety Risk Analysis (Inside-Out Perspective): this analysis investigates how device malfunctions (e.g., software bugs) could cause harm to patients or the environment. Example: Could a defective algorithm in an infusion pump lead to overdosing?
- Security Risk Analysis (Outside-In Perspective): this analysis examines how external factors (e.g., user errors, targeted cyberattacks) could cause unintended changes to the device. Example: Could a weakly secured maintenance port for the service technician allow manipulation of CT control parameters?
Why Complete Security Isn’t Achievable
- Evolving Threats: Cybersecurity risks constantly change due to new attack vectors, vulnerabilities, and advanced adversaries. Software also ages: as new vulnerabilities are discovered in previously trusted components, devices become more exposed over time.
- Resource Constraints: Over-securing a device can compromise usability, cost-effectiveness, or clinical functionality.
- Zero-Day Vulnerabilities: Unknown flaws in software/hardware make absolute protection impossible.
Summary
This article examines the evolving landscape of cybersecurity risk management for medical devices and health software, highlighting how recent regulations and international standards, such as IEC 81001-5-1, IEC TR 60601-4-5, and AAMI SW96, drive integrated risk mitigation across the entire product lifecycle.
Key industry practices, including secure development, SBOM traceability, vulnerability management, and post-market surveillance, are discussed. Real-world case studies and regulatory requirements illustrate the convergence of safety and security and emphasize the importance of harmonizing risk management strategies.
Readers gain actionable insights into lifecycle-wide cybersecurity, practical compliance, and the challenges posed by legacy systems and new threats. The article helps readers navigate the technical, regulatory, and operational demands of building trustworthy and resilient medical technologies.
Sources:
[1] Cyber Security Review of Network-Connected Medical Devices - BSI-Project 392: Manipulation of Medical Devices (ManiMed), 2020, https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/DigitaleGesellschaft/ManiMed_Abschlussbericht_EN.html
[2] Forum of Incident Response and Security Teams (FIRST), https://www.first.org/cvss/v4-0/
[3] CISA Known Exploited Vulnerabilities (KEV), https://www.cisa.gov/known-exploited-vulnerabilities-catalog
[4] An investigation of the Therac-25 accidents, by Nancy Leveson and Clark S. Turner, IEEE Computer, July 1993
[5] Effective Application of Software Failure Modes Effects Analysis, Ann Marie Neufelder, 2nd Edition, Softrel LLC, 2017, Quanterion Solutions Inc.
A recommended list of relevant documents (standards, technical reports, etc.), collected by Nadica H. Lechner, can be found here: https://www.linkedin.com/pulse/medical-device-cybersecurity-standards-technical-hrgarek-lechner/
A recommended list of books and articles, collected by Nadica H. Lechner, can be found here: https://www.linkedin.com/pulse/medical-device-cybersecurity-books-publications-hrgarek-lechner/