Artificial Intelligence in Medical Devices - Part 5 Cybersecurity
By Mitch on Friday, 21 November 2025, 13:59 - Processes - Permalink
After safety risks in the previous post, we continue this series of posts on AI with cybersecurity risks.
Cybersecurity is a vast subject. It can be separated into security at IT level (the organization) and security at product level, in our case the medical device. Product-level security can itself be split into the training / validation / test phases before release, and the deployment / use phases after release.
Remark: the data and model cybersecurity requirement is found in paragraph 5 of Article 15 of the AI Act.
What doesn't change and what changes
AI or not AI, some characteristics remain:
- Safety impact: while a loss of confidentiality or availability is a concern, it doesn’t necessarily lead to safety risks. Conversely, a loss of integrity will most probably lead to a safety risk.
- Cybersecurity risk management process and threat modelling: whether for IT security risks or for product security risks, the processes and methods don't change: ISO 27005 (or another framework) for IT and AAMI SW96 for medical device products.
Conversely, AI brings some new items, hence new challenges:
- Assets: the presence of vast amounts of data, and the presence of AI models.
- Attack scenarios exploit the unique statistical and data-based nature of ML systems. Thus, AI models deserve their own taxonomy:
  - For security experts, MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a tool similar to the MITRE ATT&CK® framework, used to identify possible attack scenarios on AI systems.
  - The NIST AI 100-2e2023 Adversarial Machine Learning document, by the National Institute of Standards and Technology (NIST), defines a possible taxonomy of attack methods. New attack scenarios are continuously discovered; new adversarial techniques may be documented in future updates of this document.
  - Simpler than the NIST document, the OWASP Machine Learning Security Top Ten lists the top 10 security issues of machine learning systems.
In particular, the NIST document on Adversarial Machine Learning describes what an attacker might do:
- During the training stage: taking control of assets,
  - the training data, their labels, the model parameters, or the code of the ML algorithms,
  - resulting in an impact on integrity, a technique called data or model poisoning;
- During the production stage: sending crafted data to the AI model,
  - resulting in an impact on data or model confidentiality or privacy, with techniques called membership inference and data reconstruction, and techniques called model extraction or prompt injection for generative AI,
  - or resulting in an impact on model output data integrity, with techniques called model evasion and model abuse, or prompt injection (again) for generative AI (see the evasion sketch after this list);
- Last, during the production stage: flooding the AI model with multiple queries,
  - resulting in an impact on availability: the classical denial of service (DoS), some more specific techniques like energy-latency DoS, or prompt injection for generative AI.
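To make the evasion scenario more concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest model-evasion techniques in the adversarial ML literature. It assumes PyTorch and a hypothetical, already-trained image classifier `model`; it is an illustration of the technique, not a recipe taken from the NIST document.

```python
# Minimal FGSM model-evasion sketch (illustrative only).
# Assumes PyTorch and a hypothetical, already-trained classifier `model`.
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, x, true_label, epsilon=0.01):
    """Craft an adversarial input by nudging x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), true_label)
    loss.backward()
    # A small perturbation, often imperceptible, can be enough to flip the prediction.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep the image in its valid range

# Hypothetical usage:
# x, y = next(iter(test_loader))                       # a batch of images and labels
# x_adv = fgsm_adversarial_example(model, x, y)
# print(model(x).argmax(1), model(x_adv).argmax(1))    # predictions may differ
```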
IT-level risks
These risks won't necessarily be present in a product risk management file. They should rather be placed in an IT-system risk management file or even in a broader organisation-level risk management file.
These are the risks that a breach in the IT system leads to an impact on AI data or AI models. Data can be training, AI validation, test, or performance-monitoring data.
Since the breach happens in the IT system, it may happen on:
- A development platform,
- A production platform,
- The maintenance / back-office management / administrative IT system.
Since we are at the IT level, the risks are on the infrastructure, not the products. Thus, the mitigation actions are at IT level as well, starting with a cybersecurity policy put in place by the manufacturer.
Information security management system (ISMS)
The only way to fully manage these risks on data and model designs stored in the IT system of the manufacturer (or of some contractor) is to apply an ISMS framework: ISO 27001, SOC 2, the NIS2 Technical Implementation Guidance, the CIS Controls, or others.
An embryonic ISMS can be put in place by following the requirements set out in these IEC 81001-5-1 clauses:
- 4.1 Quality Management,
- 5.1 Development environment security, and
- 5.8.4 Controls for private keys (a minimal artifact-signing sketch follows this list).
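As an illustration of what clauses 5.1 and 5.8.4 can translate into in practice, here is a minimal sketch of signing a model artifact on the development platform and verifying it before deployment, using the `cryptography` package. File names are hypothetical, and real key storage (HSM, vault) is out of scope of this sketch.

```python
# Minimal sketch: sign a model artifact at build time, verify it before deployment.
# Assumes the `cryptography` package; paths and key handling are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_artifact(private_key: Ed25519PrivateKey, artifact_path: str) -> bytes:
    """Return a detached Ed25519 signature of the model artifact."""
    with open(artifact_path, "rb") as f:
        return private_key.sign(f.read())

def verify_artifact(public_key, artifact_path: str, signature: bytes) -> bool:
    """Verify the artifact against its signature; False means it was altered."""
    with open(artifact_path, "rb") as f:
        data = f.read()
    try:
        public_key.verify(signature, data)
        return True
    except InvalidSignature:
        return False

# Hypothetical usage on the development platform:
# key = Ed25519PrivateKey.generate()            # in practice, kept in an HSM or vault
# sig = sign_artifact(key, "model_v1.onnx")
# assert verify_artifact(key.public_key(), "model_v1.onnx", sig)
```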
On the product itself, new cybersecurity risks emerge from AI models. We've seen above that lots of attack scenarios are already well defined for AI models, including generative AI and LLMs. From here on, we leave IT-level risks and consider product-level risks in the different phases of the medical device lifecycle.
Risks in training / AI validation and test phase
Before release, the risks can be identified from the possible adversarial attack methods identified in the NIST document. These attacks are based on the capabilities of the attacker to control assets (model parameters, source code, training data, labels, and test data) seen above.
For example, on the development platform:
Confidentiality: attackers managed to get access to data or models (model design and data, stored on the development platform).
- They may disclose or sell data or ransom the data holder (the manufacturer or data provider) not to do so.
- No impact on safety, but possible economic consequences if the manufacturer was holding personal health data used, e.g., for AI model tests.
- Possible impact on device availability if the recovery actions on the IT system require disconnecting the device in production (but this is not linked to the presence of AI).
- Mitigation: for such a scenario, mitigations are more at IT level (securing the assets on the development platform) than at product level (product specifications) or at product design process level (software design SOP); a minimal encryption-at-rest sketch follows.
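As a small illustration of securing the assets on the development platform, here is a sketch of encrypting a training-data archive at rest with the `cryptography` package (Fernet symmetric encryption). Paths and key handling are placeholders; real deployments would rely on disk or database encryption and a proper key management service rather than an ad-hoc script.

```python
# Minimal sketch: encrypt a training-data archive at rest on the development platform.
# Assumes the `cryptography` package; paths and key handling are hypothetical.
from cryptography.fernet import Fernet

def encrypt_file(key: bytes, src: str, dst: str) -> None:
    """Encrypt src into dst with symmetric (Fernet) encryption."""
    with open(src, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(dst, "wb") as f:
        f.write(ciphertext)

def decrypt_file(key: bytes, src: str, dst: str) -> None:
    """Decrypt an encrypted archive back to plaintext (e.g. inside a secured environment)."""
    with open(src, "rb") as f:
        plaintext = Fernet(key).decrypt(f.read())
    with open(dst, "wb") as f:
        f.write(plaintext)

# Hypothetical usage:
# key = Fernet.generate_key()                   # in practice, stored in a key vault
# encrypt_file(key, "training_set.tar", "training_set.tar.enc")
```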
Availability: attackers erased data either at will or as a side effect of their attack:
- The manufacturer may not be able to release a new product or a new version of their product.
- An impact on safety might be possible if a security or safety fix is required, but the manufacturer is unable to release it.
- Mitigation: at IT level, data archiving, data replication.
- Mitigation: at product design SOP level, data sanitization steps and a design review during the design process might be a way to catch such problems.
Integrity: the attacker corrupted models or data, either at will (e.g. a supply-chain attack or, more specific to AI, data poisoning or model poisoning) or as a side effect of their attack.
- If the corruption isn’t detected, this may lead to an erroneous product on the market and a significant risk to patient safety.
- Mitigation: once again, for such a scenario, mitigations at IT level, like data archiving, are a first layer of protection.
- Mitigation: a second layer of protection can be added at product design SOP level: one or more data sanitization steps or design reviews to check that the data or model haven't been corrupted (a minimal integrity-check sketch follows).
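As an example of what such a verification step could look like, here is a minimal sketch that builds a SHA-256 manifest of the training data and re-checks it before a training or release run, so that silent corruption or tampering of files is at least detectable. The directory layout and manifest format are assumptions for illustration.

```python
# Minimal sketch: detect silent corruption of training data with a SHA-256 manifest.
# Directory layout and manifest format are assumptions for illustration.
import hashlib
import json
from pathlib import Path

def build_manifest(data_dir: str) -> dict:
    """Map every file under data_dir to its SHA-256 digest."""
    manifest = {}
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def verify_manifest(data_dir: str, manifest_path: str) -> list:
    """Return the files whose current digest differs from the recorded one."""
    recorded = json.loads(Path(manifest_path).read_text())
    current = build_manifest(data_dir)
    return [p for p, digest in recorded.items() if current.get(p) != digest]

# Hypothetical usage in a pre-training or pre-release check:
# Path("manifest.json").write_text(json.dumps(build_manifest("training_data")))
# ...later, before training or release...
# tampered = verify_manifest("training_data", "manifest.json")
# assert not tampered, f"Corrupted or modified files: {tampered}"
```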
Remarks:
- The economic impact isn't fully discussed in the above examples. Only the safety impact is fully discussed.
- Risks on data can be exacerbated by the amount of data, and the nature of data with potential privacy impact (fully identifiable, pseudonymised or anonymised).
- The line may be blurred between IT-level risks and product-level risks, especially for supply-chain attacks. Such risks may be present in the product cyber risk management file or the QMS process risk management file or the ISMS risk management file.
We've also seen in the above discussion that mitigations are more at IT level (securing the development platform) or at process level (adding reviews and checks in the design process) than at product level (e.g. product security specifications or test cases specific to the product interfaces). This assertion holds as long as AI models in medical devices are locked in production: data poisoning in production is unlikely to happen if no continuous training is possible.
Risks in production / use phase
After release, the risks can be identified from the possible adversarial attack methods identified in the NIST document. These attacks are based on the capabilities of the attacker to query the system in a crafted and adversarial manner.
For example, in production:
Confidentiality: attackers managed to get access to data or models (privacy leaks through techniques like model extraction, model inversion, or prompt injection).
- They may disclose or sell data or ransom the data holder (the manufacturer or data provider) not to do so.
- No impact on safety.
- Possible impact on device availability if the recovery actions require disconnecting the device.
- Mitigations: they can be placed at product level, with safeguards in the pipeline around the AI model, or, when the medical device is a module of a bigger health software platform, in the non-regulated health software part (a minimal output-hardening sketch follows).
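One family of pipeline safeguards against extraction and inversion attacks is simply to return less information per query. Here is a minimal sketch, assuming a hypothetical classifier exposing a `predict_proba` function: the wrapper returns only the top-1 label and a coarsened confidence instead of the full probability vector.

```python
# Minimal sketch: harden the model output returned to callers (less information per query).
# `predict_proba` is a hypothetical function returning a probability vector per input.
import numpy as np

LABELS = ["benign", "suspicious", "malignant"]   # hypothetical classes

def hardened_prediction(predict_proba, x, confidence_step=0.1):
    """Return only the top-1 label and a coarsened confidence, not the full vector."""
    proba = np.asarray(predict_proba(x)).ravel()
    top = int(proba.argmax())
    # Rounding the confidence removes the fine-grained signal that extraction and
    # inversion attacks rely on, while keeping the output clinically usable.
    coarse_conf = round(float(proba[top]) / confidence_step) * confidence_step
    return {"label": LABELS[top], "confidence": coarse_conf}

# Hypothetical usage:
# result = hardened_prediction(model.predict_proba, patient_image)
# -> {"label": "suspicious", "confidence": 0.8}
```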
Availability: attackers send DoS attacks, such as an energy-latency attack.
- The device may be unavailable.
- Usually, no direct impact on safety, unless the unavailable device is used in critical care.
- Side effect: it may not be possible to monitor the device performance.
- An impact on safety might be possible if performance monitoring is unavailable.
- Mitigations: as for confidentiality, mitigations can be inside or outside the medical device. We would prefer a mitigation in the device pipeline if the attack scenario can lead to a safety impact (a minimal rate-limiting sketch follows).
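A common safeguard in the pipeline (or in the surrounding, non-regulated platform) is per-client rate limiting in front of the model endpoint. Here is a minimal token-bucket sketch in plain Python; the client identifiers and limits are placeholders.

```python
# Minimal sketch: token-bucket rate limiting in front of the AI model endpoint.
# Client identifiers and limits are placeholders for illustration.
import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate_per_s: float = 5.0, burst: int = 10):
        self.rate, self.burst = rate_per_s, burst
        self.tokens = defaultdict(lambda: burst)    # per-client token count
        self.updated = defaultdict(time.monotonic)  # per-client last refill time

    def allow(self, client_id: str) -> bool:
        """Return True if this client's query may reach the model, False to reject it."""
        now = time.monotonic()
        elapsed = now - self.updated[client_id]
        self.updated[client_id] = now
        # Refill tokens proportionally to elapsed time, capped at the burst size.
        self.tokens[client_id] = min(self.burst, self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1.0:
            self.tokens[client_id] -= 1.0
            return True
        return False

# Hypothetical usage in the inference service:
# limiter = TokenBucket(rate_per_s=2, burst=5)
# if limiter.allow(request.client_id):
#     response = model(request.payload)
# else:
#     response = "429 Too Many Requests"
```

Such a limiter also caps the query budget available to an attacker for extraction or energy-latency attacks.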
Integrity: the attacker corrupts models or data, either at will or as a side effect of their attack; or the attacker sends crafted adversarial data able to fool the AI model (techniques like model evasion, model abuse, or prompt injection).
- Corrupting models or data with techniques like data poisoning or model poisoning isn't likely in the context of locked models used in medical devices. However, some vulnerabilities to do so might exist even if the model is supposed to be locked (imagine an AI model capable of continuous learning in production, with only a single boolean set to false to lock it...).
- Crafting adversarial examples is an attack scenario more likely with locked models.
- If the corruption isn’t detected, this may lead to an erroneous product on the market and a significant risk to patient safety.
- Mitigations: as in the availability case above, mitigations can be inside or outside the medical device, and we would strongly prefer a mitigation in the device pipeline if the attack scenario can lead to a safety impact. One example of a mitigation in the pipeline would be a subnetwork trained to detect adversarial examples (a minimal pipeline-safeguard sketch follows these bullets).
- It is also possible to have mitigations in the training of the AI model, by adding adversarial examples to the training dataset in order to make the AI model robust to adversarial attacks. But such techniques are still the subject of research and can also decrease model accuracy.
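Not a detector subnetwork, but in the same spirit, here is a minimal sketch of a pipeline safeguard that rejects inputs outside the validated envelope and routes low-confidence outputs to human review instead of acting on them automatically. The thresholds, the expected input shape and the `predict_proba` function are assumptions.

```python
# Minimal sketch: pipeline safeguards around a locked model in production.
# Thresholds, input envelope and the hypothetical `predict_proba` function are assumptions.
import numpy as np

def plausible_input(x: np.ndarray, expected_shape=(1, 256, 256), value_range=(0.0, 1.0)) -> bool:
    """Reject inputs outside the envelope seen during validation (shape, value range, NaNs)."""
    return (
        x.shape == expected_shape
        and np.isfinite(x).all()
        and value_range[0] <= x.min()
        and x.max() <= value_range[1]
    )

def guarded_inference(predict_proba, x, min_confidence=0.9):
    """Run the model only on plausible inputs and flag low-confidence outputs for review."""
    if not plausible_input(x):
        return {"status": "rejected", "reason": "input outside validated envelope"}
    proba = np.asarray(predict_proba(x)).ravel()
    if proba.max() < min_confidence:
        return {"status": "human_review", "proba": proba.tolist()}
    return {"status": "ok", "label": int(proba.argmax())}

# Hypothetical usage:
# outcome = guarded_inference(model.predict_proba, image)
# if outcome["status"] != "ok":
#     escalate_to_clinician(outcome)   # hypothetical escalation path
```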
Methods to identify risks
As usual with risk management, the biggest challenge is to be confident in the correct identification of all significant risks in your risk management file. For AI cybersecurity risks, the methods available to identify risks include:
- The aforementioned NIST Adversarial Machine Learning document,
- The OWASP Machine Learning Security Top Ten:
  - It's a good way to start, trying to assess the likelihood of these most frequent scenarios in your device,
  - It has the immense advantage of giving possible mitigation actions for each of the 10 scenarios,
- Likewise, the OWASP Top 10 for LLM Applications is oriented towards threats specific to LLMs.
Mitigation actions
AAMI SW96 clause 7.1, Security risk control option analysis, prefers inherent security by design to mitigation actions added to the medical device or information given to the user. This is rarely the case with AI models, as already discussed in the previous post on safety risks in MDAI.
We've seen above that this concern is repeated for security risk mitigation actions:
- Data sanitization, design reviews, and adversarial test cases (e.g. defensive distillation, which unfortunately comes at the price of high computational cost and decreased model performance) can merely be seen as security by design at design process level, because there is rarely an option to add something into the product design specifications,
- Some safeguards on model inputs or on model outputs may be seen as inherent security by design, but the effectiveness of such measures is called into question by the NIST document,
- Mitigations in the IT system can be seen as mitigation actions added to the device or to the manufacturing process,
- A last type of mitigation action is the surveillance of AI model performance. Though necessary to detect problems in production, it cannot detect problems before they arise. Surveillance is definitely not inherent security by design, but rather something closer to a protective measure added to the manufacturing process.
As usual in risk management, there is no silver bullet for AI attack scenarios. The effectiveness of mitigation actions in preventing threat scenarios is generally called into question in the current state of the art (late 2025). Thus, for cybersecurity risks with a safety impact, like model evasion or model abuse, the poor effectiveness of mitigation actions (once again, in the current state of the art) makes it questionable to incorporate AI models in safety-critical medical devices.
Fortunately, and this is the case for lots of medical devices, AI models can be used in medical devices where the AI-related risks have a low or moderate profile.
Conclusion
Security risk management for AI in medical devices is not so far from what we know for classical software, at least in the method. It is, however, radically different in the possible attack scenarios. Hence, it requires people with a background in adversarial AI techniques to manage AI security risks the right way.
Up to now, there is no established standard or guidance specific to medical devices on this subject. This is why we referred to documents mainly from NIST and OWASP.
No draft or project of such a standard is present in any of the well-known repositories (IEC, AAMI, FDA...). This may change in the near future as the state of the art is being built.
