TUV.AI Lab whitepaper on ISO 14971 - mind the gap ... or not
By Mitch on Friday, 9 January 2026, 14:09 - Misc
In November 2025, the TUV.AI Lab published a white paper on the gaps between ISO 14971 and the risk management requirements of Article 9 of the AI Act.
Comparison of AI Act article 9 and ISO 14971
The main interest of this document, and the little gem it contains, is the table showing the gaps between risk management from the AI Act viewpoint and from the ISO 14971 viewpoint. The work is similar to the comparison of ISO 13485 and the AI Act already discussed here.
The comparison reveals that both risk management frameworks are quite similar. Taking the paragraphs of AI Act Article 9 one by one:
- 2 gaps: (i) the interplay with other AI Act requirements and (ii) real-world testing,
- 5 AI Act requirements partially covered by ISO 14971,
- 10 AI Act requirements fully covered by ISO 14971.
The biggest difference is taking fundamental rights into account in the risk management plan, a concept present neither in ISO 14971 nor in the MDR/IVDR.
Action plan
The white paper continues by stressing the need to update product risk management plans to fully implement the risk management requirements of the AI Act. This task should represent a small amount of work, given how close the two frameworks are.
Thanks, TUV.AI Lab, your work eases the interpretation of the MDR/IVDR and the AI Act.
However...
Indecent Proposal
The proposal for MDR/IVDR changes published on 16 December 2025 puts the cat among the pigeons. It contains this proposed change:
In Annex I to the Artificial Intelligence Act, MDR and IVDR are moved from Section A to Section B.
Why this change? The answer is in AI Act Article 2(2):
For AI systems classified as high-risk AI systems in accordance with Article 6(1) related to products covered by the Union harmonisation legislation listed in Section B of Annex I, only Article 6(1), Articles 102 to 109 and Article 112 apply. Article 57 applies only in so far as the requirements for high-risk AI systems under this Regulation have been integrated in that Union harmonisation legislation.
Deciphering: in practice, MDs and IVDs are kicked out of the AI Act. Manufacturers are only required to apply Article 57 in case of testing in a regulatory sandbox, and to monitor AI Act changes to see whether they will need to apply something in the future.
Consequence: this is why regulatory sandboxes are introduced in the Proposal. It makes it possible to apply AI Act Article 57.
Why no more AI Act?
Two possible interpretations:
- The will of simplification advocates to limit the CE marking process to the MDR/IVDR. Removing medical devices from the AI Act CE marking process is definitely a significant simplification.
- The goals of the AI Act (basically, preserving the fundamental rights of EU citizens) aren't so far from those of the MDR/IVDR: safety, performance and a favorable benefit/risk ratio:
  - One may say it isn't possible to present a favorable benefit/risk ratio while adversely affecting fundamental rights,
  - Likewise, the explainability or interpretability and the trustworthiness or transparency of an AI model enhance the performance of a medical device.
Up to now, we have absolutely no assurance that the text will be kept like this. The change to AI Act Annex I may be discarded. Conversely, more concepts could be added to the MDR and IVDR, for example additional AI-related requirements in the General Safety and Performance Requirements.
There's no need to get lost in conjecture.
Keep calm and wait for the final text voted by the European Parliament.
Consequence for manufacturers: your first emergency action is to do nothing and wait for the final text. Slowing down the MDR/IVDR vs AI Act gap assessment and action plan is probably quite a good decision.
